CN-121999201-A - Method and system for detecting small targets of the secondary terminal of a current transformer based on improved YOLOv10

CN-121999201-A

Abstract

The invention belongs to the technical field of high-voltage electrical testing of electric power materials, and specifically relates to a method and system for detecting small targets of the secondary terminal of a current transformer based on improved YOLOv10. Starting from the YOLOv10 backbone architecture, the invention performs backbone network replacement, detection-head expansion and loss function optimization for the particular detection requirements of current transformer secondary terminals, and enhances the feature extraction capability by adding sampling feature points while keeping the model lightweight. This achieves high-precision, real-time detection of small secondary terminal targets and provides reliable target positioning and type identification for the automatic docking of a mechanical arm, so that the arm can complete docking with the secondary terminal automatically, replacing traditional manual operation and advancing fully unmanned high-voltage electrical testing of distribution network materials such as high-voltage current transformers.

Inventors

  • DAI JIANZHUO
  • LI CHENGGANG
  • WANG JIAN
  • YANG WEIXING
  • HAN TINGWEI
  • ZHU JINWEI
  • WANG YAGUANG
  • MAO DANCHEN
  • CHEN YUTONG
  • CHU ZHAOJIE
  • LI JIANSHENG
  • CHEN JIE
  • TAO JIAGUI

Assignees

  • Jiangsu Electric Power Test & Research Institute Co., Ltd. (江苏省电力试验研究院有限公司)
  • Electric Power Research Institute of State Grid Jiangsu Electric Power Co., Ltd. (国网江苏省电力有限公司电力科学研究院)

Dates

Publication Date
2026-05-08
Application Date
2026-01-27

Claims (10)

  1. A current transformer secondary terminal small target detection system based on improved YOLOv10, characterized by comprising an image acquisition module, a data preprocessing module, a model training module, a target detection module and a docking control module; the image acquisition module is used for acquiring images of the secondary terminal of a current transformer; the data preprocessing module is used for receiving the secondary terminal images generated by the image acquisition module, preprocessing the images, labeling the preprocessed images and constructing a data set; the model training module is used for receiving the data set generated by the data preprocessing module and constructing a secondary terminal small target detection model based on the improved YOLOv10 model; the target detection module is used for receiving the image to be detected generated by the image acquisition module and the secondary terminal small target detection model generated by the model training module, and inputting the image to be detected into the model to generate the position coordinates and type information of the secondary terminal; and the docking control module comprises a controller and a mechanical arm driving unit, wherein the controller receives the position coordinates and type information of the secondary terminal generated by the target detection module and generates a control instruction accordingly, and the mechanical arm driving unit receives the control instruction and drives the mechanical arm to move along a preset path to complete automatic docking with the secondary terminal.
  2. A method for detecting small targets of the secondary terminal of a current transformer based on improved YOLOv10, implemented on the detection system according to claim 1, characterized by comprising the following steps: S1, acquiring images of the secondary terminal of a current transformer and constructing a current transformer secondary terminal detection data set; S2, improving the original YOLOv10 model, the improvement comprising S21 replacing the backbone network, S22 adding and removing head-network detection branches, S23 adding sampling feature points, and S24 optimizing the loss function; S3, constructing the improved YOLOv10 model according to the improvements of step S2; S4, inputting the data set obtained in step S1 into the improved YOLOv10 model constructed in step S3, configuring the model training parameters, performing iterative training and performance verification on the improved YOLOv10 model, and obtaining the secondary terminal small target detection model after verification passes; S5, acquiring an image to be detected of the secondary terminal of a current transformer with a dual-camera assembly, and inputting the image into the secondary terminal small target detection model obtained in step S4 to obtain the position coordinates and interface type of the secondary terminal interface; and S6, generating a control instruction according to the position coordinates and interface type information obtained in step S5 to drive the mechanical arm to move along a preset path and complete automatic docking with the secondary terminal.
  3. The secondary terminal small target detection method according to claim 2, wherein in step S1 the images in the current transformer secondary terminal detection data set are marked with bounding boxes and class labels and divided into a training set, a validation set and a test set at a ratio of 8:1:1, and the data set comprises current transformer secondary terminal images under different illumination conditions, working conditions and shapes.
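The 8:1:1 split of claim 3 can be sketched in plain Python (a minimal illustration only; the file names and the random seed are assumptions, not part of the patent):

```python
import random

def split_dataset(samples, ratios=(0.8, 0.1, 0.1), seed=42):
    """Shuffle labeled samples and split into train/val/test at the given ratios."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    items = list(samples)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n = len(items)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (items[:n_train],                  # training set (80%)
            items[n_train:n_train + n_val],   # validation set (10%)
            items[n_train + n_val:])          # test set (10%)

train, val, test = split_dataset([f"terminal_{i:04d}.jpg" for i in range(1000)])
```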
  4. The secondary terminal small target detection method according to claim 3, wherein in step S2 the backbone replacement of S21 replaces the YOLOv10 backbone network with the lightweight MobileNetV3 architecture, the MobileNetV3 module comprising depthwise separable convolutions and an SE channel attention mechanism, with the following specific flow: S211, performing a 1×1 convolution on the input secondary terminal image to expand the number of channels of the feature map; S212, performing a depthwise convolution in the expanded high-dimensional space to extract spatial features of the secondary terminal such as position and outline; S213, weighting and optimizing the feature map produced by the depthwise convolution with the SE attention mechanism; S214, applying a 1×1 convolution with a linear activation function to the SE-optimized feature map to reduce the number of channels; and S215, when the stride is 1, connecting the input and the feature map produced by the 1×1 convolution with a residual connection, and when the stride is 2, downsampling the feature map.
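The SE channel attention of step S213 can be illustrated numerically without a deep learning framework (a sketch with hypothetical 2×2 feature maps; the identity excitation, in place of MobileNetV3's two learned fully connected layers, is an assumption for clarity):

```python
import math

def se_attention(channels):
    """Squeeze-and-Excitation over a list of per-channel 2D feature maps:
    squeeze each channel to its global average, pass it through a sigmoid
    gate, and rescale the channel by the resulting weight."""
    # Squeeze: global average pooling per channel.
    means = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in channels]
    # Excitation: identity bottleneck + sigmoid gate (real MobileNetV3
    # learns two FC layers around a nonlinearity here).
    weights = [1.0 / (1.0 + math.exp(-m)) for m in means]
    # Scale: reweight each channel by its gate value.
    return [[[v * w for v in row] for row in ch] for ch, w in zip(channels, weights)]

# Two hypothetical 2x2 channels: one strongly activated, one flat.
fmap = [[[4.0, 4.0], [4.0, 4.0]],
        [[0.0, 0.0], [0.0, 0.0]]]
out = se_attention(fmap)
```

The strongly activated channel keeps nearly all of its magnitude while the flat channel stays suppressed, which is the channel-reweighting effect the claim relies on.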
  5. The secondary terminal small target detection method according to claim 4, wherein in step S2 the head network is adjusted in S22 as follows: the large target detection module of the original YOLOv10 is deleted, the medium and small target detection modules are retained, and an ultra-small target detection module is added, wherein the medium target detection module generates a 40×40 feature map through 16× downsampling, the small target detection module generates an 80×80 feature map through 8× downsampling, and the ultra-small target detection module generates a 160×160 feature map through 4× downsampling.
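The three head scales of claim 5 follow directly from the downsampling factors, assuming the standard 640×640 YOLO input resolution (the patent does not state the input size):

```python
def head_grid_sizes(input_size, strides):
    """Feature-map grid size at each detection head: input size divided by its stride."""
    return {s: input_size // s for s in strides}

# Ultra-small (4x), small (8x) and medium (16x) target heads of claim 5.
grids = head_grid_sizes(640, (4, 8, 16))  # {4: 160, 8: 80, 16: 40}
```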
  6. The secondary terminal small target detection method according to claim 5, wherein in step S2 the sampling feature points are added in S23 as follows: five additional sampling feature points are set to obtain accurate terminal features, comprising first to fifth sampling feature points; the first sampling feature point is located at the geometric center of the secondary terminal to locate the position coordinates of the terminal as a whole; the second sampling feature point is located at the upper-left corner edge inflection point of the secondary terminal, and is used to identify the outline starting point of the terminal and distinguish it from background interference; the third sampling feature point is located at the lower-right corner edge inflection point of the secondary terminal, and together with the second sampling point generates the rectangular outline of the terminal, from which its aspect ratio is calculated; the fourth sampling feature point is located at the geometric center of the wiring hole at the top of the secondary terminal; and the fifth sampling feature point is located at the connecting inflection point between the bottom pin of the secondary terminal and the terminal body.
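The aspect ratio derived from the second and third sampling points in claim 6 is simple coordinate arithmetic (a minimal sketch; the example coordinates are hypothetical):

```python
def terminal_aspect_ratio(p_upper_left, p_lower_right):
    """Width/height of the rectangular outline spanned by the second
    (upper-left) and third (lower-right) sampling feature points."""
    width = p_lower_right[0] - p_upper_left[0]
    height = p_lower_right[1] - p_upper_left[1]
    if width <= 0 or height <= 0:
        raise ValueError("lower-right point must lie below and to the right")
    return width / height

ratio = terminal_aspect_ratio((10.0, 20.0), (30.0, 60.0))  # width 20, height 40 -> 0.5
```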
  7. The secondary terminal small target detection method according to claim 6, wherein in step S2 the loss function is optimized in S24 as follows: the IoU is optimized by adding a corner distance constraint and a center Manhattan distance constraint, the specific steps comprising: S241, inputting the predicted box coordinates (x1^p, y1^p, x2^p, y2^p) and the target box coordinates (x1^g, y1^g, x2^g, y2^g), wherein (x1^p, y1^p) are the upper-left corner coordinates of the predicted box, (x2^p, y2^p) the lower-right corner coordinates of the predicted box, (x1^g, y1^g) the upper-left corner coordinates of the target box, and (x2^g, y2^g) the lower-right corner coordinates of the target box; S242, setting the intersection region coordinates as (x1^I, y1^I, x2^I, y2^I), where x1^I = max(x1^p, x1^g) and y1^I = max(y1^p, y1^g), x2^I = min(x2^p, x2^g) and y2^I = min(y2^p, y2^g), and calculating the intersection area I of the predicted box and the target box: I = w_I × h_I, wherein w_I = max(0, x2^I − x1^I) is the intersection region width and h_I = max(0, y2^I − y1^I) the intersection region height; S243, calculating the union region area U of the predicted box and the target box: U = w_p × h_p + w_g × h_g − I, wherein w_p is the predicted box width, h_p the predicted box height, w_g the target box width and h_g the target box height; S244, calculating the sum of squared corner distances between the predicted box and the target box: D_c = (x1^p − x1^g)² + (y1^p − y1^g)² + (x2^p − x2^g)² + (y2^p − y2^g)²; S245, normalizing by the squared diagonal of the minimum enclosing bounding box, c² = w_c² + h_c², wherein w_c is the enclosing box width and h_c the enclosing box height, so that the loss value is not affected by the image scale or bounding box size; and S246, calculating the Manhattan distance between the target box center and the predicted box center: D_m = |cx^p − cx^g| + |cy^p − cy^g|, wherein (cx^p, cy^p) and (cx^g, cy^g) are the predicted and target box centers; the optimized IoU loss function is then calculated from the IoU term I/U, the corner distance term D_c normalized by c², and the center Manhattan distance term D_m.
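A runnable sketch of the optimized loss described in steps S241 to S246. The exact combination of the three terms is not legible in the published text; the form 1 − I/U + D_c/c² + D_m/(w_c + h_c), with the Manhattan term normalized by the enclosing-box half-perimeter, is an assumption:

```python
def optimized_iou_loss(pred, target):
    """IoU loss with a corner distance constraint (normalized by the squared
    diagonal of the minimum enclosing box, per S245) and a center Manhattan
    distance constraint (per S246; its normalizer here is assumed).
    Boxes are (x1, y1, x2, y2) with (x1, y1) the upper-left corner."""
    px1, py1, px2, py2 = pred
    gx1, gy1, gx2, gy2 = target
    # S242: intersection area.
    wi = max(0.0, min(px2, gx2) - max(px1, gx1))
    hi = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = wi * hi
    # S243: union area.
    union = (px2 - px1) * (py2 - py1) + (gx2 - gx1) * (gy2 - gy1) - inter
    # S244: sum of squared corner distances.
    d_corner = ((px1 - gx1) ** 2 + (py1 - gy1) ** 2
                + (px2 - gx2) ** 2 + (py2 - gy2) ** 2)
    # S245: squared diagonal of the minimum enclosing box.
    wc = max(px2, gx2) - min(px1, gx1)
    hc = max(py2, gy2) - min(py1, gy1)
    c2 = wc ** 2 + hc ** 2
    # S246: Manhattan distance between box centers.
    d_man = (abs((px1 + px2) - (gx1 + gx2)) + abs((py1 + py2) - (gy1 + gy2))) / 2.0
    return 1.0 - inter / union + d_corner / c2 + d_man / (wc + hc)

loss = optimized_iou_loss((0, 0, 10, 10), (0, 0, 10, 10))  # identical boxes -> 0.0
```

As with CIoU-style losses, both penalty terms are scale-free, so the loss value does not depend on the image resolution.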
  8. The secondary terminal small target detection method according to claim 7, wherein in step S3 the improved YOLOv10 model comprises a backbone network, a neck network and a head network, constructed as follows: S31, constructing the backbone network, which comprises an initial convolution layer and first to sixth MobileNetV3 modules; features of the image are extracted through the 3×3 initial convolution layer to obtain a preliminary feature map, and the first to sixth MobileNetV3 modules sequentially extract depth features of the preliminary feature map to generate multi-scale intermediate features; S32, constructing the neck network, which comprises first to fourth feature fusion modules denoted F1 to F4, first to fourth ghost convolutions denoted G1 to G4, first and second upsampling modules denoted U1 and U2, first to fourth feature concatenation modules denoted C1 to C4, and first and second convolution layers denoted Conv1 and Conv2; and S33, constructing the head network, which performs target detection at different scales on the secondary terminal of the current transformer based on the neck network features processed in step S32, and comprises first to third dual detection heads corresponding to the ultra-small, small and medium target detection modules, each dual detection head comprising a one-to-many detection head H1 and a one-to-one detection head H2, with classification and regression of the feature maps performed on H1 and H2 to complete target detection of the secondary terminal of the current transformer.
  9. The secondary terminal small target detection method according to claim 8, wherein step S4 comprises the following specific steps: S41, configuring training parameters for the improved YOLOv10 model, wherein the training batch size is set to 16, the initial learning rate to 0.001 with a cosine annealing learning rate decay strategy and a decay period of 10 epochs, the total number of training epochs to 150, the momentum parameter to 0.9, and the weight decay coefficient to 0.0005; S42, inputting the training set data acquired in step S1 into the improved YOLOv10 model constructed in step S3 in batches of 16; S43, iteratively training the improved YOLOv10 model, computing the loss of the secondary terminal predictions produced by the model with the optimized loss function of step S2, computing parameter gradients by backpropagation and iteratively updating the model weights with a stochastic gradient descent (SGD) optimizer until the loss function converges; and S44, verifying the performance of the trained improved YOLOv10 model with the validation set data obtained in step S1, and, if the model's recognition accuracy for the current transformer secondary terminal meets the docking requirements of the mechanical arm, obtaining the final secondary terminal small target detection model used for mechanical arm docking with the secondary terminal of the current transformer.
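The cosine annealing schedule of S41 (initial rate 0.001, 10-epoch decay period) can be written out directly; the minimum learning rate of 0 and the restart at each period boundary are assumptions, as the patent does not specify them:

```python
import math

def cosine_annealing_lr(epoch, lr_max=0.001, lr_min=0.0, period=10):
    """Cosine annealing with restarts: within each period the learning rate
    falls from lr_max to lr_min along a half cosine, then restarts."""
    t = epoch % period
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * t / period))

# Learning rate over the 150 training epochs of S41.
schedule = [cosine_annealing_lr(e) for e in range(150)]
```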
  10. The secondary terminal small target detection method according to any one of claims 2 to 9, wherein in step S5 the two cameras comprise a global camera and a local camera, the global camera is fixed at the high-voltage electrical test detection station with a horizontal viewing angle range of 0 to 90°, and the local camera is mounted directly above the end of the mechanical arm with a horizontal viewing angle range of 0 to 40°.

Description

Method and system for detecting small targets of the secondary terminal of a current transformer based on improved YOLOv10

Technical Field

The invention belongs to the technical field of high-voltage electrical testing of electric power materials, and specifically relates to a method and system for detecting small targets of the secondary terminal of a current transformer based on improved YOLOv10.

Background

In high-voltage electrical tests of 10 kV and 35 kV power distribution equipment (such as distribution transformers, circuit breakers, pole-mounted switches and high-voltage current transformers), the prior art has achieved robotic automation of primary wiring and disconnection of the tested product, but the handling of secondary terminals and temperature measurement lines still depends on manual operation: an operator must first find the corresponding secondary signal line interface on the equipment and then manually connect the signal line to the terminal. The secondary signal line interfaces on power distribution equipment are usually small in size and large in number, and manual wiring efficiency is low; especially in large-scale installation or maintenance scenarios, a large amount of manpower and time is required, operators must work around the equipment, and safety hazards such as electric shock exist. To improve production efficiency and ensure safe production, it is highly necessary to apply automatic wiring devices to secondary wiring in the power distribution field. However, when an automatic wiring device is used for secondary wiring, the technical problem of target detection must be solved.
The secondary terminals of distribution network materials such as current transformers are millimeter-scale in size and non-uniform in type, some fixed by screws and some by pins, and they are often deployed in complex environments such as substation switch cabinets and distribution boxes where uneven illumination, weak light and reflections occur. When existing target detection algorithms process secondary terminal targets against such complex backgrounds, they struggle to distinguish the terminals from the background or from similar targets; the detection precision is low and cannot meet the positioning and type recognition accuracy required for mechanical arm docking by an automatic wiring device, so mechanical arm operation is difficult to realize. YOLO (You Only Look Once) is the most representative real-time detection algorithm framework in the field of target detection: a deep learning model designed to efficiently predict multiple bounding boxes in an image together with the probabilities of the corresponding classes. YOLO therefore has remarkable advantages in processing speed and is particularly suitable for industrial scenarios requiring real-time detection capability, but it has performance bottlenecks when detecting small targets. To improve small target detection performance, the prior art mostly retains the main YOLO framework while optimizing specific modules, with improvements in head expansion, loss function optimization, enhanced feature extraction and the like.
Chinese patent CN120014291A discloses a small target detection model optimization method and detection method based on YOLOv10. Its improvement in head expansion is to add a small object detection head with a resolution of 160×160; in enhanced feature extraction, a multi-scale fusion structure based on a dynamic upsampler and a time-frequency domain feature extraction module improves the neck network, capturing spatial and frequency information at different layers by extracting features in both the time domain and the frequency domain. Chinese patent CN121095551A discloses a multi-scale target detection method for unmanned aerial vehicle aerial images based on improved YOLOv. Its improvement in head expansion is to add a P2 detection layer of size 160×160 to the head network and remove the 20×20 P5 detection layer; in loss function optimization, an EMASlideLoss loss function is designed that integrates an exponential moving average (EMA) algorithm to dynamically smooth loss values, indirectly improving the model's detection performance on difficult samples through a stabilized gradient update process; in enhanced feature extraction, a multi-scale edge information enhancement (MEE) module is designed, the extraction efficiency and accuracy of target