US-12626350-B2 - Ultraviolet light and visible light fusion method for detecting power device
Abstract
The present disclosure provides an ultraviolet light and visible light fusion method for detecting a power device, which includes the steps of: obtaining a visible light image and an ultraviolet light image of a region to be detected, respectively; registering the ultraviolet light image with a pre-trained registration model to obtain a registered ultraviolet light image; decomposing the visible light image and the registered ultraviolet light image by a wavelet function; fusing low-frequency components of the ultraviolet light image and the visible light image to obtain low-frequency fused components; fusing high-frequency components of the visible light image and the ultraviolet light image to obtain high-frequency fused components; and performing inverse wavelet transformation on the low-frequency fused components and the high-frequency fused components to obtain an ultraviolet light and visible light fused image. Implementation of the present disclosure can improve image registration and fusion effects, and has good real-time performance.
Inventors
- Yong Yi
- Yuwen Pan
- Jinqiao DU
- Fan Yang
- Zikang YANG
- Zhimin Li
- Yuhuan Li
- Xu Tan
Assignees
- CHONGQING UNIVERSITY
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2023-04-27
- Priority Date
- 2022-11-11
Claims (7)
- 1. An ultraviolet light and visible light fusion method for detecting a power device, said method comprising the steps of: step S10: obtaining a visible light image of a region to be detected by using a visible light lens, and obtaining an ultraviolet light image of the region to be detected by using an ultraviolet light lens; step S11: registering the ultraviolet light image with a pre-trained registration model to obtain a registered ultraviolet light image; step S12: decomposing the visible light image and the registered ultraviolet light image by a wavelet function to obtain high-frequency components and low-frequency components of the visible light image, and high-frequency components and low-frequency components of the registered ultraviolet light image; step S13: fusing the low-frequency components of the ultraviolet light image and the visible light image in a weighted average way to obtain low-frequency fused components; step S14: performing Canny operator edge detection on the high-frequency components of the visible light image, and fusing the high-frequency components of the visible light image with the high-frequency components of the ultraviolet light image according to a maximum region energy fusion rule to obtain high-frequency fused components; and step S15: performing an inverse wavelet transformation on the low-frequency fused components and the high-frequency fused components, and performing reconstruction to obtain an ultraviolet light and visible light fused image.
- 2. The method according to claim 1, wherein the step S12 further comprises: performing wavelet decomposition on the visible light image and the registered ultraviolet light image using a DB4 wavelet basis function.
- 3. The method according to claim 2, wherein the step S13 further comprises: fusing the low-frequency components of the ultraviolet light image and the visible light image using the following formula: P_L(x, y) = C_v·P_Lv(x, y) + C_u·P_Lu(x, y), where P_L(x, y) is a low-frequency coefficient of the fused image, P_Lv(x, y) is a low-frequency coefficient of the visible light image, P_Lu(x, y) is a low-frequency coefficient of the ultraviolet light image, C_v is a weight of the low-frequency coefficient of the visible light image, and C_u is a weight of the low-frequency coefficient of the ultraviolet light image; C_v and C_u are obtained by pre-calibration, and C_v < C_u.
- 4. The method according to claim 3, wherein the step S14 further comprises: fusing the high-frequency components of the ultraviolet light image and the visible light image using the following formula: P_H(x, y) = P_Hv(x, y) if E_v(x, y) ≥ E_u(x, y), and P_H(x, y) = P_Hu(x, y) if E_v(x, y) < E_u(x, y), where P_H(x, y) is a high-frequency coefficient of the fused image, P_Hv(x, y) is a high-frequency coefficient of the visible light image, P_Hu(x, y) is a high-frequency coefficient of the ultraviolet light image, and E(x, y) denotes the region energy within a 5-pixel by 5-pixel window centered at (x, y).
- 5. The method according to claim 4, further comprising: step S00: pre-establishing and training a registration model, wherein the registration model is an unsupervised registration network, the step S00 comprising: step S001: inputting the visible light image and the ultraviolet light image to be trained into a background region module, and extracting edge features of the visible light image and the ultraviolet light image by an edge detection algorithm, respectively, to obtain an edge feature map of the visible light image and an edge feature map of the ultraviolet light image; step S002: inputting the edge feature map of the visible light image and the edge feature map of the ultraviolet light image into a registration network (R-Net), and obtaining a deformation field by learning from the registration network; step S003: applying the deformation field to the edge feature map of the ultraviolet light image by using a spatial transformation to deform the edge feature map, and obtaining a deformed edge feature map of the ultraviolet light image; and step S004: substituting the edge feature map of the visible light image and the deformed edge feature map of the ultraviolet light image into a loss function to obtain a registration loss value, iteratively optimizing the model by minimizing the loss value, and finally outputting a trained ultraviolet/visible light registration deformation field by the background region module.
- 6. The method according to claim 5, wherein the step S001 further comprises: obtaining a visible light image for training by using a visible light lens, and obtaining an ultraviolet light image for training by using an ultraviolet light lens; preprocessing the visible light image and the ultraviolet light image through a preprocessing module, the preprocessing comprising image size adjustment and information statistics; and extracting, by a Canny edge detection algorithm, the edge features of the preprocessed visible light image and the preprocessed ultraviolet light image, respectively.
- 7. The method according to claim 6, wherein extracting the edge features of the preprocessed visible light image and ultraviolet light image by the Canny edge detection algorithm comprises: performing, by a Gaussian smoothing filter, a weighted average on the gray values of each pixel and its surrounding region in the visible light image and the ultraviolet light image, to filter out high-frequency noise in the visible light image and the ultraviolet light image; after image denoising, calculating gradient amplitudes in horizontal and vertical directions for pixels in the visible light image and the ultraviolet light image by a Sobel operator; refining edges of the visible light image and the ultraviolet light image by non-maximum suppression; and setting a high threshold and a low threshold to determine whether a pixel is an edge point, and determining and connecting the edge points according to the gradient amplitudes to form an outline.
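The fusion rules of claims 1 through 4 can be illustrated in code. The sketch below is not the patented implementation: for brevity it uses a single-level Haar wavelet instead of the DB4 basis named in claim 2, and it assumes illustrative weights C_v = 0.4 and C_u = 0.6 (the patent only requires that the weights be pre-calibrated with C_v < C_u). The 5×5 region-energy selection rule for the high-frequency components follows claim 4.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar decomposition into LL, LH, HL, HH sub-bands."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 2, (a + b - c - d) / 2,
            (a - b + c - d) / 2, (a - b - c + d) / 2)

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2: reconstruct the image from the four sub-bands."""
    h, w = LL.shape
    out = np.zeros((2 * h, 2 * w))
    out[0::2, 0::2] = (LL + LH + HL + HH) / 2
    out[0::2, 1::2] = (LL + LH - HL - HH) / 2
    out[1::2, 0::2] = (LL - LH + HL - HH) / 2
    out[1::2, 1::2] = (LL - LH - HL + HH) / 2
    return out

def region_energy(C, r=2):
    """Sum of squared coefficients in a (2r+1)x(2r+1) window (5x5 for r=2)."""
    P = np.pad(C ** 2, r, mode="edge")
    E = np.zeros_like(C, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            E += P[dy:dy + C.shape[0], dx:dx + C.shape[1]]
    return E

def fuse_uv_visible(vis, uv, c_v=0.4, c_u=0.6):
    """Fuse a visible and a registered ultraviolet image (claims 1, 3, 4)."""
    LLv, LHv, HLv, HHv = haar_dwt2(vis)
    LLu, LHu, HLu, HHu = haar_dwt2(uv)
    # Claim 3: weighted average of the low-frequency coefficients, c_v < c_u.
    LLf = c_v * LLv + c_u * LLu
    # Claim 4: pick the coefficient with the larger 5x5 region energy.
    def pick(Cv, Cu):
        return np.where(region_energy(Cv) >= region_energy(Cu), Cv, Cu)
    return haar_idwt2(LLf, pick(LHv, LHu), pick(HLv, HLu), pick(HHv, HHu))
```

As a sanity check, fusing an image with itself returns the image unchanged, since the low-frequency weights sum to one and the energy rule then selects identical coefficients.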
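Claim 7 spells out the four classical Canny stages: Gaussian smoothing, Sobel gradients, non-maximum suppression, and double thresholding with edge linking. A compact NumPy sketch of those stages follows; the 5×5 binomial kernel and the relative threshold values are illustrative assumptions, not taken from the patent, and the `np.roll`-based edge linking wraps at image borders, which is acceptable only for a sketch.

```python
import numpy as np

def filter2(img, k):
    """Cross-correlate img with kernel k, edge-padded to keep the size."""
    r = k.shape[0] // 2
    P = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += k[i, j] * P[i:i + H, j:j + W]
    return out

def canny_edges(img, lo=0.1, hi=0.3):
    # 1) Gaussian smoothing: weighted average over each pixel's neighborhood
    #    to filter out high-frequency noise (5x5 binomial approximation).
    g1 = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
    g = np.outer(g1, g1)
    smooth = filter2(img, g / g.sum())
    # 2) Sobel gradient amplitudes in horizontal and vertical directions.
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gx, gy = filter2(smooth, sx), filter2(smooth, sx.T)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    # 3) Non-maximum suppression: keep a pixel only if it is a local maximum
    #    along its (quantized) gradient direction.
    H, W = img.shape
    nms = np.zeros_like(mag)
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            a = ang[y, x]
            if a < 22.5 or a >= 157.5:
                n1, n2 = mag[y, x - 1], mag[y, x + 1]
            elif a < 67.5:
                n1, n2 = mag[y - 1, x + 1], mag[y + 1, x - 1]
            elif a < 112.5:
                n1, n2 = mag[y - 1, x], mag[y + 1, x]
            else:
                n1, n2 = mag[y - 1, x - 1], mag[y + 1, x + 1]
            if mag[y, x] >= n1 and mag[y, x] >= n2:
                nms[y, x] = mag[y, x]
    if nms.max() == 0:
        return np.zeros_like(nms, dtype=bool)
    # 4) High/low thresholds + hysteresis: strong edges are kept outright;
    #    weak edges survive only if 8-connected to a strong edge.
    strong = nms >= hi * nms.max()
    weak = nms >= lo * nms.max()
    edges = strong.copy()
    while True:
        grown = np.zeros_like(edges)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                grown |= np.roll(np.roll(edges, dy, axis=0), dx, axis=1)
        new = edges | (weak & grown)
        if new.sum() == edges.sum():
            return new
        edges = new
```

On a synthetic vertical step image, the detector marks edge points only in the columns adjacent to the step, which matches the outline-forming behavior described in claim 7.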
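Step S003 of claim 5 applies the learned deformation field to the ultraviolet edge feature map by a spatial transformation. A minimal NumPy sketch of such a warp is given below, assuming a dense per-pixel displacement field (dy, dx) and bilinear interpolation; the patent does not fix the field layout or interpolation scheme, so both are illustrative choices here.

```python
import numpy as np

def warp_with_field(img, flow):
    """Resample img at (y + dy, x + dx) with bilinear interpolation.

    img:  (H, W) feature map, e.g. the ultraviolet edge feature map.
    flow: (H, W, 2) deformation field; flow[y, x] = (dy, dx) displacement.
    """
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    sy = np.clip(ys + flow[..., 0], 0, H - 1)  # sampling coordinates,
    sx = np.clip(xs + flow[..., 1], 0, W - 1)  # clamped to the image
    y0 = np.floor(sy).astype(int)
    x0 = np.floor(sx).astype(int)
    y1 = np.minimum(y0 + 1, H - 1)
    x1 = np.minimum(x0 + 1, W - 1)
    wy, wx = sy - y0, sx - x0                  # bilinear weights
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])
```

In training (steps S002 through S004), R-Net would predict `flow` from the two edge feature maps, and the loss would compare the warped ultraviolet map against the visible-light map; a real implementation would typically use a deep-learning framework's grid-sampling operation rather than this NumPy version.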
Description
CROSS REFERENCE TO RELATED APPLICATIONS

This patent application is a National Stage Application, filed under 35 U.S.C. § 371, of International Application No. PCT/CN2023/091015, filed Apr. 27, 2023, which international application claims priority to and the benefit of Chinese Application No. 202211410753.0, filed Nov. 11, 2022; the contents of both of which are hereby incorporated by reference in their entireties.

BACKGROUND

Technical Field

The present disclosure relates to the technical field of power detection, and in particular to an ultraviolet light and visible light fusion method for detecting a power device.

Description of Related Art

The statements in this section only provide background information related to the present disclosure and do not necessarily constitute prior art.

Corona discharge often occurs on the surfaces of transmission lines and polluted insulators and is accompanied by sound, light and heat effects, posing hidden dangers to the safe operation of a power system. Therefore, accurately detecting the occurrence of corona and locating its position in time is of great significance for saving energy resources and ensuring the stable operation of the power system. Ultraviolet detection technology can detect the weak discharge of faulty lines or insulators without approaching the power device; it offers a long measuring distance and is non-contact, real-time, safe and reliable, and is therefore widely used in routine inspection of power devices. In the development of an ultraviolet imager, the registration and fusion of ultraviolet and visible light images is the key to accurately locating the discharge position.
At present, registration and fusion methods for ultraviolet and visible light images include a gray-level registration method, an image registration method based on a camera model and perspective transformation between cameras, and an image fusion algorithm based on independent component analysis (ICA). Although these algorithms have certain effects, they still suffer from shortcomings such as a poor registration effect, serious loss of image information and poor real-time performance.

BRIEF SUMMARY

The technical problem to be solved by the present disclosure is addressed as follows: the present disclosure provides an ultraviolet light and visible light fusion method for detecting a power device, which can improve image registration and fusion effects and has good real-time performance.

In order to solve the above technical problems, the present disclosure provides an ultraviolet light and visible light fusion method for detecting a power device, including:

step S10: obtaining a visible light image of a region to be detected by using a visible light lens, and obtaining an ultraviolet light image of the region to be detected by using an ultraviolet light lens;
step S11: registering the ultraviolet light image with a pre-trained registration model to obtain a registered ultraviolet light image;
step S12: decomposing the visible light image and the registered ultraviolet light image by a wavelet function to obtain high-frequency components and low-frequency components of the visible light image and of the registered ultraviolet light image;
step S13: fusing the low-frequency components of the ultraviolet light image and the visible light image in a weighted average way to obtain low-frequency fused components;
step S14: performing Canny operator edge detection on the high-frequency components of the visible light image, and fusing the high-frequency components of the visible light image with the high-frequency components of the ultraviolet light image according to a maximum region energy fusion rule to obtain high-frequency fused components; and
step S15: performing inverse wavelet transformation on the low-frequency fused components and the high-frequency fused components, and performing reconstruction to obtain an ultraviolet light and visible light fused image.

Preferably, the step S12 further includes: performing wavelet decomposition on the visible light image and the registered ultraviolet light image using a DB4 wavelet basis function.

Preferably, the step S13 further includes: fusing the low-frequency components of the ultraviolet light image and the visible light image using the following formula: P_L(x, y) = C_v·P_Lv(x, y) + C_u·P_Lu(x, y), where P_L(x, y) is a low-frequency coefficient of the fused image, P_Lv(x, y) is a low-frequency coefficient of the visible light image, P_Lu(x, y) is a low-frequency coefficient of the ultraviolet light image, C_v is a weight of the low-frequency coefficient of the visible light image, and C_u is a weight of the low-frequency coefficient of the ultraviolet light image; C_v and C_u are obtained by pre-calibration, and C_v < C_u.

Preferably, the step S14 further includes: fusing the high-frequency components of the ultraviolet light image