CN-122017822-A - Radiometer and radar fusion detection method based on multi-scale segmentation

CN 122017822 A

Abstract

The invention provides a radiometer and radar fusion detection method based on multi-scale segmentation, belonging to the technical field of target detection. The method comprises: performing salient-region-based image registration on the SAIR image and the SAR image; denoising with a BM3D filtering algorithm that incorporates Canny edge detection; performing multi-scale superpixel segmentation on the images; performing weighted fusion based on the segmentation results, the superpixel scale, and evidence credibility; and, based on the homologous multi-scale fusion results and an analysis of imaging-mechanism differences and conflict sources, improving the D-S fusion rule according to evidence distance and data-source reliability, thereby improving fusion stability and target detection accuracy under high-conflict conditions. The invention fully integrates the high resolution of the radar with the anti-interference capability of the radiometer, enhances target feature expression, suppresses background interference, improves the quality of the fused image and the target detection performance, and has good application value.

Inventors

  • HU FEI
  • ZHU DONG
  • LI WANLI
  • SU JINLONG

Assignees

  • Huazhong University of Science and Technology (华中科技大学)

Dates

Publication Date
2026-05-12
Application Date
2026-01-08

Claims (8)

  1. A radiometer and radar fusion detection method based on multi-scale segmentation, characterized by comprising the following steps: performing salient-region-based image registration on the synthetic aperture radiometer (SAIR) image and the synthetic aperture radar (SAR) image respectively to obtain a spatially aligned image pair; sequentially performing edge detection and filtering on the registered SAIR and SAR images for denoising; performing multi-scale superpixel segmentation on each denoised image to obtain segmentation results at different scales, and performing weighted fusion combining the superpixel scale and evidence credibility based on the segmentation results to obtain a homologous multi-scale fusion result; and constructing mass functions corresponding to the SAIR image and the SAR image respectively based on the homologous multi-scale fusion results, calculating the conflict factor, and performing multi-source image fusion according to the magnitude of the conflict factor and the conflict-source analysis result to obtain the final fusion detection result.
  2. The radiometer and radar fusion detection method based on multi-scale segmentation according to claim 1, wherein performing salient-region-based image registration on the SAIR image and the SAR image respectively comprises: extracting salient regions from the SAIR and SAR images based on a gray-level threshold and generating binary saliency maps; performing morphological processing on the binary saliency maps and extracting closed regions; calculating the geometric centroid of each closed region as an initial feature point; calculating the similarity between all centroid pairs in the SAIR and SAR images and selecting the pair with the highest similarity as the initial candidate matching points; and extracting local regions from the neighborhoods of the initial candidate matching points with a sliding window, vectorizing them, measuring the regional similarity with the Hamming distance, and completing the determination of the image registration points.
  3. The radiometer and radar fusion detection method based on multi-scale segmentation according to claim 2, wherein the Hamming distance is calculated to measure regional similarity using the following formula: D_H(A, B) = Σ_{i=1}^{N} (a_i ⊕ b_i), wherein D_H(A, B) represents the Hamming distance between the two regional saliency maps, N is the size of the finally matched segmented image (the length of the flattened binary vectors), a_i and b_i are the corresponding vector elements, and ⊕ is the exclusive-or operation; the matching point with the minimum Hamming distance is selected as the optimal matching point, the affine transformation parameters between the images are calculated based on that matching point, and image registration is realized.
  4. The radiometer and radar fusion detection method based on multi-scale segmentation according to claim 3, wherein sequentially performing edge detection and filtering on the registered SAIR and SAR images for denoising specifically comprises: performing Canny edge detection on each image, extracting the edge contour information, and constructing an edge mask; and performing BM3D filtering on the image blocks with the edge mask introduced, to realize denoising.
  5. The radiometer and radar fusion detection method based on multi-scale segmentation according to claim 4, wherein performing multi-scale superpixel segmentation on the denoised images respectively comprises: segmenting each image at a plurality of preset scales with the SLIC superpixel segmentation algorithm, wherein different scales correspond to different initial superpixel sizes and numbers.
  6. The radiometer and radar fusion detection method based on multi-scale segmentation according to claim 5, wherein performing weighted fusion combining the superpixel scale and evidence credibility based on the segmentation results specifically comprises: treating the segmentation results at the multiple scales as different pieces of evidence from the same data source, and assigning to each superpixel region a basic probability assignment (BPA) constructed from a Sigmoid function; calculating the similarity and support between evidence sources based on the Jousselme evidence distance; weighting and normalizing the support according to the number of superpixel segments corresponding to each evidence source to obtain the credibility of each evidence source; and calculating correction coefficients for the evidence sources from the credibilities and performing weighted fusion of the evidence sources to obtain the homologous multi-scale fusion result.
  7. The radiometer and radar fusion detection method based on multi-scale segmentation according to claim 6, wherein performing multi-source image fusion according to the magnitude of the conflict factor and the conflict-source analysis result comprises: calculating the conflict factor K between the mass functions corresponding to the SAIR image and the SAR image; when K is smaller than a first threshold, judging a low-conflict situation and fusing directly with the Dempster combination rule; when K is greater than or equal to the first threshold and smaller than a second threshold, judging a medium-conflict situation, calculating the Jousselme evidence distances between the homologous multi-scale superpixel evidences, removing the evidence source with the largest evidence distance, and re-combining the remaining evidence — if the conflict disappears, adopting the Dempster combination rule; if the conflict persists, weighting the evidence according to the reliabilities of SAIR and SAR and then performing D-S evidence fusion; and when K is greater than or equal to the second threshold, judging a high-conflict situation, analyzing the conflict source based on prior knowledge, and preferentially adopting the information of the more reliable data source.
  8. The radiometer and radar fusion detection method based on multi-scale segmentation according to claim 7, wherein, in the high-conflict situation, when the conflict source is analyzed based on prior knowledge: if the conflict is caused by corner-reflector or sea-clutter interference, the information of the SAIR image is preferentially adopted; and if the conflict is caused by differences in the target edge contours, the information of the SAR image is preferentially adopted.
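The evidence-credibility weighting of claim 6 can be sketched in Python. This is a minimal illustration, not the patent's implementation: the two-element frame of discernment {target, background}, the Sigmoid parameters, and the split of the residual mass onto the full frame are assumptions made for the example.

```python
import math

# Subsets of the assumed frame {target, background}; the full frame is last.
SUBSETS = ["target", "background", "background|target"]

def sigmoid_bpa(mean_intensity, k=10.0, x0=0.5, uncertainty=0.1):
    """Basic probability assignment for one superpixel: a Sigmoid of its
    mean intensity gives the 'target' share; a fixed share stays on the
    full frame to model ignorance (parameters are illustrative)."""
    p = 1.0 / (1.0 + math.exp(-k * (mean_intensity - x0)))
    return {"target": p * (1.0 - uncertainty),
            "background": (1.0 - p) * (1.0 - uncertainty),
            "background|target": uncertainty}

def jaccard(a, b):
    """|A ∩ B| / |A ∪ B| for subsets encoded as 'x|y' strings."""
    sa, sb = set(a.split("|")), set(b.split("|"))
    return len(sa & sb) / len(sa | sb)

def jousselme_distance(m1, m2):
    """d(m1, m2) = sqrt(0.5 * (m1 - m2)^T D (m1 - m2)),
    with D[A][B] = |A ∩ B| / |A ∪ B| (the Jousselme distance)."""
    diff = [m1.get(s, 0.0) - m2.get(s, 0.0) for s in SUBSETS]
    q = sum(diff[i] * jaccard(SUBSETS[i], SUBSETS[j]) * diff[j]
            for i in range(len(SUBSETS)) for j in range(len(SUBSETS)))
    return math.sqrt(0.5 * q)

def credibilities(evidences, block_counts):
    """Support of each evidence source = summed similarity (1 - d) to the
    others, weighted by its superpixel-block count and normalized, in the
    spirit of claim 6."""
    n = len(evidences)
    support = []
    for i in range(n):
        s = sum(1.0 - jousselme_distance(evidences[i], evidences[j])
                for j in range(n) if j != i)
        support.append(s * block_counts[i])
    total = sum(support)
    return [s / total for s in support]
```

Two mutually consistent scales then receive higher credibility than an outlier scale, which is the intended effect of the support weighting.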

Description

Radiometer and radar fusion detection method based on multi-scale segmentation

Technical Field

The invention belongs to the technical field of target detection, and particularly relates to a radiometer and radar fusion detection method based on multi-scale segmentation.

Background

As detection technology continues to evolve toward informatization, the stability and accuracy of target detection in complex environments face challenges. Traditional imaging modes relying on a single sensor are prone to problems such as inaccurate identification and high false-alarm rates under interference or severe weather conditions, and can hardly meet practical application demands. Radar, a common active imaging sensor, has strong penetrating capability and all-weather operating characteristics; however, because it images via electromagnetic-wave reflection, it is strongly affected by target materials and electromagnetic characteristics, is easily interfered with in complex electromagnetic environments, and carries risks of imaging distortion and misjudgment. In contrast, a radiometer is a passive imaging device: it receives the thermal radiation of the target and the environment without emitting electromagnetic waves, and therefore has good concealment and anti-interference capability. In particular, synthetic aperture radiometers (SAIR) can achieve efficient instantaneous imaging through antenna-array interferometry and exhibit superior stability in complex scenarios. However, radiometer imaging resolution is relatively low, and it is difficult for radiometers to provide clear texture and range information. Radar and radiometer are highly complementary in imaging mechanism and characteristics; fusing them is expected to combine resolution with anti-interference performance and to improve the robustness of target detection.
At present, owing to the differences between the two types of sensors in data characteristics, resolution, and information dimension, effective fusion still faces technical difficulties.

Disclosure of the Invention

In view of the above defects or improvement demands of the prior art, the invention provides a radiometer and radar fusion detection method based on multi-scale segmentation, which solves the technical problem that the image information of the radar and the radiometer is difficult to fuse effectively.

To achieve the above object, according to one aspect of the present invention, there is provided a radiometer and radar fusion detection method based on multi-scale segmentation, comprising the following steps: first, performing salient-region-based image registration on the synthetic aperture radiometer (SAIR) image and the synthetic aperture radar (SAR) image respectively to obtain a spatially aligned image pair; second, denoising the registered SAIR and SAR images with a BM3D filtering algorithm that incorporates Canny edge detection; third, performing multi-scale superpixel segmentation on each denoised image to obtain segmentation results at different scales, and performing weighted fusion combining the superpixel scale and evidence credibility based on the segmentation results to obtain a homologous multi-scale fusion result; and fourth, constructing mass functions corresponding to the SAIR image and the SAR image respectively based on the homologous multi-scale fusion results, calculating the conflict factor, and performing multi-source image fusion with an improved Dempster-Shafer evidence fusion rule according to the magnitude of the conflict factor and the conflict-source analysis result, to obtain the final fusion detection result.
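The conflict handling of the fourth step (detailed further in claim 7) can be sketched as follows. This is a hedged illustration, not the patent's implementation: the thresholds, the source reliabilities, and the simplified medium-conflict branch (reliability discounting in place of first removing the most distant homologous evidence) are assumptions made for the example.

```python
from itertools import product

def _intersect(a, b):
    """Intersection of two subsets encoded as 'x|y' strings."""
    common = set(a.split("|")) & set(b.split("|"))
    return "|".join(sorted(common))

def conflict_factor(m1, m2):
    """K = total product mass the two sources assign to disjoint subsets."""
    return sum(m1[a] * m2[b] for a, b in product(m1, m2)
               if not _intersect(a, b))

def dempster(m1, m2):
    """Dempster's rule: conjunctive combination normalized by 1 - K."""
    K = conflict_factor(m1, m2)
    out = {}
    for a, b in product(m1, m2):
        c = _intersect(a, b)
        if c:
            out[c] = out.get(c, 0.0) + m1[a] * m2[b]
    return {s: v / (1.0 - K) for s, v in out.items()}

def discount(m, reliability):
    """Shafer discounting: scale the masses by the source reliability and
    move the remainder onto the full frame."""
    full = "background|target"
    out = {s: reliability * v for s, v in m.items()}
    out[full] = out.get(full, 0.0) + (1.0 - reliability)
    return out

def fuse(m_sair, m_sar, r_sair=0.9, r_sar=0.8, t1=0.5, t2=0.95):
    """Branch on the conflict factor K in the spirit of claim 7; the
    threshold and reliability values are illustrative only."""
    K = conflict_factor(m_sair, m_sar)
    if K < t1:                              # low conflict: combine directly
        return dempster(m_sair, m_sar)
    if K < t2:                              # medium conflict: discount by
        return dempster(discount(m_sair, r_sair),   # source reliability
                        discount(m_sar, r_sar))     # before combining
    # High conflict: prefer the more reliable source (in the patent, prior
    # knowledge about corner reflectors / sea clutter decides which).
    return dict(m_sair if r_sair >= r_sar else m_sar)
```

Mass functions are dicts over subsets of the assumed frame {target, background}; the fused masses remain normalized because Dempster's rule divides out the conflicting mass.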
Preferably, the first step specifically includes: extracting salient regions from the synthetic aperture radar (SAR) image and the synthetic aperture radiometer (SAIR) image based on a gray-level threshold to generate the corresponding binary saliency maps; performing morphological processing on the saliency maps; and extracting the centroid of each closed region as an initial feature point, the centroid coordinates being x_c = (1/n) Σ_{i=1}^{n} x_i and y_c = (1/n) Σ_{i=1}^{n} y_i, where (x_i, y_i) are the pixel coordinates of the closed region and n is the number of its pixels. The similarity between all centroid point pairs in the SAR and SAIR images is then calculated, and the centroid pair with the highest similarity is selected as the initial candidate matching points. A search window is set around each candidate matching point, the local-region saliency maps are compared in a sliding-window manner, the image blocks in the window are flattened into one-dimensional binary vectors, and the Hamming distance is calculated with the following formula to measure the similarity: D_H(A, B) = Σ_{i=1}^{N} (a_i ⊕ b_i), wherein D_H(A, B) represents the Hamming distance between the two regional saliency maps, N is the size of the finally matched segmented image (the length of the flattened vectors), and ⊕ is the exclusive-or operation. The matching point with the minimum Hamming distance is selected as the optimal matching point, and the affine transformation parameters between the images are calculated based on it.
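The centroid and Hamming-distance computations above can be sketched in a few lines. The data representations (regions as lists of (x, y) pixel coordinates, window patches as nested 0/1 lists) are assumptions made for the example, not the patent's data structures.

```python
def centroid(region_pixels):
    """Geometric centroid of one closed salient region:
    x_c = (1/n) * sum(x_i), y_c = (1/n) * sum(y_i)."""
    n = len(region_pixels)
    xc = sum(x for x, _ in region_pixels) / n
    yc = sum(y for _, y in region_pixels) / n
    return xc, yc

def hamming(vec_a, vec_b):
    """D_H(A, B) = sum_i (a_i XOR b_i) over flattened binary vectors."""
    return sum(a ^ b for a, b in zip(vec_a, vec_b))

def best_match(patch, candidates):
    """Flatten the window contents into 1-D binary vectors and return the
    index (and distance) of the candidate with the minimum Hamming
    distance, i.e. the most similar local saliency region."""
    flat = [v for row in patch for v in row]
    dists = [hamming(flat, [v for row in c for v in row])
             for c in candidates]
    i = min(range(len(dists)), key=dists.__getitem__)
    return i, dists[i]
```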