CN-122023846-A - Heterogeneous template matching method for visual angle correction of image guidance head
Abstract
The invention discloses a heterogeneous template matching method for view-angle correction of an image seeker. The method uses inertial navigation data to correct a pre-stored template in real time, effectively compensating the view-angle distortion, rotation and scale changes caused by variations in flight attitude and altitude, and overcoming the limitation of large view-angle differences. Matching parameters and network weights are adjusted in real time through image-difference feedback and matching-confidence monitoring, achieving self-adaptive optimization in complex dynamic environments. A lightweight, efficient feature network is designed, markedly improving robustness to heterogeneous images and to extreme view-angle and rotation transformations; in particular, the method resolves the bottlenecks of feature degradation and matching failure at large rotation angles that afflict existing methods, and therefore has strong engineering application value.
Inventors
- ZHANG LINGLING
- LI LEPING
- LIU ZHEN
- WANG SHIKAI
- HUANG PENG
- YANG WEIPING
- LU XIN
- ZHOU WEI
- LIU JIEYING
- ZHOU WENJIE
- YANG YONGFU
- YANG YONGDA
Assignees
- Hunan Huanan Optoelectronics (Group) Co., Ltd. (湖南华南光电(集团)有限责任公司)
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2026-01-30
Claims (8)
- 1. A heterogeneous template matching method for view-angle correction of an image seeker, characterized by comprising the following implementation steps: S1, data acquisition and template preparation: acquiring aerial images of the target area while the unmanned aerial vehicle holds a vertical or near-vertical attitude to prepare the template, and synchronously recording the shooting height; S2, parameter calibration: performing intrinsic-parameter calibration of the seeker's infrared imaging system with a checkerboard calibration method; S3, template loading: loading the template image and its corresponding shooting-height data; S4, pose-data acquisition: acquiring in real time the carrier attitude angles and flight altitude provided by the inertial navigation device; S5, dynamic template correction: fusing multi-source data such as the real-time attitude angles, flight altitude and imaging-system intrinsic parameters provided by inertial navigation, and re-projecting the pre-stored template image to the current seeker imaging view through a template correction algorithm to generate a corrected template image; S6, correction evaluation: computing the image similarity between the real-time image and the corrected template image, and triggering re-correction if the gray-error measure exceeds a threshold; S7, feature extraction: performing keypoint detection and feature-descriptor computation on the corrected template image and the real-time image with a lightweight feature-point extraction network; S8, feature matching and positioning: performing dense matching on the extracted feature points and outputting the predicted position of the target in the real-time image; S9, error feedback: the system computes the normalized cross-correlation peak of each frame in real time as the matching confidence, judges the matching reliability to be insufficient when the peak stays below a threshold of 0.8 for 5 consecutive frames, and automatically triggers a parameter-optimization flow; otherwise it outputs the position of the target in the real-time image; S10, offline optimization: in the model-training stage, performing targeted optimization of the feature network's attention module using successful matching data from the task scene.
- 2. The heterogeneous template matching method for image-seeker view-angle correction according to claim 1, wherein the imaging-system intrinsic parameters comprise the focal length and the horizontal and vertical pixel sizes.
- 3. The heterogeneous template matching method for image-seeker view-angle correction according to claim 2, wherein the attitude angles comprise the pitch angle, yaw angle and roll angle.
- 4. The heterogeneous template matching method for image-seeker view-angle correction according to claim 1, wherein the pre-stored template image is corrected in step S5 as follows: (1) Optical-axis pitch deformation correction: from the pitch angle and yaw angle, the projection angle between the optical axis and the ground is computed, a pixel mapping is established between the reference-image coordinate system and the current-image coordinate system, and the stretching deformation caused by pitching is eliminated; with the imaging system placed in a reference-position coordinate system and a current-position coordinate system, each with its associated image coordinate system and optical axis, any ground point imaged at a point in the current image corresponds to a point in the reference image; expressing the ground point's coordinates first in the reference-position coordinate system and then in the current-position coordinate system yields the coordinates of the corresponding image point in the current imaging coordinate system and, finally, in the current image coordinate system; (2) Rotational deformation correction: combining the yaw angle and roll angle, an inverse rotation of the image about the optical axis corrects the rotational deformation caused by aircraft steering; a point in the reference-image coordinate system is rotated clockwise about the optical axis by the total angle to obtain the corrected point; (3) Height scaling correction: based on the height ratio h/h0, scale compensation is applied to the image to offset the scale difference caused by the altitude change, mapping each image point to its corrected image point after the height change; (4) Hyperbolic smoothing: neighboring pixel values are weighted by a hyperbolic function whose smoothing factor suppresses the sawtooth (aliasing) effect caused by quantization error; (5) Comprehensive mapping and interpolation optimization: a hyperbolic interpolation algorithm optimizes the pixel-coordinate conversion, reducing information loss and improving computational efficiency, and establishes a bidirectional mapping between any pixel coordinate under the current imaging condition and the corresponding coordinate under the reference imaging condition.
- 5. The heterogeneous template matching method for image-seeker view-angle correction according to claim 1, wherein in step S6 the image similarity is calculated as the mean gray-level error between the infrared real-time image acquired by the seeker and the corrected template image: E = (1/N) · Σᵢ |Tᵢ − Iᵢ|, where Tᵢ is the gray value of the i-th pixel in the corrected template image, Iᵢ is the gray value of the i-th pixel in the real-time image, and N is the total number of image pixels; if E exceeds the threshold, inertial-navigation data recalibration or interpolation-parameter optimization is triggered and the template image is re-corrected.
- 6. The heterogeneous template matching method for image-seeker view-angle correction according to claim 1, wherein in step S7 the lightweight feature-point extraction network consists of 6 basic convolution blocks whose channel widths increase stage by stage while the spatial resolution is halved step by step.
- 7. The heterogeneous template matching method for image-seeker view-angle correction according to claim 6, wherein the keypoint detection and feature-descriptor computation of step S7 are implemented as follows: (1) Descriptor-generation branch: the feature maps of three scales produced by the feature-pyramid encoder are each fed into a channel-attention (SE) module; the two coarser scales are then up-sampled so that all three feature maps match the size of the first-scale feature map, and the three maps are summed element-wise to achieve multi-scale feature fusion; finally, a convolution fusion block formed of three basic layers further integrates the representation and outputs the final feature-description map, while an independent convolution block regresses a reliability heat map that quantifies the matchable-confidence probability of local features; (2) Keypoint-detection branch: an independent parallel architecture divides the image into a grid, converts each grid cell into a 64-dimensional vector, and directly predicts sub-pixel keypoint coordinates through four layers of 1×1 convolution.
- 8. The heterogeneous template matching method for image-seeker view-angle correction according to claim 1, wherein the dense matching of step S8 is performed as follows: for each feature pair obtained by coarse matching, the concatenated features are processed by a multi-layer perceptron to predict the pixel-level offset; the offset is determined by classification, yielding the correct pixel-level match at the original image resolution, where the classification output characterizes a probability distribution over the space of possible offsets.
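The rotation and height-scaling corrections of claim 4 (steps 2 and 3) can be sketched as a single coordinate mapping. This is a minimal illustration, not the patent's actual formulas: the clockwise-rotation sign convention, the h0/h scaling direction, and the function name are all assumptions.

```python
import numpy as np

def correct_template_coords(pts, total_angle_rad, h, h0):
    """Map reference-template pixel coordinates (taken relative to the
    optical axis) toward the current imaging view: a clockwise rotation
    about the optical axis by the combined yaw+roll angle, followed by
    scaling with the height ratio h0/h (sketch of claim 4, steps 2-3)."""
    c, s = np.cos(total_angle_rad), np.sin(total_angle_rad)
    rot = np.array([[c, s],      # clockwise 2-D rotation matrix
                    [-s, c]])
    scaled = (h0 / h) * (rot @ np.asarray(pts, dtype=float).T)
    return scaled.T

# a point 100 px right of the optical axis, 90-degree rotation, doubled altitude
pt = correct_template_coords([[100.0, 0.0]], np.pi / 2, h=200.0, h0=100.0)
```

In this convention the point rotates onto the negative vertical axis and shrinks by half, consistent with the template appearing smaller when the carrier is higher than at template-capture time.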
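Claim 5's similarity test reduces to a mean absolute gray-level difference followed by a threshold check. A minimal sketch; the threshold value and function names below are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def gray_error_mean(template, live):
    """Mean absolute gray-level error E between the corrected template
    image and the live infrared frame (claim 5)."""
    t = np.asarray(template, dtype=float)
    l = np.asarray(live, dtype=float)
    return float(np.abs(t - l).mean())

def needs_recorrection(template, live, threshold=20.0):
    # threshold value is illustrative only; the patent does not give one
    return gray_error_mean(template, live) > threshold
```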
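Claim 8's offset classification can be illustrated as a softmax over a discrete grid of candidate offsets. The MLP that would produce the logits is omitted, and the grid layout (row-major over a (2r+1)×(2r+1) window) and all names here are assumptions.

```python
import numpy as np

def refine_match(coarse_xy, offset_logits, radius):
    """Refine a coarse match to pixel level by classifying over the
    (2*radius+1)^2 candidate offsets (sketch of claim 8's dense matching;
    `offset_logits` stands in for the multi-layer perceptron's output)."""
    logits = np.asarray(offset_logits, dtype=float).ravel()
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()            # probability distribution over offsets
    k = int(probs.argmax())         # most likely offset index
    side = 2 * radius + 1
    dy, dx = divmod(k, side)        # row-major grid index of the winner
    return coarse_xy[0] + (dx - radius), coarse_xy[1] + (dy - radius)
```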
Description
Heterogeneous template matching method for visual angle correction of image guidance head Technical Field The invention relates to a heterogeneous template matching method for view-angle correction of an image seeker and belongs to the field of precision guidance. Background During the terminal guidance phase of an aircraft, the image seeker is the key component for accurate target identification. The launch platform typically releases the payload from long range; the payload relies on inertial guidance until it enters the airspace near the target, where the seeker captures real-time images for terminal matching guidance. Because the launch direction is uncertain and the attitude and position change markedly during flight, nonlinear transformations such as rotation, scale change and view-angle difference exist between the infrared real-time image acquired by the seeker and the pre-stored visible-light template image, often accompanied by differences in imaging conditions. Template matching is the core link of the seeker's precision guidance; its task is to determine the target position by computing the similarity between the infrared real-time image and the visible-light template image. Existing mainstream template matching algorithms can maintain acceptable performance under limited view-angle change (for example, rotation angles below 30 degrees), but under large view-angle differences the geometric invariance of the feature descriptors degrades markedly, the complex distortion is difficult to overcome, and matching accuracy drops sharply or fails outright. These limitations severely constrain the reliability and guidance accuracy of the image seeker in highly dynamic terminal environments, so a heterogeneous template matching technique with strong view-angle robustness needs to be developed.
Disclosure of Invention The invention aims to overcome the insufficient adaptability of existing template matching techniques under large view-angle differences and heterogeneous imaging, and provides a heterogeneous template matching method for view-angle correction of an image seeker that effectively improves the seeker's target identification and positioning accuracy in highly dynamic environments. To this end, the invention adopts the following technical scheme, a heterogeneous template matching method for view-angle correction of an image seeker comprising the following implementation steps: S1, data acquisition and template preparation: acquiring aerial images of the target area while the unmanned aerial vehicle holds a vertical or near-vertical attitude to prepare the template, and synchronously recording the shooting height; S2, parameter calibration: performing intrinsic-parameter calibration of the seeker's infrared imaging system with a checkerboard calibration method; S3, template loading: loading the template image and its corresponding shooting-height data; S4, pose-data acquisition: acquiring in real time the carrier attitude angles and flight altitude provided by the inertial navigation device; S5, dynamic template correction: fusing multi-source data such as the real-time attitude angles, flight altitude and imaging-system intrinsic parameters provided by inertial navigation, and re-projecting the pre-stored template image to the current seeker imaging view through a template correction algorithm to generate a corrected template image; S6, correction evaluation: computing the image similarity between the real-time image and the corrected template image, and triggering re-correction if the gray-error measure exceeds a threshold; S7, feature extraction: performing keypoint detection and feature-descriptor computation on the corrected template image and the real-time image with a lightweight feature-point extraction network; S8, feature matching and positioning: performing dense matching on the extracted feature points and outputting the predicted position of the target in the real-time image; S9, error feedback: the system computes the normalized cross-correlation peak of each frame in real time as the matching confidence, judges the matching reliability to be insufficient when the peak stays below a threshold of 0.8 for 5 consecutive frames, and automatically triggers a parameter-optimization flow; otherwise it outputs the position of the target in the real-time image; S10, offline optimization: in the model-training stage, performing targeted optimization of the feature network's attention module using successful matching data from the task scene.
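The confidence monitoring of step S9 — a normalized cross-correlation peak per frame, with five consecutive sub-0.8 peaks flagging unreliable matching — can be sketched as follows. This is a minimal illustration under stated assumptions: the patch extraction and the parameter-optimization flow it would trigger are omitted, and all names are hypothetical.

```python
import numpy as np
from collections import deque

def ncc_peak(template, frame):
    """Normalized cross-correlation of two same-size patches, used here
    as the per-frame matching confidence (S9)."""
    t = template - template.mean()
    f = frame - frame.mean()
    denom = np.sqrt((t * t).sum() * (f * f).sum())
    return float((t * f).sum() / denom) if denom > 0 else 0.0

class ConfidenceMonitor:
    """Returns True (trigger parameter optimization) once 5 consecutive
    confidence peaks fall below the 0.8 threshold, per step S9."""
    def __init__(self, threshold=0.8, window=5):
        self.threshold, self.window = threshold, window
        self.recent = deque(maxlen=window)

    def update(self, peak):
        self.recent.append(peak)
        return (len(self.recent) == self.window
                and all(p < self.threshold for p in self.recent))
```

A single high-confidence frame resets the streak, since the sliding window then contains at least one peak above the threshold.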