CN-121977527-A - Target positioning method based on unmanned aerial vehicle inertia and vision combination
Abstract
The invention discloses a target positioning method based on the combination of unmanned aerial vehicle inertial and visual navigation. The method first collects an inertial navigation raw data sequence from the unmanned aerial vehicle and collects ground surface images, extracting features with the SIFT algorithm to generate a visual positioning data sequence. Dual-light imaging data are acquired synchronously, and initial target coordinates and an optical flow feature sequence are output. After data preprocessing, a fusion of Kalman filtering and a CNN model achieves deep fusion of the multi-source data; a dynamic error compensation model is constructed from the optical flow features and the attitude angle variation; and after multi-node error correction, the final positioning result and precision indexes are output. The method offers high positioning precision, small error growth, light weight, low power consumption and strong environmental adaptability, and can meet the positioning requirements of unmanned aerial vehicles in complex scenarios such as GNSS denial and bad weather.
Inventors
- Wang Qiang
- Wu Shaofeng
- Zhao Mengkui
- Liang Lizheng
- Xiang Yanjie
- Wang Mingzhen
Assignees
- Mingfei Weiye (Wuhan) Technology Co., Ltd. (鸣飞伟业(武汉)科技有限公司)
Dates
- Publication Date: 2026-05-05
- Application Date: 2026-01-23
Claims (8)
- 1. A target positioning method based on the combination of unmanned aerial vehicle inertia and vision, characterized by comprising the following steps: S1, acquiring position information, speed information, acceleration information and attitude angle information of the unmanned aerial vehicle, and generating an inertial navigation raw data sequence; S2, collecting continuous ground surface image data, extracting image texture features with a SIFT algorithm, and generating a visual positioning data sequence; S3, synchronously acquiring 4K visible light imaging data and infrared imaging data of the target area, performing target detection and feature extraction, outputting the initial positioning coordinates and optical flow features of the target, and generating a target feature data sequence and an optical flow feature sequence; S4, preprocessing the inertial navigation raw data sequence, the visual positioning data sequence, the target feature data sequence and the optical flow feature sequence, and establishing a joint optimization mechanism matching the inertial navigation error model with the visual features through a fusion of Kalman filtering and a CNN model, completing deep fusion of the multi-source data; S5, constructing a dynamic error compensation model from the optical flow features and the attitude angle change between adjacent moments, and compensating in real time the visual positioning drift error caused by attitude changes of the unmanned aerial vehicle; S6, performing synchronous multi-node error correction, and outputting the final target positioning result and a positioning precision index.
- 2. The target positioning method based on the combination of unmanned aerial vehicle inertia and vision according to claim 1, wherein in S1 the position information P, speed information V, acceleration information A and attitude angle information Θ of the unmanned aerial vehicle are collected in real time through a MEMS inertial navigation unit; the position information P includes longitude λ, latitude φ and elevation h; the speed information V includes horizontal velocity v_h and vertical velocity v_v; the acceleration information A includes X-axis acceleration a_x, Y-axis acceleration a_y and Z-axis acceleration a_z; the attitude angle information Θ includes pitch angle θ, roll angle γ and heading angle ψ; according to the sampling frequency f, an inertial navigation raw data sequence D = {d_1, d_2, …, d_N} is generated, where d_t is the set of inertial navigation data at time t and N is the total number of samples.
- 3. The target positioning method based on the combination of unmanned aerial vehicle inertia and vision according to claim 1, wherein in S2 continuous ground surface image data are collected through a vision positioning board card on the basis of a preloaded satellite map of radius R, and a SIFT algorithm is adopted to extract image texture features F, the image texture features F including edge features F_e, gray-scale features F_g and corner features F_c; according to the sampling frequency f_v, a visual positioning data sequence V = {v_1, v_2, …, v_M} is generated, where v_t is the set of visual positioning data at time t and M is the total number of visual samples.
- 4. The target positioning method based on the combination of unmanned aerial vehicle inertia and vision according to claim 1, wherein in S3, 4K visible light imaging data and 640 × 512 infrared imaging data of the target area are synchronously acquired through an AI photoelectric pod, target detection and feature extraction are performed, and the initial positioning coordinates and optical flow features of the target are output; the initial positioning coordinates of the target are (x_0, y_0, z_0), respectively the X-axis, Y-axis and Z-axis coordinates of the initial target position, and the optical flow features include the X-axis optical flow u_x, the Y-axis optical flow u_y and the Z-axis optical flow u_z; a target feature data sequence T = {t_1, t_2, …, t_K} is generated, where t_k is the set of target feature data at time k and K is the total number of pod samplings, together with an optical flow feature sequence U = {u_1, u_2, …, u_K}, where u_k is the optical flow feature set at time k computed from adjacent frame images.
- 5. The target positioning method based on the combination of unmanned aerial vehicle inertia and vision according to any one of claims 1 to 4, wherein in S4 the preprocessing operation is specifically: zero-offset calibration is carried out on the inertial navigation raw data sequence D by a moving average method, the calibration formula being d̂_t = (1/w) Σ_{i=t−w+1}^{t} d_i, where w is the calibration window length, d̂_t is the inertial navigation data after calibration at time t, and d_i denotes the raw inertial navigation data at moment i within the moving average calibration window; distortion correction is applied to the visual positioning data sequence V using a pinhole camera model, the correction formulas being x_c = (u − c_x) · Z / f and y_c = (v − c_y) · Z / f, where (u, v) are the distorted pixel coordinates, (x_c, y_c) are the corrected coordinates, f is the focal length of the camera, (c_x, c_y) are the coordinates of the principal point, and Z is the target depth; and the dual-light imaging data are denoised with a Gaussian filtering algorithm.
- 6. The target positioning method based on the combination of unmanned aerial vehicle inertia and vision according to claim 5, wherein in S4 the initialization process of the Kalman filtering is specifically: a state vector x = [p, v, a, θ]ᵀ is defined, with initial state vector x_0 = [p_0, v_0, a_0, θ_0]ᵀ, where p_0, v_0, a_0 and θ_0 are the position, speed, acceleration and attitude angle of the unmanned aerial vehicle at the take-off moment; the initial covariance matrix is P_0 = diag(σ_p², σ_v², σ_a², σ_θ²), where σ_p, σ_v, σ_a and σ_θ are the initial error standard deviations of position, speed, acceleration and attitude angle; the state equation is x_k = F x_{k−1} + G w_k and the observation equation is z_k = H x_k + n_k, where F is the 4 × 4 state transition matrix F = [[1, T, T²/2, 0], [0, 1, T, 0], [0, 0, 1, 0], [0, 0, 0, 1]], T being the sampling interval; G is the 4 × 1 system noise driving matrix and w_k is Gaussian white noise with mean 0; the system noise covariance matrix is Q = diag(q_p², q_v², q_a², q_θ²), where q_p, q_v, q_a and q_θ are the system process noise standard deviations of position, speed, acceleration and attitude angle; H is the 2 × 4 observation matrix H = [[1, 0, 0, 0], [0, 0, 0, 1]], observing position and attitude angle; n_k is Gaussian white noise with mean 0, and the observation noise covariance matrix is R = diag(r_p², r_θ²), where r_p and r_θ are the observation noise standard deviations of position and attitude angle.
- 7. The target positioning method based on the combination of unmanned aerial vehicle inertia and vision according to claim 6, wherein in S4 the network structure of the CNN model is specifically: the input layer is a 128 × 128 × 3 feature map corresponding to the fusion data of the image texture features F and the initial target coordinates (x_0, y_0, z_0); the first convolution layer Conv1 comprises 32 3 × 3 convolution kernels with stride 1, "same" padding and ReLU activation, with an output feature map of size 128 × 128 × 32, and the first pooling layer Pool1 uses 2 × 2 max-pooling kernels with stride 2, with an output feature map of size 64 × 64 × 32; the second convolution layer Conv2 comprises 64 3 × 3 convolution kernels with stride 1, "same" padding and ReLU activation, with an output feature map of size 64 × 64 × 64, and the second pooling layer Pool2 uses 2 × 2 max-pooling kernels with stride 2, with an output feature map of size 32 × 32 × 64; the third convolution layer Conv3 comprises 128 3 × 3 convolution kernels with stride 1, "same" padding and ReLU activation, with an output feature map of size 32 × 32 × 128, and the third pooling layer Pool3 uses 2 × 2 max-pooling kernels with stride 2, with an output feature map of size 16 × 16 × 128; the first fully connected layer FC1 has 16 × 16 × 128 input neurons, 1024 output neurons and ReLU activation; the second fully connected layer FC2 has 1024 input neurons and 3 output neurons, corresponding to the target positioning coordinates (x, y, z); the loss function is L = (1/n) Σ_{i=1}^{n} ‖ŷ_i − y_i‖², where n is the number of training samples, y_i is the true target positioning coordinate of the i-th sample and ŷ_i is the predicted coordinate; back-propagation iterative optimization is performed using the Adam optimizer.
- 8. The target positioning method based on the combination of unmanned aerial vehicle inertia and vision according to claim 7, wherein in S5 the specific process of constructing the dynamic error compensation model is as follows: first, the attitude angle change at adjacent moments is calculated, Δθ_t = θ_t − θ_{t−1}, with components Δθ_pitch, Δθ_roll and Δθ_yaw; the compensation model Δe = α ⊙ u_t + β ⊙ Δθ_t is then built, where Δe = (Δe_x, Δe_y, Δe_z) is the visual positioning drift error compensation amount, its components being the drift error correction values in the X-, Y- and Z-axis directions used to correct the visual positioning deviation caused by changes of the unmanned aerial vehicle attitude; α = (α_x, α_y, α_z) are the weight coefficients of the X-, Y- and Z-axis optical flow features, each in the range [0, 1]; and β = (β_pitch, β_roll, β_yaw) is the weight coefficient vector of the pitch, roll and heading angle changes, each in the range [0, 1].
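The preprocessing of claim 5 — moving-average zero-offset calibration of the inertial sequence and pinhole back-projection of distorted pixel coordinates — can be sketched as follows. The function names and the handling of the first few samples (where the window is not yet full) are illustrative assumptions, since the claim's original formula images are not preserved in this text.

```python
import numpy as np

def moving_average_calibrate(raw, w):
    """Zero-offset calibration: each sample is replaced by the mean of the
    last w samples (a shorter window is assumed at the sequence start)."""
    raw = np.asarray(raw, dtype=float)
    out = np.empty_like(raw)
    for t in range(len(raw)):
        lo = max(0, t - w + 1)
        out[t] = raw[lo:t + 1].mean()
    return out

def pinhole_correct(u, v, f, cx, cy, Z):
    """Back-project distorted pixel coordinates (u, v) to metric camera
    coordinates using focal length f, principal point (cx, cy) and
    target depth Z, per the pinhole model of claim 5."""
    x = (u - cx) * Z / f
    y = (v - cy) * Z / f
    return x, y
```

For instance, with a window of 2 the sequence [1, 2, 3, 4] calibrates to [1.0, 1.5, 2.5, 3.5]; the Gaussian denoising of the dual-light imagery mentioned in the claim is a standard filter and is omitted here.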
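The Kalman initialization of claim 6 can be sketched as below. The claim gives the matrix shapes (4 × 4 state transition, 2 × 4 observation) but its formula images are lost, so the constant-acceleration transition matrix and the position/attitude observation matrix here are plausible reconstructions rather than the patent's exact matrices; the 4 × 1 noise driving matrix G is folded into the process noise covariance for brevity. A single predict/update step is included to show how the matrices are used.

```python
import numpy as np

def make_kalman(T, sp, sv, sa, sth, qp, qv, qa, qth, rp, rth):
    """Build the filter matrices for state x = [p, v, a, theta]^T and
    observation z = [p, theta]^T, from the standard deviations named in
    claim 6 (sampling interval T; s* initial, q* process, r* observation)."""
    F = np.array([[1.0, T, T**2 / 2, 0.0],
                  [0.0, 1.0, T,      0.0],
                  [0.0, 0.0, 1.0,    0.0],
                  [0.0, 0.0, 0.0,    1.0]])   # 4x4 state transition (assumed form)
    H = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])      # 2x4: observe position and attitude
    P0 = np.diag([sp**2, sv**2, sa**2, sth**2])  # initial covariance
    Q = np.diag([qp**2, qv**2, qa**2, qth**2])   # process noise covariance
    R = np.diag([rp**2, rth**2])                 # observation noise covariance
    return F, H, P0, Q, R

def kf_step(x, P, z, F, H, Q, R):
    """One textbook predict/update cycle."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

In the patent's pipeline the update would consume the vision-derived position and attitude observations, with the CNN of claim 7 refining the fused estimate.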
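The layer sizes stated in claim 7 can be sanity-checked with a few lines of arithmetic: three rounds of "same"-padded stride-1 convolution followed by 2 × 2 stride-2 max pooling take a 128 × 128 input down to 16 × 16, matching the 16 × 16 × 128 = 32768 input neurons of FC1.

```python
def conv_out(size, kernel=3, stride=1, padding="same"):
    """Output spatial size of a conv layer; 'same' padding preserves the
    size at stride 1."""
    if padding == "same":
        return (size + stride - 1) // stride
    return (size - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Output spatial size of a max-pooling layer (no padding)."""
    return (size - kernel) // stride + 1

s = 128
for _ in range(3):              # Conv1..3 (same, stride 1) + Pool1..3 (2x2/2)
    s = pool_out(conv_out(s))
assert s == 16                  # spatial size entering FC1
flat = s * s * 128              # 16 * 16 * 128 = 32768 input neurons of FC1
```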
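The dynamic error compensation of claim 8 combines the optical-flow features with the attitude-angle change using per-axis weight coefficients in [0, 1]. The element-wise weighted sum below is one plausible reading of the lost formula, not the patent's exact model; the function name and the clipping of the weights are assumptions.

```python
import numpy as np

def drift_compensation(flow, dtheta, alpha, beta):
    """Per-axis drift error compensation: delta_e = alpha * flow + beta * dtheta,
    an assumed element-wise form of the claim-8 model. flow is (u_x, u_y, u_z),
    dtheta the (pitch, roll, yaw) change between adjacent moments; alpha and
    beta are the weight vectors, clipped to the claimed [0, 1] range."""
    alpha = np.clip(np.asarray(alpha, dtype=float), 0.0, 1.0)
    beta = np.clip(np.asarray(beta, dtype=float), 0.0, 1.0)
    return alpha * np.asarray(flow, dtype=float) + beta * np.asarray(dtheta, dtype=float)
```

With flow (1, 2, 3), attitude change (0.1, 0.2, 0.3), alpha = 0.5 on each axis and beta = 1, the compensation is (0.6, 1.2, 1.8), which would then be subtracted from the visual position estimate in S5.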
Description
Target positioning method based on unmanned aerial vehicle inertia and vision combination
Technical Field
The invention relates to the technical field of unmanned aerial vehicle navigation and positioning, and in particular to a target positioning method based on the combination of unmanned aerial vehicle inertia and vision.
Background
Unmanned aerial vehicle positioning and navigation technology is highly dependent on the Global Navigation Satellite System (GNSS), which provides real-time location services for unmanned equipment and has become the core support for unmanned aerial vehicles of all kinds performing their tasks. However, GNSS signals are significantly vulnerable in practical applications: in complex scenarios such as electronic countermeasures, armed conflict and emergency rescue they are extremely susceptible to electromagnetic interference, signal spoofing or complete failure, causing unmanned equipment to lose positioning capability, with serious consequences. For example, some aircraft have crashed after their control systems lost GPS signals, and military unmanned aerial vehicles relying on combined navigation or pure satellite navigation have been brought down by electronic interference or decoys, even affecting the combat effectiveness of rocket artillery and other equipment. These cases fully expose the risk of over-reliance on GNSS signals and highlight the urgency of developing autonomous positioning and navigation schemes. To solve the positioning problem in GNSS-denied environments, various alternatives have emerged in the industry, but all have obvious shortcomings.
Among high-precision inertial navigation technologies, fiber-optic inertial navigation can provide high positioning precision, but its large volume and weight, high cost and high power consumption make it difficult to deploy at scale in low-cost unmanned equipment. MEMS inertial navigation has attracted wide attention for its light weight and low cost, but suffers from the inherent problem that errors accumulate with flight time and distance, making precision hard to guarantee over long missions. Visual positioning technology achieves autonomous positioning through terrain matching and feature extraction, but depends strongly on the environment: its applicability is limited in areas lacking distinctive feature points, such as deserts, oceans and snowfields, and it often requires high-performance hardware, making miniaturized integration difficult. Existing combined navigation technologies have attempted to fuse inertial, visual and other multi-source data, but the depth and suitability of the fusion algorithms are insufficient, the complementary advantages of the sensors are not fully exploited, the error compensation mechanisms are inadequate, and positioning drift caused by changes in unmanned aerial vehicle attitude cannot be effectively avoided. Meanwhile, most schemes lack a mature multi-node collaborative design and struggle to serve large-scale application scenarios such as unmanned aerial vehicle swarms. In addition, adaptability to complex environments remains a prominent problem: extreme temperature and humidity, bad weather and similar conditions often degrade or disable existing positioning systems.
Therefore, a positioning method with high precision, strong environmental adaptability, light weight and low cost is needed to break through the bottlenecks of the prior art and meet the positioning requirements of unmanned aerial vehicles in the various complex scenarios of the military and civil fields.
Disclosure of Invention
The invention aims to provide a target positioning method based on the combination of unmanned aerial vehicle inertia and vision, which overcomes the positioning bottleneck in severe environments such as GNSS denial, the error accumulation of pure inertial navigation and the poor adaptability of pure visual positioning; it achieves high-precision positioning with greatly reduced error, is lightweight, low-power and low-cost, and supports swarm collaborative positioning. To achieve the above purpose, the invention provides a target positioning method based on the combination of unmanned aerial vehicle inertia and vision, comprising the following steps: S1, acquiring position information, speed information, acceleration information and attitude angle information of the unmanned aerial vehicle, and generating an inertial navigation raw data sequence; S2, collecting continuous ground surface image data, extracting image texture features with a SIFT algorithm, and generating a visual positioning data sequence; S3, synchronously acquiring 4