CN-121721653-B - Method for automatic detection and lock-on tracking of an aerial target
Abstract
The application relates to the technical field of image measurement and provides a method for the automatic detection and lock-on tracking of an aerial target, suitable for high-precision sensing and stable locking of low-altitude and near-space moving targets in complex environments. Adopting a system-redundancy and fusion-detection approach, the method uses two sensors of different modalities, a visible-light camera and a lidar, to form two independent detection channels that each detect the target, and then fuses the multimodal data. This achieves automatic detection, tracking and positioning of small aerial targets in the far field beyond 500 meters, and provides target positioning information for the alignment, focusing and real-time tracking display of subsequent precision detection.
Inventors
- LONG XUEJUN
- JIANG GUANGWEN
- ZHU ZHIWU
- WANG PEIFENG
Assignees
- 山东协和学院 (Shandong Xiehe University)
Dates
- Publication Date: 2026-05-05
- Application Date: 2026-02-25
Claims (9)
- 1. A method for automatically detecting and lock-on tracking an aerial target, characterized by comprising the following steps: S100, shooting an aerial target of the same scene with a visible-light camera group and a lidar group respectively, the visible-light camera group obtaining 2D images from a plurality of cameras and the lidar group obtaining a second point cloud image; S200, performing long-range small-target detection and preliminary positioning on the 2D images and collecting first target features in real time; if the first target features do not contain preset cooperative identification features, performing small-target feature detection on the 2D images, otherwise performing cooperative identification point detection on the 2D images, and outputting a first target detection result; the small-target feature detection mainly comprises S211-S214: S211, based on inverse compositional search with block matching, dividing the current frame of the 2D image into non-overlapping image blocks of fixed size, each block serving as a reference block to be matched; for each reference block, setting a search window of a certain range in the next frame, computing the similarity between the reference block and all candidate blocks in the search window, and selecting the candidate block with the highest similarity as the matching block; S212, performing multi-scale pyramid reconstruction on the sparse optical flow field formed by the block matches, downsampling the current frame and the next frame at multiple scales to form image sequences of different scales and obtain a first dense optical flow field; S213, performing variational optimization on the first dense optical flow field, correcting discontinuous regions of the flow field with a global smoothness constraint and a local brightness-constancy constraint, iteratively adjusting the optical flow vector of each pixel by minimizing an energy function until it converges to a minimum, obtaining an optimized second dense optical flow field and capturing the motion trajectory of the small target; S214, if the second dense optical flow field output in S213 exhibits a significant motion trajectory and meets the preset small-target feature template matching threshold, judging that a small target exists and outputting the first target detection result, otherwise judging that no valid target exists and returning to continue the small-target feature detection; S300, when the target is at long range, treating the small target as a point and performing single-point visual intersection measurement and positioning; when the target enters medium range, switching to cooperative identification point detection and performing pose measurement and positioning of the target marker point model through multi-point measurement of the target pose; S400, when the target enters the monitoring range of the lidar group, performing rapid point cloud target detection based on a grid map: building the grid map by rasterizing the second point cloud image, extracting the size and shape of the target from its spatial distribution characteristics in the grid map, performing target detection and positioning, and outputting a second target detection result; S500, performing multi-sensor data fusion detection on the second target detection result and the first target detection result to obtain a third target detection result; S600, locking the target for continuous tracking and completing the automatic lock-on tracking measurement of the long-range small target.
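The block-matching search of step S211 can be sketched as a minimal sum-of-absolute-differences (SAD) matcher over a square search window. This is an illustrative reconstruction, not the patent's implementation; the function and parameter names are hypothetical:

```python
import numpy as np

def match_block(ref_frame, next_frame, top, left, block=8, search=4):
    """Find the best-matching block in next_frame for the reference block
    at (top, left) of ref_frame, using SAD similarity over a +/-search
    pixel window (cf. S211). Returns the (dy, dx) displacement, i.e. one
    sparse optical-flow vector."""
    ref = ref_frame[top:top + block, left:left + block].astype(np.int32)
    best_sad, best = np.inf, (0, 0)
    h, w = next_frame.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue  # candidate block falls outside the frame
            cand = next_frame[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(ref - cand).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best
```

Running this over every reference block of the current frame yields the sparse optical flow field that S212 then refines through the multi-scale pyramid.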
- 2. The method for automatically detecting and lock-on tracking an aerial target according to claim 1, wherein the cooperative marker point detection S220 mainly comprises S221-S227: S221, extracting a marker point ROI: performing binarization on the 2D image, extracting regions whose brightness exceeds a threshold as marker point candidate regions, matching the candidate regions against a preset marker point template, and screening out the marker point ROI regions conforming to the geometric shape and size characteristics; S222, performing anisotropic diffusion filtering on the marker point ROI region: computing the gray-scale gradient of the 8-neighborhood pixels of each pixel in the region, dynamically adjusting the diffusion coefficient according to the gradient value, and outputting a filtered marker point image; S223, computing the gradient magnitude and direction of the filtered image with a gradient operator, screening out edge points whose gradient magnitude exceeds a preset gradient threshold, connecting adjacent discrete edge points into continuous edges through 8-neighborhood connectivity analysis, and extracting a pixel-level marker point contour; S224, taking each pixel-level edge point of the contour as the center of a preset local window, computing the Zernike moments of the image within the window, extracting the gray-level distribution characteristics of the edge points, computing precise sub-pixel edge coordinates from the quantitative relation between Zernike moments and edge position, fitting the set of all sub-pixel edge points, and computing the sub-pixel marker point contour or the sub-pixel marker point center coordinates; S225, transforming the sub-pixel contour or center coordinates into a preset three-dimensional spatial distribution coordinate system of the marker points to obtain the real marker point coordinate system of the target, reconstructing the full real marker point model of the target, and computing the Euclidean distances between the centroids of all marker points; S226, finding the two marker points with the smallest centroid distance, taking one as the initial starting point and the other as the second point to form the first chain-code segment, and connecting the remaining marker points in turn by the minimum-distance principle until the last point and the starting point close the final segment, generating a closed chain code; S227, computing the first difference of the closed chain code of the real marker point model to obtain a differential chain code, and matching it against the differential chain code of the preset marker point model: if the two are identical, judging that the measured marker points match the marker point model and outputting the first target detection result; otherwise, replacing the initial starting point and re-executing S226 to rebuild the chain code until the differential chain codes match.
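The chain-code matching of S226-S227 relies on the fact that the first difference of a closed chain code is invariant to the choice of starting point up to a cyclic shift. A minimal sketch (names are illustrative; the "replace the starting point and rebuild" loop of the claim is collapsed into a cyclic-shift comparison):

```python
def first_difference(chain, base=8):
    """Differential chain code of S227: difference of consecutive codes,
    modulo the code base, wrapping around the closed loop."""
    n = len(chain)
    return [(chain[(i + 1) % n] - chain[i]) % base for i in range(n)]

def codes_match(measured, model, base=8):
    """A closed chain code depends on its starting point, so the measured
    differential code is compared against the model's under every cyclic
    shift, mimicking the retry loop of S226-S227."""
    dm = first_difference(measured, base)
    dt = first_difference(model, base)
    if len(dm) != len(dt):
        return False
    doubled = dt + dt  # all cyclic rotations appear as windows of dt+dt
    return any(doubled[i:i + len(dm)] == dm for i in range(len(dt)))
```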
- 3. The method for automatically detecting and lock-on tracking an aerial target according to claim 1, wherein the single-point visual intersection measurement and positioning comprises: S311, imaging the target synchronously with each camera of the visible-light camera group and, once the target is detected in the first target detection result output by each camera, computing the sub-pixel target center point in each 2D image with the gray centroid method; S312, constructing a visual intersection equation system from the sub-pixel target center point of each 2D image together with the intrinsic and extrinsic matrices of each camera, and solving it to obtain the three-dimensional coordinates of the single-point target in the world coordinate system.
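The two operations of claim 3 can be sketched as a gray-centroid sub-pixel center plus a standard linear (DLT) two-view triangulation; this is an illustrative sketch assuming calibrated 3x4 projection matrices, not the patent's exact solver:

```python
import numpy as np

def gray_centroid(roi):
    """Sub-pixel target center as the intensity-weighted centroid of the
    detection ROI (cf. S311). Returns (u, v) pixel coordinates."""
    ys, xs = np.indices(roi.shape)
    total = roi.sum()
    return (xs * roi).sum() / total, (ys * roi).sum() / total

def triangulate(P1, P2, uv1, uv2):
    """Linear intersection of two viewing rays (cf. S312). P1, P2 are 3x4
    projection matrices K[R|t]; uv1, uv2 are normalized pixel observations.
    Solves the homogeneous system by SVD and dehomogenizes."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # 3D point in the world frame
```

With more than two cameras in the group, additional row pairs are simply stacked into the same system.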
- 4. The method for automatically detecting and lock-on tracking an aerial target according to claim 2, wherein the pose measurement and positioning of the target marker point model comprises: S321, based on the marker point model successfully matched in S227, establishing a one-to-one correspondence between the measured marker points of each camera in the visible-light camera group, and computing the measured marker point coordinates of the target to obtain a measured marker point cloud; S322, performing spatial rigid registration between the measured marker point cloud and the preset marker point model, solving the rotation matrix and translation vector by singular value decomposition, completing the target pose estimation, outputting the three-dimensional attitude angles and position parameters of the target in the world coordinate system, and completing the pose calculation of the target marker point model.
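The SVD-based rigid registration of S322 is the classical Kabsch method. A minimal sketch, assuming the marker correspondences from S321 are already established row-by-row (names are illustrative):

```python
import numpy as np

def rigid_register(measured, model):
    """Estimate rotation R and translation t aligning the preset marker
    model to the measured marker cloud (cf. S322), so that
    measured ~= model @ R.T + t. Both inputs are Nx3 arrays with rows in
    one-to-one correspondence (the matching of S321)."""
    cm, cd = measured.mean(axis=0), model.mean(axis=0)
    H = (model - cd).T @ (measured - cm)     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cm - R @ cd
    return R, t
```

The attitude angles reported in the claim would then be extracted from R (e.g. as Euler angles), and t gives the target position in the world frame.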
- 5. The method for automatically detecting and lock-on tracking an aerial target according to claim 1, wherein the pose measurement and positioning of the target marker point model further comprises a feature-point-based tracking acceleration process: during cooperative marker point detection on the 2D images, global feature point detection is performed on the 2D image collected by one camera of the visible-light camera group, and the measured marker points are extracted and matched against the target marker point model; once the matched marker point correspondence is determined, subsequent time-sequence images and the images of the other cameras of the group are matched only within the corresponding region, performing local feature search and marker point relation matching.
- 6. The method for automatically detecting and lock-on tracking an aerial target according to claim 1, wherein the rapid point cloud target detection based on the grid map comprises: S401, building the grid map by dividing the lidar detection area into a plurality of uniform three-dimensional grid cells and discretizing the point cloud data of the second point cloud image; S402, transforming the discretized point cloud from the point cloud coordinate system into the grid map coordinate system, traversing the points one by one, determining the grid cell to which each point belongs, counting the number and distribution characteristics of points in each cell, and outputting a grid gray map; S403, denoising the grid gray map according to the distribution characteristics and gray information of the point cloud, matching it against the preset grid-image distribution feature template of the target model, and screening out candidate grid regions where suspected targets are located; S404, performing cluster segmentation on the point cloud data within the candidate grid regions, extracting the target point cloud cluster, computing the centroid coordinates of all points in the cluster to obtain the target position information, completing target detection and positioning, and outputting the second target detection result.
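Steps S401-S404 can be sketched as a voxel-occupancy filter: rasterize the cloud, keep only cells dense enough to be a plausible target, and report the centroid of the surviving points. This simplified sketch collapses the template matching of S403 into a per-cell point-count threshold (all names and the `min_pts` criterion are assumptions):

```python
import numpy as np

def occupancy_grid(points, cell=0.5, min_pts=3):
    """Rasterize an Nx3 point cloud into cubic grid cells of side `cell`
    (cf. S401-S402), drop points in sparsely occupied cells as noise
    (simplified S403), and return the centroid of the remaining points
    as the target position (cf. S404). Returns None if nothing survives."""
    idx = np.floor(points / cell).astype(int)           # cell index per point
    _, inv, counts = np.unique(idx, axis=0,
                               return_inverse=True, return_counts=True)
    keep = counts[inv] >= min_pts                       # dense-cell mask
    if not keep.any():
        return None
    return points[keep].mean(axis=0)
```

A full implementation would additionally match the occupied-cell pattern against the target model's grid-image template before clustering, as S403 specifies.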
- 7. The method for automatically detecting and lock-on tracking an aerial target according to claim 1, wherein the multi-sensor data fusion detection comprises: S501, synchronously acquiring the first target detection result output by the visible-light camera group and the second target detection result output by the lidar group; S502, projecting the laser point cloud of the second target detection result onto the camera image plane through the transform matrix obtained by calibration and computing the image pixel coordinates corresponding to each laser point; extracting the sub-pixel marker point contours and gray-scale features from the first target detection result, back-projecting the image pixel coordinates into the three-dimensional space of the laser point cloud through the extrinsic matrix of the visible-light camera group to obtain initial visual rays, matching the intersections of the laser point cloud with the initial visual rays under gray-scale similarity and spatial-distance-threshold constraints, determining a one-to-one correspondence between laser points and image pixels, fusing the successfully matched points, and outputting a high-precision three-dimensional target feature set; S503, feeding the high-precision target feature set into a spatio-temporal consistency filter, checking the consistency of the matching results over consecutive frames, removing mismatched points, and outputting a high-density initial visual point cloud covering the complete contour of the target; S504, correcting the high-density initial visual point cloud: screening effective seed points according to the distribution of the difference between laser depth and visual depth, correcting them using the laser point cloud confidence and the image point cloud confidence to obtain corrected seed points, computing the normal vectors of the corrected seed point neighborhoods and, if the normal vector deviation exceeds a preset value, readjusting the weights of the laser and image point cloud confidences and continuing the iterative correction to output a high-density corrected point cloud; S505, interpolating and completing the high-density corrected point cloud: dividing it into flat and edge regions according to the curvature of the target surface, completing the depth of non-seed points in flat regions by bilinear interpolation from the depths of adjacent corrected seed points, constructing continuous radial basis functions centered on the corrected seed points in edge regions, fitting a continuous surface covering the whole edge region, adaptively adjusting the interpolation density with curvature change, and completing the high-precision interpolation of the edge regions; S506, performing spatio-temporal joint optimization on the fused point cloud: compensating the dynamic displacement of the target with an inter-frame motion consistency constraint and, if the depth difference of the point cloud at the same position in adjacent frames exceeds a preset threshold, adjusting the interpolation parameters of S505 with the corrected seed point depths of the current frame as reference, re-optimizing the surface fitting result, obtaining the optimized fused point cloud, and outputting the third target detection result.
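The lidar-to-camera projection at the heart of S502 is the standard pinhole transform: move each lidar point into the camera frame with the calibrated extrinsic matrix, then apply the intrinsics and the perspective divide. A minimal sketch (matrix names are illustrative):

```python
import numpy as np

def project_lidar_to_image(points, T_cam_lidar, K):
    """Project Nx3 lidar points into the camera image plane (cf. S502).
    T_cam_lidar is the calibrated 4x4 extrinsic transform (lidar frame to
    camera frame); K is the 3x3 intrinsic matrix. Points behind the camera
    are dropped; returns Mx2 pixel coordinates."""
    n = points.shape[0]
    pts_h = np.hstack([points, np.ones((n, 1))])   # homogeneous lidar points
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]         # now in the camera frame
    front = cam[:, 2] > 0                          # keep points ahead of lens
    uvw = (K @ cam[front].T).T
    return uvw[:, :2] / uvw[:, 2:3]                # perspective divide
```

The inverse direction (back-projecting pixels into rays through the same calibration) gives the "initial visual rays" that S502 intersects with the laser cloud.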
- 8. The method for automatically detecting and lock-on tracking an aerial target according to claim 1, wherein the continuous locked-target tracking S600 includes redundancy measurement fault-tolerance processing S610: S611, taking the point cloud coordinates and depth data of the first target detection result as first redundancy information and those of the second target detection result as second redundancy information, and confirming that the visible-light camera group and the lidar group have completed satellite positioning time synchronization and three-dimensional coordinate system fusion calibration; S612, performing equipment self-test on the visible-light camera group and the lidar group: checking the frame rate, exposure parameters and interface communication of the cameras, and the echo intensity, point cloud density and working voltage of the lidar; if any hardware parameter exceeds its standard threshold, marking that redundancy channel as equipment-abnormal; S613, performing signal self-test on the visible-light camera group and the lidar group, comprising detection-range verification, frame-rate timing verification and noise verification; if any signal parameter exceeds its standard threshold, marking that redundancy channel as signal-abnormal; S614, performing dual-redundancy cross-consistency verification on the visible-light camera group and the lidar group, comprising spatial deviation calculation and consistency judgment: computing the Euclidean distance deviation of the same target from the first and second redundancy information at the same moment; if the deviation is smaller than a preset dynamic deviation threshold adjusted with target distance, judging that the spatial consistency requirement is met, otherwise starting secondary verification by extracting and matching the marker point gray-scale features of the first redundancy information against the depth-profile gradient features of the second redundancy information; if the feature matching degree exceeds a preset matching threshold, the data drift can be corrected, otherwise a single-redundancy fault is judged; S615, when any redundancy channel is judged equipment-abnormal, signal-abnormal or single-redundancy faulty, immediately cutting off the data output of that device and retaining only the normal and correctable redundancy channels as effective redundancy; S616, performing fault-tolerant reconstruction on the retained normal and correctable channels: computing the confidence weight of each effective channel from the real-time self-test pass rate, cross-check consistency and environment adaptation coefficient, and normalizing the confidence weights of the first and second redundancy information of the same time sequence to obtain the confidence of each channel; S617, performing signal voting according to the states of the first and second redundancy information of the same time sequence: when both are effective, outputting the final third target detection result by weighted fusion; when only one is effective, directly outputting the corresponding target detection data; if both are invalid, starting an emergency mode, calling the last valid fused data combined with the inertial navigation estimate, and outputting a predicted third target detection result.
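The voting logic of S617 reduces to three cases: weighted fusion, single-channel pass-through, and emergency fallback. A minimal sketch over one scalar measurement per channel (the claim operates on full detection results; all names here are illustrative):

```python
def vote(first, second, w1, w2):
    """Dual-redundancy signal voting (cf. S617). Each channel is a
    (value, valid) pair; w1, w2 are the normalized confidence weights
    from S616. Returns the fused value, the single valid value, or None,
    in which case the caller falls back to the last valid fusion plus
    the inertial navigation prediction (emergency mode)."""
    v1, ok1 = first
    v2, ok2 = second
    if ok1 and ok2:
        return (w1 * v1 + w2 * v2) / (w1 + w2)  # weighted fusion
    if ok1:
        return v1                               # camera channel only
    if ok2:
        return v2                               # lidar channel only
    return None                                 # both channels invalid
```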
- 9. The method for automatically detecting and lock-on tracking an aerial target according to claim 1, wherein the continuous locked-target tracking S600 includes a target confirmation process S620: S621, initiating a target confirmation request and confirming the target in the third target detection result; S622, upon receiving the target confirmation instruction, locking the target, starting the multi-sensor cooperative tracking mode, tracking the target in real time, and completing the automatic lock-on tracking measurement of the long-range small target.
Description
Method for automatic detection and lock-on tracking of an aerial target

Technical Field

The application relates to the technical field of image measurement and provides a method for the automatic detection and lock-on tracking of an aerial target, suitable for high-precision sensing and stable locking of low-altitude and near-space moving targets in complex environments.

Background

In scenarios such as low-altitude security and near-space target guidance, portable detection equipment for near-space targets needs to achieve high-precision, highly real-time lock-on tracking measurement of long-range small targets, for example automatic lock-on tracking measurement of small targets at 300 m-500 m and beyond 500 m, such as precisely guiding a runway takeoff-and-landing unmanned aerial vehicle during takeoff and landing, or accurately and automatically tracking and locking it while in flight. Conventional detection technology, however, has several limitations: (1) Single-sensor performance weaknesses. A single visible-light or infrared camera can capture a high-density contour of the target, but its far-field depth measurement error is large, for example greater than 1.5 m at a distance of 100 m and greater than 8 m at 500 m. A single lidar has high depth precision, but its point cloud is sparse when detecting aerial targets, making it difficult to represent the target's shape completely; a long-range small target has a small radar reflection area and complex motion characteristics, a single lidar is easily disturbed by birds, dust and the like, and the far-field target occupies an extremely small pixel ratio, causing problems such as a high miss rate. In addition, the two types of equipment use different data coordinate systems and cannot be fused directly. (2) Insufficient real-time performance of the conventional processing flow. When a high-precision camera performs target identification and detection, the computational load of target feature extraction is extremely large and fails to meet the real-time requirement of fast tracking and locking. (3) Lack of robustness to complex environments and of redundancy guarantees. Field dust and weak illumination degrade camera imaging quality; a traditional system has no redundancy fault-tolerance mechanism, so detection is interrupted when a single sensor fails; meanwhile, portable equipment must balance light weight with detection performance, and traditional heavy detection equipment cannot be adapted to individual-soldier or portable deployment scenarios.

Disclosure of Invention

To overcome at least one problem or deficiency of the prior art, the application provides a method for the automatic detection and lock-on tracking of an aerial target.
The method for the automatic detection and lock-on tracking of an aerial target mainly comprises steps S100-S600. In S100, an aerial target of the same scene is captured by a visible-light camera group and a lidar group, 2D images of a plurality of cameras being obtained by the visible-light camera group and a second point cloud image by the lidar group. In S200, long-range small-target detection and preliminary positioning are performed on the 2D images and first target features are collected in real time; if the first target features do not contain preset cooperative identification features, small-target feature detection S210 is performed on the 2D images, otherwise cooperative identification point detection S220 is performed, and a first target detection result is output. In S300, when the target is at long range, the small target is treated as a single point and single-point visual intersection measurement and positioning is performed; when the target enters medium range, pose measurement and positioning of the target marker point model is performed through multi-point measurement of the target pose. In S400, when the target enters the monitoring range of the lidar group, rapid point cloud target detection based on a grid map is performed: the second point cloud image is rasterized to build the grid map, the size and shape of the target are extracted from its spatial distribution characteristics in the grid map, target detection and positioning are performed, and a second target detection result is output. In S500, multi-sensor data fusion detection is performed on the first and second target detection results to obtain a third target detection result. In S600, the target is locked for continuous tracking, completing the automatic lock-on tracking measurement of the long-range small target. On the basis of the above emb