CN-117649614-B - Linear array laser imaging target identification method based on simulation point cloud data set
Abstract
The invention discloses a linear array laser imaging target identification method based on a simulated point cloud data set. The method establishes an interaction model between a virtual scanning system and a simulated scene, creates a motion track for the virtual scanning system, and generates scanning-point coordinate data with a ray tracing algorithm. A parameter noise model of the virtual scanning system is then established to synthesize noisy point cloud data, and each point in the simulated point cloud is accurately labeled through a model name-label index table, yielding a simulated point cloud data set that is both rich in data and accurately labeled. Finally, the simulated point cloud data set serves as training data for a deep learning target recognition algorithm, completing offline training and online recognition of the algorithm and realizing accurate recognition of laser imaging point cloud targets.
Inventors
- ZHA BINGTING
- WANG CHENGJUN
- ZHENG ZHEN
- LIU HAODONG
- LI HAOJIE
- ZHANG HE
Assignees
- Nanjing University of Science and Technology (南京理工大学)
Dates
- Publication Date
- 20260512
- Application Date
- 20230831
Claims (5)
- 1. A linear array laser imaging target identification method based on a simulated point cloud data set, characterized by comprising the following steps:
Step 1, constructing a simulation scene according to a typical working environment of an unmanned aerial vehicle and its target identification requirements: producing a three-dimensional terrain and three-dimensional target models of the simulation scene with three-dimensional modeling software, such that they are consistent with the typical working terrain and real targets of the real environment, and adding obstacles to the simulation scene according to the target identification requirements;
Step 2, establishing a model name-label correspondence index table for the simulation scene according to the labels to be marked, and naming the terrain, target models and obstacles of the simulation scene in the format "name + sequence number", where names of targets of the same category share a common part and the sequence number distinguishes the targets of that category corresponding to the same label;
Step 3, taking the geometric center of the target model as origin O_t and the motion direction of the target model as the O_tX_t axis, establishing a right-handed three-dimensional rectangular coordinate system O_tX_tY_tZ_t, and defining the coordinate system O_tX_tY_tZ_t at the initial moment as the ground coordinate system, denoted O_wX_wY_wZ_w;
Step 4, performing Delaunay triangulation on the simulation scene to obtain a series of triangular surface elements and storing them in an octree structure, where each node of the octree represents a cubic voxel, each internal node has 8 child nodes whose volumes sum to the volume of the parent node, and each triangular surface element of the simulation scene corresponds to a leaf node;
Step 5, adding a virtual scanning system to the simulation scene and setting its parameters according to the parameters of the lidar on the unmanned aerial vehicle, including the field-of-view angle θ_fov, the angular resolution θ_s and the detection distance range [t_min, t_max]; creating a time axis for generating simulated point clouds during the dynamic encounter between the unmanned aerial vehicle and the real target; setting the motion speed v, scanning frequency f and motion direction of the virtual scanning system; computing the position at each moment the virtual scanning system emits scanning light and creating a key frame at that position, with time origin t_0 = 0 and the moment of the i-th emission t_i = i/f; establishing a trajectory equation of the virtual scanning system so that simulated point clouds can be generated for any flight track; solving the spatial coordinate (x_i, y_i, z_i) in the ground coordinate system O_wX_wY_wZ_w for each emission moment during the detection stage, simultaneously computing the motion velocity direction (v_xi, v_yi, v_zi) of the virtual scanning system at that position, and importing the spatial coordinates and velocity directions of the corresponding positions to construct the motion track;
Step 6, for each key frame created in Step 5, taking the spatial coordinate (x_i, y_i, z_i) of the virtual scanning system as origin O_s, with the O_sX_s axis along the velocity direction and the O_sY_s axis perpendicular to the track plane, establishing the virtual scanning system coordinate system O_sX_sY_sZ_s;
Step 7, establishing an interaction model with the simulation scene at each key frame according to the parameters of the virtual scanning system: for a virtual scanning system with field-of-view angle θ_fov and angular resolution θ_s, the number of emitted scanning rays is n = θ_fov/θ_s, and simulated point generation is modeled as the problem of intersecting each scanning ray with the triangular surface elements of the simulation scene to obtain the intersection coordinates of the scanning ray and the scene;
Step 8, increasing the realism of the point cloud by adding Gaussian white noise to the simulated point cloud data, namely adding a distance error to the ideal distance value to generate simulated point cloud data containing Gaussian white noise;
Step 9, automatically labeling the intersection points of the scanning rays and the triangular surface elements according to the model name-label correspondence index table of the simulation scene: if a scanning ray returns an intersection point, looking up the model name of the triangular surface element containing that point and assigning the label corresponding to that model name, thereby completing accurate labeling of every intersection point;
Step 10, generating, in the virtual scanning system coordinate system O_sX_sY_sZ_s, a plurality of scanning rays emitted from O_s into the field of view to obtain a contour line of the simulation scene for the corresponding key frame, and transforming the contour-line points into the ground coordinate system O_wX_wY_wZ_w by coordinate transformation;
Step 11, splicing the contour lines of all key frames to obtain simulated scene point cloud data;
Step 12, repeating Steps 5-11, creating different motion tracks for virtual scanning systems with different scanning angles and angular resolutions, simulating different flight speeds, yaw angles, pitch angles and roll angles, and generating N frames of simulated point cloud data under different encounter conditions to form a simulated point cloud data set;
Step 13, dividing the simulated point cloud data set into a training set, a validation set and a test set in a 6:2:2 ratio, inputting the data set into a RandLA-Net model and completing the pre-training of the RandLA-Net model;
Step 14, inputting the point cloud data to be identified into the trained RandLA-Net model and outputting a category prediction for each point in the point cloud data to be identified, thereby realizing offline training and online identification of the target identification algorithm.
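Steps 5-6 of claim 1 amount to sampling key frames along the scanner's track: the i-th scan is emitted at t_i = i/f, and each key frame records the scanner position and velocity direction at that moment. A minimal sketch for a straight-line track (all function and parameter names here are illustrative, not from the patent):

```python
def key_frames(v, f, direction, origin=(0.0, 0.0, 0.0), n_frames=5):
    """Sketch of Step 5: the i-th scan is emitted at t_i = i/f, and for a
    straight-line track the scanner position is origin + v * t_i * direction.
    `direction` is assumed to be a unit vector; a real implementation would
    evaluate an arbitrary trajectory equation instead."""
    frames = []
    for i in range(n_frames):
        t_i = i / f                                   # emission time of the i-th scan
        pos = tuple(o + v * t_i * d for o, d in zip(origin, direction))
        vel = tuple(v * d for d in direction)         # velocity direction at the key frame
        frames.append((t_i, pos, vel))
    return frames

# A scanner flying along +X at 10 m/s with a 100 Hz scan frequency:
frames = key_frames(v=10.0, f=100.0, direction=(1.0, 0.0, 0.0))
```

Each returned tuple carries what Step 6 needs to build the scanner coordinate system O_sX_sY_sZ_s at that key frame: the origin O_s and the O_sX_s axis direction.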
- 2. The linear array laser imaging target identification method based on the simulated point cloud data set as claimed in claim 1, wherein in Step 7 the interaction model between the virtual scanning system and the simulation scene is specifically as follows: starting from the octree voxels at the shallowest level, a list of voxels intersected by a scanning ray is obtained; for each voxel in the list, the child voxels intersected by the ray are solved to obtain a new list one level deeper, until the final list of voxels intersected by the scanning ray is obtained; the intersection coordinates between the scanning ray and the triangular surface elements in the intersected voxels are then calculated, and when one scanning ray intersects several triangular surface elements, the intersection point closest to the virtual scanning system within its measuring range [t_min, t_max] is taken as the sampling point of that scanning ray in the target scene.
- 3. The linear array laser imaging target identification method based on the simulated point cloud data set according to claim 2, characterized in that whether a scanning ray intersects a voxel is judged as follows: assuming the position coordinate of the virtual scanning system in the ground coordinate system O_wX_wY_wZ_w is O(x_o, y_o, z_o) and the outgoing direction of the scanning ray is (α, β), where α is the angle between the projection of the scanning ray on the plane X_sO_sY_s and the axis O_sX_s, and β is the angle between the scanning ray and the axis O_sZ_s; assuming P is a point on the scanning ray at distance t > 0 from O, and (x_t, y_t, z_t) denotes the coordinate of the point on the scanning ray at distance t from O, the scanning ray equation is: x_t = x_o + t·sinβ·cosα, y_t = y_o + t·sinβ·sinα, z_t = z_o + t·cosβ (1). Judging whether the scanning ray intersects a voxel only requires judging whether any point of the ray lies inside the voxel volume; assuming the lower-left and upper-right corner coordinates of a voxel are (x_min, y_min, z_min) and (x_max, y_max, z_max) respectively, the scanning ray intersects the voxel when there exists t > 0 such that x_min ≤ x_t ≤ x_max, y_min ≤ y_t ≤ y_max and z_min ≤ z_t ≤ z_max hold simultaneously.
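The existence condition in claim 3 is conventionally evaluated with a slab test: intersect the ray's t-interval with each axis-aligned pair of voxel faces and check that the intervals overlap for some t > 0. A sketch using the claim's angle convention (function and parameter names are illustrative):

```python
import math

def ray_intersects_voxel(origin, alpha, beta, box_min, box_max):
    """Slab test for the ray/voxel condition of claim 3.  With the claim's
    angle convention the unit direction is
    d = (sin(beta)cos(alpha), sin(beta)sin(alpha), cos(beta)).
    Returns True when some t > 0 places the ray point inside the voxel."""
    d = (math.sin(beta) * math.cos(alpha),
         math.sin(beta) * math.sin(alpha),
         math.cos(beta))
    t_near, t_far = 0.0, math.inf            # t_near = 0 enforces t > 0
    for o, di, lo, hi in zip(origin, d, box_min, box_max):
        if abs(di) < 1e-12:                  # ray parallel to this slab pair
            if o < lo or o > hi:
                return False
            continue
        t1, t2 = (lo - o) / di, (hi - o) / di
        if t1 > t2:
            t1, t2 = t2, t1
        t_near, t_far = max(t_near, t1), min(t_far, t2)
    return t_near <= t_far
```

Applied per octree node, this is the `hits_box` predicate that drives the voxel-list descent of claim 2.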
- 4. The linear array laser imaging target identification method based on the simulated point cloud data set according to claim 3, characterized in that the intersection coordinates of the scanning ray and a triangular surface element are calculated as follows: the scanning ray has starting point O(x_o, y_o, z_o) and direction (α, β), and its unit direction vector is expressed as D = (sinβ·cosα, sinβ·sinα, cosβ); the vertex coordinates of the triangular surface element are V_1(x_1, y_1, z_1), V_2(x_2, y_2, z_2) and V_3(x_3, y_3, z_3); the following equation is then solved: O + t·D = (1 − u − v)·V_1 + u·V_2 + v·V_3. If the scanning ray and the triangular surface element intersect, the solution simultaneously satisfies t ≥ 0, u ≥ 0, v ≥ 0 and u + v ≤ 1; the intersection coordinates are then solved from the intersection distance t together with equation (1), and for scanning rays whose intersection distance lies within the detection range, the intersection coordinates of the scanning ray and the triangular surface element are returned.
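The parametric equation in claim 4 is the classic Möller-Trumbore formulation, which solves for (t, u, v) with two cross products and three dot products. A self-contained sketch (helper names are illustrative):

```python
def ray_triangle_intersection(O, D, V1, V2, V3, eps=1e-9):
    """Solve O + t*D = (1-u-v)*V1 + u*V2 + v*V3 (claim 4) with the
    Moller-Trumbore algorithm.  Returns the distance t when the solution
    satisfies t >= 0, u >= 0, v >= 0 and u + v <= 1, otherwise None."""
    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    e1, e2 = sub(V2, V1), sub(V3, V1)        # triangle edge vectors
    pvec = cross(D, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:                       # ray parallel to triangle plane
        return None
    inv_det = 1.0 / det
    tvec = sub(O, V1)
    u = dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(D, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv_det
    return t if t >= 0.0 else None
```

Substituting the returned t into equation (1) of claim 3 gives the intersection coordinates, and a hit is kept only when t lies within the detection range [t_min, t_max].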
- 5. The linear array laser imaging target identification method based on the simulated point cloud data set of claim 1, characterized in that in Step 8 the realism of the point cloud is improved by adding Gaussian white noise to the simulated point cloud data, specifically comprising: establishing a noise model related to the lidar parameters, the probability density function P(R) of the lidar range measurement being expressed as P(R) = 1/(√(2π)·δR) · exp(−(R − R_t)²/(2·δR²)), where R_t is the true distance between the lidar and the target, R is the measured distance value and δR is the ranging accuracy; obtaining the ranging accuracy δR of the scanning system from its pulse half-width and signal-to-noise ratio, generating Gaussian-distributed distance error values with mean 0 and standard deviation δR, and adding the distance errors to the ideal distance values to generate simulated point cloud data containing Gaussian white noise.
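Once δR has been fixed from the pulse half-width and SNR, the noise injection of claim 5 reduces to perturbing each ideal range with a zero-mean Gaussian of standard deviation δR. A minimal sketch (names and the flat list of ranges are illustrative):

```python
import random

def add_range_noise(ranges, delta_r, seed=None):
    """Claim 5's noise model as a sketch: each ideal range value receives a
    Gaussian error with mean 0 and standard deviation delta_r (the ranging
    accuracy derived from the pulse half-width and SNR).  `ranges` is a flat
    list of ideal distances; a real implementation would perturb the range
    of every scanning ray before converting it to a 3D point."""
    rng = random.Random(seed)                        # seeded for reproducibility
    return [r + rng.gauss(0.0, delta_r) for r in ranges]

# Perturb three ideal ranges with 5 cm ranging accuracy:
noisy = add_range_noise([100.0, 150.0, 200.0], delta_r=0.05, seed=0)
```

Because the error is applied to the range before the point coordinates are computed via equation (1), the noise correctly spreads along each ray's line of sight rather than isotropically in space.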
Description
Linear array laser imaging target identification method based on simulation point cloud data set
Technical Field
The invention relates to the fields of laser imaging and target detection and identification, in particular to a linear array laser imaging target identification method based on a simulated point cloud data set.
Background
Laser scanning imaging is an active detection technology that uses a laser as the transmitting source and can accurately and rapidly acquire three-dimensional spatial information of a target. Scanning imaging systems based on unmanned aerial vehicle platforms are widely applied in autonomous cruise, intelligent transportation, environment monitoring, military target detection, precision guidance and other fields. Unlike passive sensors such as infrared sensors and visible-light cameras, an array laser scanning imaging system obtains three-dimensional information of a target scene by push-broom scanning; the three-dimensional data provides the actual size and shape structure of the target and yields the position and attitude of the target object, making the approach suitable for fine detection and accurate identification of targets. With the development of deep learning, point cloud target recognition based on deep learning has made important progress; for a data-driven supervised deep learning algorithm, the size and quality of the data set strongly affect algorithm performance, and an acquired data set can serve as effective training data only if it is accurately labeled.
However, the hardware cost of an array laser imaging system is high, the system parameters are difficult to change once determined, the actual sensing environment and detection conditions are hard to reproduce, and manual labeling of experimental data is time-consuming and labor-intensive, so acquiring point cloud data of the actual environment experimentally with a laser imaging system is costly and difficult. Simulating the scanning imaging process of the imaging system by computer, obtaining simulated point cloud data and synthesizing a simulated point cloud data set can effectively solve the problem that experimental point cloud data are difficult to obtain. Current common laser imaging simulation methods include methods based on photon emission, methods based on laser spot simulation and methods based on ray tracing; the former two involve a large amount of computation and generate data slowly, while the latter generates data efficiently but produces unlabeled simulation data, so additional data labeling work is often required.
Disclosure of Invention
Aiming at the problems that training data for deep-learning-based point cloud target recognition are difficult to acquire and that constructing a point cloud data set is costly, the invention provides a linear array laser imaging target recognition method based on a simulated point cloud data set, which provides a rich and accurately labeled point cloud data set for unmanned aerial vehicle target recognition, reduces the cost of data acquisition and labeling in the development of the target recognition algorithm, and realizes linear array laser imaging and accurate target recognition in complex scenes.
The technical scheme of the invention is a linear array laser imaging target identification method based on a simulated point cloud data set, characterized by comprising the following steps:
Step 1, constructing a simulation scene according to a typical working environment of an unmanned aerial vehicle and its target identification requirements: producing a three-dimensional terrain and three-dimensional target models of the simulation scene with three-dimensional modeling software, such that they are consistent with the typical working terrain and real targets of the real environment, and adding obstacles to the simulation scene according to the target identification requirements.
Step 2, establishing a model name-label correspondence index table for the simulation scene according to the labels to be marked, and naming the terrain, target models and obstacles of the simulation scene in the format "name + sequence number", where names of targets of the same category share a common part and the sequence number distinguishes the targets of that category corresponding to the same label.
Step 3, taking the geometric center of the target model as origin O_t and the motion direction of the target model as the O_tX_t axis, establishing a right-handed three-dimensional rectangular coordinate system O_tX_tY_tZ_t.