
CN-119941972-B - Method and system for a generative adversarial point cloud completion network based on multi-view projection contours

CN119941972B

Abstract

The invention discloses a method and a system for a generative adversarial point cloud completion network based on multi-view projection contours. The method comprises the following steps: S1, collecting plant point clouds without a self-occluding structure; S2, performing a non-rigid transformation on the plant point clouds to obtain plant point cloud data with self-occlusion, and adding labels to the plant parts corresponding to the classification of the point cloud; S3, extracting multi-view contour projection images of each plant point cloud and obtaining the occluded and unoccluded regions of the complete point cloud; S4, inputting the incomplete point cloud into a multi-resolution feature encoder and fusing the result into a feature code of the missing region; S5, inputting the feature code of the incomplete point cloud into the generator of the point cloud completion generative adversarial network and predicting the missing region of the point cloud; S6, constraining the prediction result with the multi-view projection contours of the true missing region, then sending the constrained point cloud into the point cloud discriminator and the contour discriminator of the point cloud completion generative adversarial network, and optimizing the network parameters.

Inventors

  • CHEN QINGGUANG
  • LIU JIAJIN
  • ZHANG GUOHAO

Assignees

  • Hangzhou Dianzi University (杭州电子科技大学)

Dates

Publication Date
2026-05-08
Application Date
2024-11-20

Claims (7)

  1. A method for a generative adversarial point cloud completion network based on multi-view projection contours, characterized by comprising the following steps:
     S1, collecting plant point clouds without a self-occluding structure;
     S2, performing a non-rigid transformation on the plant point clouds to obtain plant point cloud data with self-occlusion, and adding labels to the plant parts corresponding to the classification of the point cloud;
     S3, extracting multi-view contour projection images of each plant point cloud and, from the positional relation between the multi-view contour projection images and the virtual image acquisition devices, obtaining the occluded region of the complete point cloud, namely the true missing-region point cloud, and the unoccluded region, namely the incomplete point cloud;
     S4, inputting the incomplete point cloud into a multi-resolution feature encoder and fusing the result into a feature code of the missing region;
     S5, inputting the feature code of the incomplete point cloud into the generator of the point cloud completion generative adversarial network and predicting the missing region of the point cloud;
     S6, constraining the prediction result with the multi-view projection contours of the true missing region, then sending the constrained point cloud into the point cloud discriminator and the contour discriminator of the point cloud completion generative adversarial network for adversarial training so as to optimize the network parameters;
     step S4 specifically comprises:
     S4.1, downsampling the input incomplete point cloud with the IFPS algorithm to 2048, 1024 and 512 points, representing three resolutions from high to low;
     S4.2, extracting features of the point clouds at the different resolutions with a CMLP having multiple fully connected layers;
     S4.3, concatenating the features obtained at the different resolutions, feeding them through a multi-layer perceptron, and finally forming a 1920-dimensional feature vector;
     step S5 specifically comprises:
     S5.1, inputting the 1920-dimensional feature vector obtained by multi-resolution feature encoding into the generator of the point cloud completion generative adversarial network, the generator passing the input feature vector through 4 fully connected linear layers to obtain feature vectors of dimensions 1024, 512, 256 and 256;
     S5.2, passing the 4 feature vectors of different dimensions through different layers of the generator, the 1024-dimensional feature being layer 1, the 512-dimensional feature layer 2, the first 256-dimensional feature layer 3 and the second 256-dimensional feature layer 4;
     S5.3, splicing the outputs of layer 4 and layer 3 to obtain the point cloud that is the low-resolution prediction of the missing part, the numbers of points of the point clouds generated at the different resolutions being related by [formula];
     S5.4, splicing the output of layer 2 with that point cloud to obtain the point cloud that is the medium-resolution prediction of the missing part, the number of points of the final generated point cloud satisfying [formula];
     S5.5, splicing the output of layer 1 with that point cloud to obtain the point cloud that is the high-resolution prediction of the missing part;
     the CMLP, the generator and the discriminators are jointly trained in a supervised manner through a multi-objective loss function [formula] that weights three terms: the CD loss between the point clouds predicted at the 3 resolutions and the true missing-region point clouds, the binary cross-entropy loss between the predicted point cloud and the true missing-region point cloud, and the binary cross-entropy loss between the multi-view projection contours of the predicted point cloud and those of the true missing region; specifically [formula], in which the generated high, low and medium resolution point clouds each incur a CD loss against a ground truth, the low- and medium-resolution CD losses carry their own weights, the lower-resolution ground truths are obtained by IFPS-downsampling the ground truth of the true missing region once and twice respectively, and CD denotes the Chamfer distance between two point clouds; the adversarial term [formula] involves the point cloud discriminator, the generator, the input incomplete point cloud, the true missing-region point cloud and the size of the data set; the binary cross-entropy losses of the projection contours compare the predicted point cloud and the true missing-region point cloud projected from 3 viewpoints at the virtual image acquisition device positions, whose view direction vectors are parallel to the coordinate axes, specifically [formula], in which the predicted and true missing-region point clouds and their respective points appear, and the projection contour discriminator takes two-dimensional image data as input.
  2. The method for a generative adversarial point cloud completion network based on multi-view projection contours as claimed in claim 1, wherein step S1 specifically comprises: S1.1, fixing a turntable, the image acquisition device and a plant without a complex occluding structure, and ensuring that the plant lies within the visible field of view of the acquisition device; S1.2, obtaining the transformation between adjacent viewpoints through structure from motion, and extracting the depth information of the plant from the depth map using a mask of the RGB image; S1.3, obtaining complete point cloud data of the plant without self-occlusion through multi-view stereo imaging.
  3. The method for a generative adversarial point cloud completion network based on multi-view projection contours as claimed in claim 2, wherein in step S1.3 the three-dimensional point cloud data of the current viewpoint are generated from the intrinsic parameters of the depth image acquisition device and the depth image:
     z = d(u, v),  x = (u - c_x) * z / f_x,  y = (v - c_y) * z / f_y
     where x, y and z are the coordinate values of the object in the point cloud coordinate system, f_x and f_y are the horizontal- and vertical-axis focal lengths of the depth image acquisition device, (c_x, c_y) are the principal point coordinates of the depth image, and u and v are the horizontal and vertical pixel coordinates of the depth map.
  4. The method for a generative adversarial point cloud completion network based on multi-view projection contours according to any one of claims 1-3, wherein step S2 specifically comprises: S2.1, for a single plant point cloud without a self-occluding structure, copying, twisting, rotating and pruning the point clouds of some leaves and stems each time to obtain a new complete plant point cloud with a self-occluding structure; S2.2, the complete plant comprising three main parts, including stems and leaves, assigning a corresponding label value to the point cloud of each part; S2.3, repeating steps S2.1 and S2.2 multiple times to obtain a complete plant point cloud data set of the required size.
  5. The method for a generative adversarial point cloud completion network based on multi-view projection contours according to claim 2 or 3, wherein in step S3 the distance between the image acquisition device and the plant is determined from the position of the image acquisition device and the intrinsic parameters recorded when the data were acquired in step S1; the positions of the multi-view virtual image acquisition devices in the point cloud coordinate system are constructed with that distance as radius; and, according to the viewpoint of each virtual image acquisition device, the point cloud of the occluded region of the same plant, namely the true missing region, and the point cloud of the unoccluded region, namely the incomplete point cloud, are determined.
  6. The method for a generative adversarial point cloud completion network based on multi-view projection contours according to any one of claims 1-3, wherein step S6 specifically comprises: S6.1, according to the multi-view projection contour image of the missing-region ground truth at the current viewpoint, treating points of the predicted missing-region point cloud that fall outside the projection contour as erroneously generated points and setting their values to 0 so as to constrain the generation; S6.2, sending the constrained predicted point cloud into the point cloud discriminator and the contour discriminator of the point cloud completion generative adversarial network for adversarial-loss training to optimize the network parameters.
  7. A system for a generative adversarial point cloud completion network based on multi-view projection contours, for performing the method of any one of claims 1-6, comprising the following modules: a self-occlusion-free plant point cloud acquisition module, which collects plant point clouds without a self-occluding structure; a self-occluding plant point cloud data acquisition module, which performs a non-rigid transformation on the plant point clouds, obtains plant point cloud data with self-occlusion according to the biological structure of the plant, and adds labels to the plant parts corresponding to the classification of the point cloud; a multi-view contour projection image extraction module, which extracts the multi-view contour projection images of each plant point cloud and, from the positional relation between the multi-view contour projection images and the virtual image acquisition devices, obtains the occluded region of the complete point cloud, namely the true missing-region point cloud, and the unoccluded region, namely the incomplete point cloud; a multi-resolution feature encoding module, which inputs the incomplete point cloud into a multi-resolution feature encoder and fuses the result into a feature code of the missing region; a missing-region prediction module, which inputs the feature code of the incomplete point cloud into the generator of the point cloud completion generative adversarial network and predicts the missing region of the point cloud; and an adversarial training module, which constrains the prediction result with the multi-view projection contours of the true missing region, sends the constrained point cloud into the point cloud discriminator and the contour discriminator of the point cloud completion generative adversarial network for adversarial training, and optimizes the network parameters.
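The IFPS downsampling of step S4.1 and the multi-resolution CD loss of claim 1 can be sketched in NumPy. This is a minimal illustration, not the patent's implementation: IFPS is rendered here as plain farthest point sampling, and the loss weights and the noisy "predictions" are invented for the example (the patent's weight values are not given in the text).

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedily pick k points of an (N, 3) cloud that are maximally
    spread out: each step takes the point farthest from those chosen."""
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    chosen = np.empty(k, dtype=np.int64)
    chosen[0] = rng.integers(n)
    # squared distance from every point to its nearest chosen point
    dist = np.sum((points - points[chosen[0]]) ** 2, axis=1)
    for i in range(1, k):
        chosen[i] = np.argmax(dist)
        dist = np.minimum(dist, np.sum((points - points[chosen[i]]) ** 2, axis=1))
    return points[chosen]

def chamfer_distance(a, b):
    """Symmetric Chamfer distance (CD) between clouds (N, 3) and (M, 3)."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)  # (N, M)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

rng = np.random.default_rng(1)

# S4.1: the three encoder resolutions of the incomplete input cloud.
incomplete = rng.normal(size=(4096, 3))
levels = [farthest_point_sampling(incomplete, k) for k in (2048, 1024, 512)]

# Multi-resolution CD loss: lower-resolution ground truths come from
# downsampling the true missing-region cloud once and twice.
gt_full = rng.normal(size=(1024, 3))
gt_mid = farthest_point_sampling(gt_full, 512)
gt_low = farthest_point_sampling(gt_mid, 256)
pred_full = gt_full + 0.01 * rng.normal(size=gt_full.shape)  # stand-in predictions
pred_mid = gt_mid + 0.01 * rng.normal(size=gt_mid.shape)
pred_low = gt_low + 0.01 * rng.normal(size=gt_low.shape)
w_mid, w_low = 0.5, 0.25  # illustrative weights only
loss_cd = (chamfer_distance(pred_full, gt_full)
           + w_mid * chamfer_distance(pred_mid, gt_mid)
           + w_low * chamfer_distance(pred_low, gt_low))
```

A real training loop would add the adversarial and contour binary cross-entropy terms of claim 1 on top of this geometric loss.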

Description

Method and system for a generative adversarial point cloud completion network based on multi-view projection contours

Technical Field

The invention belongs to the technical field of three-dimensional phenotype reconstruction of agricultural plants, and particularly relates to a plant point cloud completion method based on multi-view contour constraints of a generative adversarial network.

Background

In recent years, three-dimensional digitizing technology has shown important application prospects in fields such as medicine, military aviation and agriculture. In particular, digital agriculture has become an important trend in the development of agriculture in China. Three-dimensional reconstruction of plants can obtain three-dimensional phenotypic characteristics under various practical conditions, including height, width, leaf inclination angle, leaf area and canopy volume. These features reflect not only the genetic characteristics of crops but also the influence of their growth environment and field management. The technology therefore has important application value in fields such as plant breeding, crop growth management and virtual visualization. Traditional three-dimensional phenotype measurement of plants relies mainly on manual measurement; limited by the precision of the measuring tools and the labour required, it is time-consuming, labour-intensive and often inaccurate. Constructing a digital three-dimensional model of the plant makes it possible to obtain plant phenotype information quickly, accurately and efficiently.
The structure-from-motion method is a common way to reconstruct a three-dimensional plant model: after the plant is fixed at a chosen position, a camera collects colour images of the static plant from different viewpoints, and the three-dimensional structural data of the plant are obtained by extracting image feature points, matching features, and adjusting and optimizing the matching strategy. However, the feature extraction, feature matching and optimization steps are precisely what makes this method computationally heavy and slow to reconstruct. Acquisition of the three-dimensional plant structure depends on depth information. An RGB-D camera is a three-dimensional sensor that combines a colour image sensor with a depth image sensor and extracts the depth of a target point based on the TOF (time-of-flight) principle: a laser is actively projected onto the measured object, the camera sensor receives the laser diffusely reflected from the object surface, and the corresponding distance, taken as the depth, is calculated from the time difference between emission and reception. This active-laser approach to depth acquisition offers strong anti-interference capability, high stability and high reconstruction speed, and is widely applied in fields such as target grasping, three-dimensional navigation and three-dimensional reconstruction.
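The depth-to-point-cloud conversion this depth information feeds (formalized in claim 3) is the standard pinhole back-projection. A minimal NumPy sketch, with toy intrinsics chosen only for illustration:

```python
import numpy as np

def depth_to_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (H, W) into camera-frame 3D points using
    the pinhole model: z = d(u, v), x = (u-cx)*z/fx, y = (v-cy)*z/fy."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]          # v = pixel row, u = pixel column
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop invalid (zero-depth) pixels

# Toy 2x2 depth map in metres; fx, fy, cx, cy are hypothetical intrinsics.
depth = np.array([[1.0, 1.0],
                  [0.0, 2.0]])        # one invalid pixel
pts = depth_to_cloud(depth, fx=500.0, fy=500.0, cx=1.0, cy=1.0)
```

Merging such per-view clouds under the view-to-view transforms from structure from motion yields the complete model described in the next paragraph.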
In the actual data acquisition process, the RGB-D camera is fixed, three-dimensional point cloud data at different viewpoints are obtained by rotating the object under test, and the point clouds from the different viewpoints are registered and fused into a complete three-dimensional point cloud model of the object. In plant reconstruction, because the leaves and stems vary widely in distribution, size, position and orientation, the challenge posed by this diversity is usually met by collecting images at fixed rotation intervals, which requires processing a large amount of point cloud data and does so blindly. Acquisition from a single viewpoint loses information about the internal structure of the plant because that structure occludes itself, and, owing to sensor precision and similar factors, even unoccluded regions may lose some information. To guarantee the accuracy of three-dimensional plant phenotype measurement, the internal structure of the plant must therefore be predicted, completed and reconstructed to obtain complete, dense plant point cloud data and to improve reconstruction efficiency and performance.

Disclosure of the Invention

To solve the problems of the prior art, the invention provides a method and a system for predicting and completing three-dimensional plant point clouds based on a multi-view-projection-contour generative adversarial point cloud completion network.
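The contour constraint at the heart of this completion scheme (step S6.1 of the claims) can be sketched as follows. This is an illustrative NumPy version under simplifying assumptions: an orthographic projection along one coordinate axis (matching the claim's axis-parallel view directions), a binary silhouette image, and invented grid parameters `mins` and `pix` mapping world coordinates to pixels.

```python
import numpy as np

def contour_constrain(pred, contour_mask, mins, pix, axis=2):
    """Zero out predicted points whose projection along `axis` falls
    outside the ground-truth silhouette `contour_mask` (binary H x W),
    mirroring step S6.1: out-of-contour points are treated as errors."""
    dims = [d for d in range(3) if d != axis]          # the two kept axes
    ij = np.floor((pred[:, dims] - mins) / pix).astype(int)
    h, w = contour_mask.shape
    inside = (ij[:, 0] >= 0) & (ij[:, 0] < h) & (ij[:, 1] >= 0) & (ij[:, 1] < w)
    keep = np.zeros(len(pred), dtype=bool)
    keep[inside] = contour_mask[ij[inside, 0], ij[inside, 1]] > 0
    out = pred.copy()
    out[~keep] = 0.0                                   # erroneous points set to 0
    return out

mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1                                     # silhouette in the centre
pred = np.array([[1.5, 1.5, 0.3],                      # projects inside -> kept
                 [3.5, 0.5, 0.1]])                     # projects outside -> zeroed
out = contour_constrain(pred, mask, mins=np.array([0.0, 0.0]), pix=1.0)
```

In the method of the claims this constraint is applied per view before the constrained cloud is passed to the two discriminators for adversarial training.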