CN-121563975-B - Aggregate grading and needle-shaped detection method for fusing image and point cloud
Abstract
The invention discloses an aggregate grading and needle-flake detection method fusing images and point clouds, and relates to the technical field of aggregate grading detection. The method acquires an image and point cloud data of the aggregate with an image data acquisition device; combines a Mask R-CNN deep learning model with a sub-pixel segmentation algorithm to achieve sub-pixel segmentation of the aggregate image; calculates a conversion matrix from the correspondence between the image and the point cloud of a calibration block, converting the aggregate from the image coordinate system to the world coordinate system; segments the aggregate point cloud based on the aggregate image; fits the aggregate point cloud with a minimum bounding cuboid to obtain the geometric dimensions of the aggregate and thereby judge needle- and flake-shaped aggregate; and reconstructs a mesh model of the aggregate point cloud to calculate the aggregate grading. By adopting a point cloud segmentation method based on the aggregate image, the invention achieves rapid segmentation of the aggregate point cloud even when the aggregate is densely packed, and judges needle-shaped aggregate by combining image and point cloud, so that detection precision and reliability are improved over purely image-based methods.
Inventors
- Lan Fuan
- Xu Tianbing
- Hu Guotao
- Kou Chunyang
- Quan Xiaoliang
- Liu Yong
- Bai Hao
- Bu He
- Zhou Qiang
- Hu Xiaoyuan
- Li Xin
- Liu Kaiwen
Assignees
- 四川高速公路建设开发集团有限公司 (Sichuan Expressway Construction and Development Group Co., Ltd.)
Dates
- Publication Date
- 2026-05-08
- Application Date
- 2026-01-21
Claims (3)
- 1. An aggregate grading and needle-flake detection method fusing an image and a point cloud, characterized by comprising the following steps:
  S1, acquiring a first aggregate image and a first aggregate point cloud of the aggregate to be detected through an image data acquisition device;
  S2, segmenting the first aggregate image into a second aggregate image with a Mask R-CNN-based deep learning model to obtain first edge points;
  S3, acquiring the image and point cloud data of a calibration block through the image data acquisition device, identifying the corner points of the calibration block in the image and the point cloud data, and calculating an optimal conversion matrix from the image coordinate system to the point cloud coordinate system from the corresponding image and point cloud point pairs of the calibration block;
  S4, fitting a spline curve to the third edge points to obtain a closed plane, stretching the closed plane along the elevation direction to obtain an enclosure, and dividing the first aggregate point cloud by the enclosure to obtain a second aggregate point cloud;
  S5, reconstructing a mesh model of the second aggregate point cloud to obtain the volume of each aggregate, and calculating the grading from the aggregate volumes;
  the step S2 specifically includes:
  S21, segmenting the first aggregate image with the Mask R-CNN deep learning model to obtain the second aggregate image, converting the second aggregate image into a binary image, and detecting pixel-level edge points, namely the first edge points, with a Canny operator;
  S22, extracting the color image within the first aggregate image as a third aggregate image according to the second aggregate image, and obtaining sub-pixel edge points, namely second edge points, of each aggregate by applying a Zernike-moment sub-pixel segmentation algorithm to the third aggregate image;
  the step S3 specifically includes:
  S31, acquiring a plurality of checkerboard images with the structured light camera, and calculating the radial distortion coefficients k1, k2, the tangential distortion coefficients p1, p2, p3 and the internal reference matrix K of the structured light camera by the Zhang Zhengyou calibration method, where K is given by:
  K = [ f/dx   0      u0 ]
      [ 0      f/dy   v0 ]
      [ 0      0      1  ]
  where f is the focal length, dx and dy are the pixel dimensions in the x-direction and the y-direction respectively, and u0, v0 are the x- and y-coordinates of the principal point;
  S32, placing the calibration block within the image acquisition area where the first aggregate image is located, and acquiring the image and point cloud data of the calibration block with the structured light camera;
  S33, for the acquired image of the calibration block, obtaining the first corner coordinates (u_ik, v_ik) of the k-th corner of the i-th calibration block with the Harris corner detection algorithm;
  S34, manually dividing the acquired point cloud data of the calibration block into second calibration block point clouds, fitting the second calibration block point clouds with the RANSAC algorithm to obtain the edge-line equations of the calibration block, and solving the intersection of any two edge lines of each calibration block from the edge-line equations to obtain the point cloud corner coordinates (X_ik, Y_ik, Z_ik) of the k-th corner of the i-th calibration block;
  S35, performing distortion correction on the first corner coordinates (u_ik, v_ik) of all calibration blocks with the coefficients obtained in S31 to obtain corrected corner coordinates (u'_ik, v'_ik); converting the point cloud corner coordinates (X_ik, Y_ik, Z_ik) of all calibration blocks into transformed corner coordinates (u''_ik, v''_ik) in the image coordinate system uov through the coordinate conversion relation
  s [u'', v'', 1]^T = K' [R | T] [X, Y, Z, 1]^T
  where s is a scale factor, R and T are the rotation and translation matrices respectively, and K' is the internal reference matrix expanded with the distortion-corrected principal point coordinates; taking the transformation that minimizes the reprojection error between the corrected and transformed corner coordinates as the optimal conversion matrix, the optimization objective function being
  min_{s,R,T} (1/N) sum_{i,k} ||(u'_ik, v'_ik) - (u''_ik, v''_ik)||^2 < eps
  where N is the number of point pairs and the error threshold eps is 1 x 10^-6; and converting the second edge points of the aggregate into third edge points in the point cloud coordinate system with the optimal conversion matrix;
  S36, converting the pixel coordinates of the calibration block corners into the point cloud coordinate system, calculating the computed side length L'_i of each calibration block, and taking the maximum difference between L'_i and the true side length L as the calibration error E for quantitative evaluation of the coordinate conversion error, calculated as E = max_i |L'_i - L|;
  the step S4 specifically comprises:
  S41, taking the Z coordinate of the plane of the aggregate laying platform as the Z coordinate of the third edge points, thereby obtaining fourth edge point coordinates (x_i, y_i, z_i);
  S42, fitting a spline curve to the fourth edge points to obtain a closed curve, generating a closed plane from the area enclosed by the closed curve, stretching the closed plane along the Z direction to form an enclosure, and dividing the second aggregate point cloud from the first aggregate point cloud with the enclosure;
  S43, merging the third edge points and the second aggregate point cloud into a third aggregate point cloud;
  S44, fitting the third aggregate point cloud with a minimum bounding cuboid to obtain the long diameter l_e, the short diameter d_e and the thickness h_e of the cuboid, the larger of the short diameter d_e and the thickness h_e being taken as the particle size, i.e. d_s = max(d_e, h_e);
  S45, judging needle- and flake-shaped aggregate from the long diameter l_e, the thickness h_e and the particle size d_s, the judging rule being: if l_e / d_s > 2.4 the aggregate is judged needle-shaped; if h_e / d_s < 0.4 the aggregate is judged flake-shaped; in all other cases the aggregate is judged qualified;
  the step S5 specifically comprises:
  S51, converting the pixel points inside the closed curve into a point cloud of the bottom of the aggregate through the conversion matrix, and merging it with the third aggregate point cloud into a complete fourth aggregate point cloud;
  S52, reconstructing an aggregate mesh model from the fourth aggregate point cloud with the alpha-shape algorithm to obtain the volume V and the surface area S of the aggregate;
  S53, calculating the aggregate grading curve from the aggregate volumes according to
  P(d_j) = (rho * sum_{D_i < d_j} V_i) / (rho * V_a) x 100% = (sum_{D_i < d_j} V_i) / V_a x 100%
  where d_j denotes the sieve-hole size, P(d_j) denotes the percentage of the mass of aggregate whose particle size D is smaller than the sieve-hole size d_j in the total aggregate mass of a single batch, n denotes the number of aggregates, V_i denotes the volume of the i-th aggregate whose particle size D is smaller than the sieve-hole size, V_a denotes the total aggregate volume of the single batch, and rho denotes the aggregate density;
  S54, calculating the proportion of needle- and flake-shaped aggregate from the aggregate volumes according to
  PZ = (V_e + V_f) / V_a x 100%
  where PZ is the proportion of needle- and flake-shaped aggregate, and V_e, V_f are the volumes of the needle-shaped and flake-shaped aggregates respectively.
- 2. The aggregate grading and needle-flake detection method fusing images and point clouds according to claim 1, wherein the image data acquisition device comprises a structured light camera, a conveyor belt, a PLC, a stepping motor and a computer; the structured light camera is arranged directly above the conveyor belt, the area of the conveyor belt directly below the structured light camera is a fixed image acquisition area, and the structured light camera is used for acquiring the point cloud data and the RGB image data of the aggregate.
- 3. The aggregate grading and needle-flake detection method fusing an image and a point cloud according to claim 1, wherein the step S4 comprises:
  calculating the centroid coordinates C_2 = (mean(x_2), mean(y_2)) of the third edge points in the point cloud coordinate system, where x_2 and y_2 are the x- and y-coordinates of the third edge points;
  dividing the first aggregate point cloud into individual aggregate point clouds with the PointNet method to obtain the second aggregate point cloud, and calculating its centroid coordinates C_3 = (mean(x_3i), mean(y_3i), mean(z_3i)), where x_3i, y_3i and z_3i are the x-, y- and z-coordinates of the points of the second aggregate point cloud;
  judging whether the distance d_c = ||C_2 - C_3|| between the centroid coordinates of the third edge points and the centroid coordinates of the second aggregate point cloud is smaller than the error value eps_c; when d_c is smaller than eps_c, the edge points and the second aggregate point cloud belong to the same aggregate, i.e. the edge points and the point cloud of one aggregate are obtained; the error value eps_c between centroid points is 0.001 m;
  obtaining the long axis L_e and the short axis S_e of the aggregate by fitting a minimum bounding rectangle to the third edge points in the point cloud coordinate system, and obtaining the thickness H_e of the aggregate from the second aggregate point cloud, so that the particle size of the aggregate is d = max(S_e, H_e);
  judging needle- and flake-shaped aggregate: when L_e / d is greater than 2.4 the aggregate is needle-shaped, and when H_e / d is less than 0.4 the aggregate is flake-shaped.
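The shape rule in the claims (particle size taken as the larger of the short dimension and the thickness; needle-shaped above a ratio of 2.4, flaky below 0.4, per claim 3) can be sketched as follows. This is a minimal illustration; the function and variable names are ours, not the patent's:

```python
def classify_aggregate(long_axis, short_axis, thickness):
    """Classify one aggregate particle per the rule in claim 3.

    The particle size d is the larger of the short axis and the thickness.
    A particle is needle-shaped when long_axis / d > 2.4 and flaky when
    thickness / d < 0.4 (thresholds as stated in claim 3); anything else
    is a qualified particle.
    """
    d = max(short_axis, thickness)
    if long_axis / d > 2.4:
        return "needle"
    if thickness / d < 0.4:
        return "flaky"
    return "qualified"

# Dimensions in millimetres, purely illustrative.
print(classify_aggregate(30, 10, 9))   # elongated particle
print(classify_aggregate(12, 10, 3))   # thin particle
print(classify_aggregate(15, 10, 8))   # well-shaped particle
```

Note that when the thickness is the larger of the two candidate dimensions, the flaky test can never fire, so the order of the two checks does not matter.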
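The grading computation of steps S53 and S54 reduces to volume ratios, since mass = density x volume and the density cancels between numerator and denominator. A minimal sketch under that reading (names are illustrative, not from the patent):

```python
def grading_curve(volumes, sizes, sieve_sizes):
    """Percentage passing each sieve, computed from per-particle volumes.

    volumes[i] and sizes[i] are the volume and particle size of the i-th
    aggregate; for each sieve-hole size, sum the volume of all particles
    whose size is below it and divide by the total batch volume (the
    density term of S53 cancels out).
    """
    total = sum(volumes)
    return [100.0 * sum(v for v, d in zip(volumes, sizes) if d < s) / total
            for s in sieve_sizes]

def needle_flake_ratio(v_needle, v_flaky, v_total):
    """PZ in S54: volume share of needle- plus flake-shaped aggregate."""
    return 100.0 * (v_needle + v_flaky) / v_total

# Three particles, three sieves; units arbitrary but consistent.
print(grading_curve([1.0, 2.0, 3.0], [5, 10, 20], [8, 15, 30]))
print(needle_flake_ratio(2.0, 3.0, 50.0))
```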
Description
Aggregate grading and needle-shaped detection method for fusing image and point cloud
Technical Field
The invention relates to the technical field of aggregate grading detection, in particular to a method for detecting aggregate grading and needle- and flake-shaped aggregate by fusing images and point clouds.
Background
Cement concrete and asphalt concrete are the main materials for building structures and pavements, and their properties determine the durability and reliability of the structure. Aggregate forms the skeleton of cement concrete and asphalt concrete, and its grading and shape characteristics strongly influence workability and bearing capacity. The traditional aggregate grading detection method is mainly the sieving method; although its detection precision is relatively high at first, it is inefficient and labor-intensive, and the sieve holes wear with long-term use, reducing detection accuracy over time. Among shape characteristics, needle- and flake-shaped aggregate is the main concern, as it seriously impairs the fluidity of cement concrete and the compaction performance of asphalt concrete. In the conventional detection method, an inspector measures the length, width and thickness of each aggregate particle with a vernier caliper to judge needle- and flake-shaped aggregate, which is likewise inefficient. Rapid, high-precision detection of aggregate grading and shape characteristics is therefore an urgent problem.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an aggregate grading and needle-flake detection method fusing the image and the point cloud, so that aggregate grading and needle- and flake-shaped aggregate can be detected rapidly and accurately.
The aim of the invention is achieved by the following technical scheme: an aggregate grading and needle-flake detection method fusing an image and a point cloud comprises the following steps: S1, acquiring a first aggregate image and a first aggregate point cloud of the aggregate to be detected through an image data acquisition device; S2, segmenting the first aggregate image into a second aggregate image with a Mask R-CNN-based deep learning model to obtain first edge points; S3, acquiring the image and point cloud data of a calibration block through the image data acquisition device, identifying the corner points of the calibration block in the image and the point cloud data, and calculating an optimal conversion matrix from the image coordinate system to the point cloud coordinate system from the corresponding image and point cloud point pairs of the calibration block; S4, fitting a spline curve to the third edge points to obtain a closed plane, stretching the closed plane along the elevation direction to obtain an enclosure, and dividing the first aggregate point cloud by the enclosure to obtain a second aggregate point cloud; S5, reconstructing a mesh model of the second aggregate point cloud to obtain the volume of each aggregate, and calculating the grading from the aggregate volumes. Further, the image data acquisition device comprises a structured light camera, a conveyor belt, a PLC, a stepping motor and a computer; the structured light camera is arranged directly above the conveyor belt, the area of the conveyor belt directly below the structured light camera is a fixed image acquisition area, and the structured light camera is used for acquiring the point cloud data and the RGB image data of the aggregate.
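The enclosure-based splitting of step S4 amounts to a 2-D point-in-polygon test on the closed edge curve, extruded along Z. A minimal sketch under that reading, using an even-odd crossing test (the function names and the polygon representation are illustrative, not from the patent):

```python
def point_in_polygon(x, y, poly):
    """Even-odd rule: is (x, y) inside the closed 2-D outline `poly`?

    `poly` is a list of (x, y) vertices of the closed edge curve.
    """
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal line at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def segment_by_enclosure(points, outline, z_min, z_max):
    """Keep the points that fall inside the outline extruded along Z."""
    return [(x, y, z) for x, y, z in points
            if z_min <= z <= z_max and point_in_polygon(x, y, outline)]

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
cloud = [(5, 5, 1), (15, 5, 1), (5, 5, 20)]
print(segment_by_enclosure(cloud, square, 0, 10))  # only the first point survives
```

A production pipeline would vectorize this over a NumPy array or use a spatial library, but the geometry is the same.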
Further, the step S2 specifically includes: S21, segmenting the first aggregate image with the Mask R-CNN deep learning model to obtain a second aggregate image, converting the second aggregate image into a binary image, and detecting pixel-level edge points, namely the first edge points, with a Canny operator; S22, extracting the color image within the first aggregate image as a third aggregate image according to the second aggregate image, and obtaining sub-pixel edge points, namely second edge points, of each aggregate by applying a Zernike-moment sub-pixel segmentation algorithm to the third aggregate image. Further, the step S3 specifically includes: S31, acquiring a plurality of checkerboard images with the structured light camera, and calculating the radial distortion coefficients k1, k2, the tangential distortion coefficients p1, p2, p3 and the internal reference matrix K of the structured light camera by the Zhang Zhengyou calibration method, where K is given by:
K = [ f/dx   0      u0 ]
    [ 0      f/dy   v0 ]
    [ 0      0      1  ]
where f is the focal length, dx and dy are the pixel dimensions in the x-direction and the y-direction respectively
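The camera model behind S31 and the later corner conversion can be sketched as follows, assuming the standard Brown-Conrady distortion form with radial coefficients k1, k2 and tangential coefficients p1, p2 (the text also lists a p3 coefficient whose role it does not recover, so it is omitted here); all names are illustrative:

```python
import numpy as np

def distort_and_project(xy_norm, K, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) lens distortion to
    normalized camera coordinates, then project to pixel coordinates
    with the intrinsic matrix K from a Zhang-style calibration."""
    x, y = xy_norm
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    # Brown-Conrady distorted normalized coordinates
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    # Homogeneous projection through the intrinsic matrix
    u, v, w = K @ np.array([x_d, y_d, 1.0])
    return u / w, v / w

# Illustrative intrinsics: f/dx = f/dy = 800 px, principal point (320, 240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
print(distort_and_project((0.1, 0.2), K, 0, 0, 0, 0))  # undistorted case
```

With all coefficients zero the mapping reduces to u = (f/dx) x + u0, v = (f/dy) y + v0, which matches the K matrix given above.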