CN-122017863-A - Sensor for intersecting optical paths, processing method, device, equipment and medium
Abstract
The disclosure relates to the field of optical measurement and provides a crossed-optical-path sensor together with a processing method, device, equipment, and medium. The method comprises: obtaining a first image collected by a first image collector and a second image collected by a second image collector; processing the first image and the second image respectively to determine a first light-spot stripe corresponding to the first image and a second light-spot stripe corresponding to the second image; determining a first sub-pixel in the first image and a second sub-pixel in the second image according to the first and second light-spot stripes; performing coordinate conversion on the first and second sub-pixels respectively, and fusing the coordinate-converted sub-pixels to obtain point cloud data of the surface contour of the target object; and determining relevant parameters of the target object according to the point cloud data. With this method and device, the two image paths of the target object can be processed rapidly and accurate parameters obtained.
Inventors
- JIANG KEJIN
- LIU HONGCAI
Assignees
- 深矢科技(深圳)有限公司
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2026-02-28
Claims (19)
- 1. A crossed-optical-path sensor, characterized by comprising a first laser, a second laser, a first image collector, a second image collector and a controller; the first laser and the second laser are arranged in parallel and emit laser light in the same direction; the first laser is matched with the first image collector, and the first image collector is used for collecting a first image of the first laser irradiating the surface of a target object; the second laser is matched with the second image collector, and the second image collector is used for collecting a second image of the second laser irradiating the surface of the target object; the collection fields of view of the first image collector and the second image collector intersect, and a first included angle between the optical axis of the first laser and the optical axis of the first image collector and a second included angle between the optical axis of the second laser and the optical axis of the second image collector both fall within a preset included-angle range; the controller is connected with the first laser, the second laser, the first image collector and the second image collector, and is used for controlling the first laser and the second laser to emit laser light simultaneously and controlling the first image collector and the second image collector to collect images simultaneously; the first image collector is arranged between the controller and the second laser, and the second image collector is arranged between the controller and the first laser.
- 2. The crossed-optical-path sensor of claim 1, further comprising a housing for enclosing the first laser, the second laser, the first image collector, the second image collector and the controller; the housing is provided with a first light-exit hole and a second light-exit hole, the first light-exit hole being positioned at the light-exit position of the first laser and the second light-exit hole being positioned at the light-exit position of the second laser.
- 3. An image processing method applied to the crossed-optical-path sensor of claim 1 or 2, comprising: acquiring a first image collected by the first image collector and a second image collected by the second image collector; processing the first image and the second image respectively, and determining a first light-spot stripe corresponding to the first image and a second light-spot stripe corresponding to the second image; determining a first sub-pixel in the first image and a second sub-pixel in the second image according to the first light-spot stripe and the second light-spot stripe; performing coordinate conversion on the first sub-pixel and the second sub-pixel respectively, and fusing the coordinate-converted first sub-pixel and second sub-pixel to obtain point cloud data of the surface profile of the target object; and determining relevant parameters of the target object according to the point cloud data.
- 4. The method of claim 3, wherein processing the first image and the second image respectively comprises: determining a first gray entropy corresponding to the first image and a second gray entropy corresponding to the second image; determining a joint entropy according to the first gray entropy and the second gray entropy; determining a target fitting parameter based on the joint entropy and a pre-established correspondence, wherein the correspondence characterizes the relation between entropy and fitting parameters; and performing image fitting on the first image and the second image respectively according to the target fitting parameter.
- 5. The method of claim 4, wherein determining the first gray entropy corresponding to the first image and the second gray entropy corresponding to the second image comprises: for the first image or the second image, determining the total number of pixels of an effective area in the image and the gray level of each pixel in the effective area; determining a gray-level histogram of the image according to the gray level of each pixel; determining the probability distribution of each gray level according to the gray-level histogram; and determining the gray entropy corresponding to the image according to the probability of each gray level (an illustrative sketch of this computation follows the claims).
- 6. The method of claim 4, wherein determining the joint entropy from the first gray entropy and the second gray entropy comprises: determining the mean of the first gray entropy and the second gray entropy, and a difference weight between the first gray entropy and the second gray entropy; and determining the joint entropy according to the mean and the difference weight.
- 7. The method of claim 4, wherein determining the target fitting parameter based on the joint entropy and the pre-established correspondence comprises: determining an initial fitting parameter according to the joint entropy and the pre-established correspondence; determining an overlapping region of the first image and the second image; fitting the overlapping regions respectively using the initial fitting parameter to determine a fitting error; and in response to determining that the fitting error is greater than a preset value, adjusting the initial fitting parameter and fitting again with the adjusted fitting parameter until the fitting error is less than or equal to the preset value (a generic refinement loop is sketched after the claims).
- 8. The method of claim 7, wherein determining the target fitting parameter based on the joint entropy and the pre-established correspondence further comprises: determining abnormal regions in the first image and the second image; and in response to determining that the fitting error is less than or equal to the preset value, respectively adjusting the fitting parameters corresponding to the abnormal regions to obtain the target fitting parameter.
- 9. The method of claim 8, wherein determining the abnormal regions in the first image and the second image comprises: determining a gray-gradient threshold according to the first image and the second image; for the first image or the second image, performing sliding-window scanning on the image and determining the gray value of each window; and determining an abnormal region in the image according to the gray values and the gray-gradient threshold (see the window-scan sketch after the claims).
- 10. The method of claim 9, wherein adjusting the fitting parameters corresponding to the abnormal regions to obtain the target fitting parameter comprises: determining the degree of abnormality of each abnormal region according to the gray value, the gray-gradient threshold and a preset range coefficient; and for each abnormal region, adjusting the fitting parameter of the abnormal region according to its degree of abnormality.
- 11. The method of claim 10, wherein adjusting the fitting parameters corresponding to the abnormal regions to obtain the target fitting parameter further comprises: comparing the abnormal regions in the first image with the abnormal regions in the second image to determine a common abnormal region of the two images; and keeping the target fitting parameters corresponding to the common abnormal region consistent.
- 12. The method of claim 8, further comprising: for each pixel, determining a synergy coefficient of the first image and the second image according to the first gray entropy and the second gray entropy of the pixel; and adjusting, according to the synergy coefficient, the fitting parameter corresponding to the pixel in the first image and the fitting parameter corresponding to the pixel in the second image to obtain the target fitting parameter.
- 13. The method of claim 8, further comprising: fitting the first image and the second image according to the target fitting parameter, and determining the post-fitting error of the first image and the second image; and in response to determining that the error meets a preset condition, dynamically adjusting the target fitting parameter in real time.
- 14. The method of claim 3, wherein determining the first sub-pixel in the first image and the second sub-pixel in the second image according to the first light-spot stripe and the second light-spot stripe comprises: determining a first center stripe of the first light-spot stripe and a second center stripe of the second light-spot stripe; determining the first sub-pixel according to the first center stripe; and determining the second sub-pixel according to the second center stripe; wherein determining the first center stripe and the second center stripe comprises: for the first light-spot stripe or the second light-spot stripe, performing window scanning on the light-spot stripe and determining the pixel coordinates and corresponding gray values in each window; and determining the center stripe of the light-spot stripe according to the pixel coordinates and the corresponding gray values (a centroid-based sketch follows the claims).
- 15. The method of claim 3, wherein performing coordinate conversion on the first sub-pixel and the second sub-pixel respectively comprises: performing distortion correction on the coordinates of the first sub-pixel and the coordinates of the second sub-pixel to obtain a first ideal pixel coordinate corresponding to the first sub-pixel and a second ideal pixel coordinate corresponding to the second sub-pixel; and converting the first ideal pixel coordinate and the second ideal pixel coordinate from the image coordinate system to the laser-plane coordinate system; and wherein fusing the coordinate-converted first sub-pixel and second sub-pixel to obtain the point cloud data of the surface profile of the target object comprises: normalizing the coordinates of the first sub-pixel and the second sub-pixel in a world coordinate system; and fusing the normalized first sub-pixel and second sub-pixel to obtain the point cloud data (see the conversion-and-fusion sketch after the claims).
- 16. The method of claim 3, wherein determining the relevant parameters of the target object according to the point cloud data comprises: determining at least one characteristic point of the target object and its coordinates according to the point cloud data; and determining the relevant parameters of the target object according to the coordinates of each characteristic point.
- 17. An image processing apparatus comprising: an image acquisition unit configured to acquire a first image collected by a first image collector and a second image collected by a second image collector; an image processing unit configured to process the first image and the second image respectively and determine a first light-spot stripe corresponding to the first image and a second light-spot stripe corresponding to the second image; a pixel determination unit configured to determine a first sub-pixel in the first image and a second sub-pixel in the second image according to the first light-spot stripe and the second light-spot stripe; a point cloud determination unit configured to perform coordinate conversion on the first sub-pixel and the second sub-pixel respectively and fuse the coordinate-converted first sub-pixel and second sub-pixel to obtain point cloud data of the surface profile of the target object; and a parameter determination unit configured to determine relevant parameters of the target object according to the point cloud data.
- 18. An electronic device comprising a memory, a processor, a bus and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the image processing method of any one of claims 3 to 16.
- 19. A non-transitory computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the image processing method of any one of claims 3 to 16.
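A minimal Python sketch, for illustration only, of the gray-entropy computation referenced in claims 5 and 6. The 256-bin histogram, the boolean `mask` marking the effective area, and the `diff_weight` combination rule are all assumptions; the patent does not disclose the exact formulas.

```python
import numpy as np

def gray_entropy(image, mask=None):
    """Shannon entropy of the gray-level distribution over the effective area."""
    pixels = image[mask] if mask is not None else image.ravel()
    hist, _ = np.histogram(pixels, bins=256, range=(0, 256))
    p = hist / hist.sum()            # probability distribution of each gray level
    p = p[p > 0]                     # drop empty levels (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

def joint_entropy(h1, h2, diff_weight=0.5):
    """Joint entropy from the mean of the two entropies plus a weighted
    difference term; the weighting rule is an assumption, not disclosed."""
    return 0.5 * (h1 + h2) + diff_weight * abs(h1 - h2)
```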
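Claim 7 refines the fitting parameter on the overlapping region until the fitting error drops to the preset value or below. A generic sketch, assuming a scalar parameter, a caller-supplied `fit_error` function, and a simple multiplicative adjustment step, none of which the patent specifies:

```python
def refine_fitting_param(initial, fit_error, preset, step=0.9, max_iter=100):
    """Adjust-and-retry loop: shrink the parameter until the error on the
    overlapping region is <= the preset value (or iterations run out)."""
    param = initial
    for _ in range(max_iter):
        if fit_error(param) <= preset:
            break
        param *= step            # assumed adjustment rule
    return param
```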
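Claims 9 and 10 detect abnormal regions by sliding-window scanning against a gray-gradient threshold, with a degree of abnormality scaled by a preset range coefficient. A hedged sketch in which the pooled mean gradient magnitude and the degree formula are assumptions:

```python
import numpy as np

def gradient_threshold(img1, img2):
    """Gray-gradient threshold derived from both images (claim 9); the exact
    statistic is not disclosed, so the pooled mean gradient magnitude is assumed."""
    mags = [np.hypot(*np.gradient(im.astype(float))).mean() for im in (img1, img2)]
    return float(np.mean(mags))

def abnormal_regions(image, thresh, win=16, step=8, range_coef=1.5):
    """Slide a win x win window over the image; a window is flagged abnormal
    when its mean gradient magnitude exceeds the threshold. The returned
    degree follows the range-coefficient idea of claim 10 (assumed formula)."""
    g = np.hypot(*np.gradient(image.astype(float)))
    out = []
    for r in range(0, image.shape[0] - win + 1, step):
        for c in range(0, image.shape[1] - win + 1, step):
            mean_g = g[r:r + win, c:c + win].mean()
            if mean_g > thresh:
                degree = (mean_g - thresh) / (range_coef * thresh)
                out.append((r, c, float(degree)))
    return out
```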
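Claim 14 extracts a center stripe by window scanning and derives sub-pixels from pixel coordinates and gray values. One common realization, assumed here, is a per-column gray-weighted centroid, which presumes the stripe crosses each image column once:

```python
import numpy as np

def stripe_subpixels(image, min_col_sum=1.0):
    """Gray-weighted centroid per column: returns (row_subpixel, col) pairs
    for columns that contain stripe signal."""
    img = image.astype(float)
    rows = np.arange(img.shape[0])[:, None]     # column vector of row indices
    col_sum = img.sum(axis=0)
    valid = col_sum > min_col_sum               # skip columns with no signal
    centroids = (rows * img).sum(axis=0)[valid] / col_sum[valid]
    cols = np.nonzero(valid)[0].astype(float)
    return np.stack([centroids, cols], axis=1)
```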
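Claim 15 undistorts the sub-pixel coordinates, maps them from the image coordinate system to the laser-plane coordinate system, then normalizes and fuses both point sets in a world coordinate system. A sketch assuming an OpenCV pinhole camera model; `plane_H`, `T_a` and `T_b` are hypothetical calibration results (a 3x3 image-to-laser-plane homography and 4x4 laser-plane-to-world transforms), since the patent does not fix the calibration model:

```python
import numpy as np
import cv2  # OpenCV, assumed available for the distortion model

def to_laser_plane(subpix, K, dist, plane_H):
    """Undistort (row, col) sub-pixels, then map the ideal coordinates into
    the laser-plane frame via the homography plane_H."""
    pts = subpix[:, ::-1].reshape(-1, 1, 2).astype(np.float64)  # to (x=col, y=row)
    ideal = cv2.undistortPoints(pts, K, dist).reshape(-1, 2)    # normalized coords
    h = np.hstack([ideal, np.ones((len(ideal), 1))])
    mapped = (plane_H @ h.T).T
    return mapped[:, :2] / mapped[:, 2:3]                       # (x, y) on the plane

def fuse_to_cloud(pts_a, pts_b, T_a, T_b):
    """Lift each laser-plane point set (z = 0 in its own plane) into the world
    frame with its 4x4 extrinsic transform, then merge the two sets."""
    def lift(T, pts):
        h = np.hstack([pts, np.zeros((len(pts), 1)), np.ones((len(pts), 1))])
        return (T @ h.T).T[:, :3]
    return np.vstack([lift(T_a, pts_a), lift(T_b, pts_b)])
```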
Description
Sensor for intersecting optical paths, processing method, device, equipment and medium
Technical Field
The present application relates to the field of optical measurement, and in particular to a crossed-optical-path sensor and a processing method, apparatus, device and medium thereof.
Background
The laser displacement sensor is an advanced measuring device whose working principle is mainly based on laser triangulation; it enables accurate measurement and reconstruction of the surface morphology of the measured object. The laser diode inside the sensor emits a very thin laser beam, which impinges on the surface of the object to be measured with very high linearity and concentration. The laser beam is reflected after striking the object surface, and the angle and direction of the reflected light vary with the characteristics of the object surface and the position of the sensor. The sensor receives the light reflected from the surface of the object through its receiving unit and accurately calculates the distance between the object and the sensor from the triangular geometric relationship among the laser emission point, the reflection point and the receiving point. Existing laser distance sensors suffer from complex equipment and complex processing algorithms.
Disclosure of Invention
The embodiments of the disclosure provide a crossed-optical-path sensor and a processing method, device, equipment and medium. In a first aspect, an embodiment of the disclosure provides a crossed-optical-path sensor, including a first laser, a second laser, a first image collector, a second image collector and a controller, where the first laser and the second laser are disposed in parallel and emit laser light in the same direction; the first laser is matched with the first image collector, which collects a first image of the surface of a target object irradiated by the first laser; the second laser is matched with the second image collector, which collects a second image of the surface of the target object irradiated by the second laser; the collection fields of view of the first image collector and the second image collector intersect; a first included angle between the optical axis of the first laser and the optical axis of the first image collector and a second included angle between the optical axis of the second laser and the optical axis of the second image collector both fall within a preset included-angle range; and the controller is connected with the first laser, the second laser, the first image collector and the second image collector, and is used for controlling the first laser and the second laser to emit laser light simultaneously and controlling the first image collector and the second image collector to collect images simultaneously.
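For intuition about the triangulation principle described in the Background: in the simplest possible geometry, where the laser beam is parallel to the camera's optical axis, b is the laser-camera baseline, f is the lens focal length, and u is the offset of the imaged spot from the principal point, similar triangles give the depth and its sensitivity. This is an illustrative simplification, not the disclosed crossed-field geometry with preset included angles:

```latex
z = \frac{f\,b}{u}, \qquad \frac{\partial z}{\partial u} = -\frac{z^{2}}{f\,b}
```

The quadratic growth of the sensitivity with depth is why such sensors are most accurate close to the target.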
In a second aspect, an embodiment of the present disclosure provides an image processing method, including: acquiring a first image collected by a first image collector and a second image collected by a second image collector; processing the first image and the second image respectively, and determining a first light-spot stripe corresponding to the first image and a second light-spot stripe corresponding to the second image; determining a first sub-pixel in the first image and a second sub-pixel in the second image according to the first light-spot stripe and the second light-spot stripe; performing coordinate conversion on the first sub-pixel and the second sub-pixel respectively, and fusing the coordinate-converted first sub-pixel and second sub-pixel to obtain point cloud data of the surface profile of a target object; and determining relevant parameters of the target object according to the point cloud data. In a third aspect, an embodiment of the present disclosure provides an image processing apparatus, including: an image acquisition unit configured to acquire a first image collected by the first image collector and a second image collected by the second image collector; an image processing unit configured to process the first image and the second image respectively and determine a first light-spot stripe corresponding to the first image and a second light-spot stripe corresponding to the second image; a pixel determination unit configured to determine a first sub-pixel in the first image and a second sub-pixel in the second image according to the first light-spot stripe and the second light-spot stripe; a point cloud determination unit configured to perform coordinate conversion on the first sub-pixel and the second sub-pixel respectively and fuse the coordinate-converted first sub-pixel and second sub-pixel to obtain point cloud data of the surface profile of the target object; and a parameter determination unit configured to determine relevant parameters of the target object according to the point cloud data.
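Putting the pieces together, a minimal end-to-end sketch of the second-aspect method, reusing the illustrative helpers sketched after the claims. `lookup_fitting_params` and `fit_image` are hypothetical stand-ins for the entropy-to-parameter correspondence and the image-fitting step, defined here as trivial placeholders so the sketch runs:

```python
import numpy as np

def fit_image(img, params):          # hypothetical fitting step (claim 4)
    return img                       # identity stand-in for illustration

def lookup_fitting_params(entropy):  # hypothetical entropy -> parameter table
    return {}

def process_frame_pair(img1, img2, calib1, calib2, T_a, T_b):
    """Two simultaneously captured images in, fused point cloud out."""
    h1, h2 = gray_entropy(img1), gray_entropy(img2)
    params = lookup_fitting_params(joint_entropy(h1, h2))
    s1 = stripe_subpixels(fit_image(img1, params))
    s2 = stripe_subpixels(fit_image(img2, params))
    p1 = to_laser_plane(s1, *calib1)  # calib = (K, dist, plane_H)
    p2 = to_laser_plane(s2, *calib2)
    return fuse_to_cloud(p1, p2, T_a, T_b)
```

The controller's simultaneous triggering of both lasers and both collectors (claim 1) is what makes fusing the two frames into a single point cloud of the same surface profile meaningful.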