
CN-121982175-A - Single-photon multi-mode structured light HDR three-dimensional imaging method and system


Abstract

The invention discloses a single-photon multi-mode structured light HDR three-dimensional imaging method and system, belonging to the technical field of optical three-dimensional imaging and machine vision. The system comprises a projector, a common camera, a single-photon camera, and a data processing system. The projector projects a sinusoidal fringe pattern with N-step phase shift (N ≥ 3) onto the measured object; the single-photon camera collects a binary image cube corresponding to the fringe pattern; the common camera synchronously collects the fringe pattern to obtain a clear fringe image; and the data processing system executes the single-photon multi-mode structured light HDR three-dimensional imaging method. Under a single exposure, the single-photon sensitivity of the single-photon camera is fused with the high resolution of the common camera, breaking the dual limitation of low photon flux and limited dynamic range. The nonlinear error and data-scarcity problems of fringes collected by the single-photon camera are solved through a digital twin and transfer learning, thereby realizing unified-coordinate-system HDR reconstruction for a multi-mode structured light system.

Inventors

  • WANG YAJUN
  • YUAN QUAN
  • ZHU SIJIE
  • ZHANG QICAN
  • LIU YUANKUN
  • WU ZHOUJIE
  • SHEN JUNFEI

Assignees

  • Sichuan University (四川大学)

Dates

Publication Date
2026-05-05
Application Date
2025-09-17

Claims (10)

  1. A single-photon multi-mode structured light HDR three-dimensional imaging method, characterized by comprising the following steps: S1, controlling a projector to project a sinusoidal fringe pattern with N-step phase shift onto a measured object, wherein N ≥ 3; acquiring a binary image cube corresponding to the sinusoidal fringe pattern with a single-photon camera; and synchronously acquiring the sinusoidal fringe pattern with a common camera to obtain a clear fringe image; S2, converting the binary image cube into fringes, calculating the corresponding wrapped phase, inputting the sin and cos components of the wrapped phase into a trained neural network, and outputting a corrected phase map; S3, unifying the coordinate systems of the single-photon camera and the common camera, and acquiring the mapping relations between phase and three-dimensional coordinates for the single-photon camera and the common camera under the unified coordinate system; S4, performing multi-mode three-dimensional data fusion: removing invalid phase regions in the corrected phase map and the clear fringe image, and calculating the HDR three-dimensional coordinates of the measured object under the unified coordinate system through a phase-to-three-dimensional-coordinate mapping model, according to the mapping relation between phase and three-dimensional coordinates.
  2. The method according to claim 1, wherein in step S1 the single-photon camera continuously captures the binary image cube corresponding to the sinusoidal fringe pattern in a 1-bit binary mode.
  3. The single-photon multi-mode structured light HDR three-dimensional imaging method according to claim 1, characterized in that step S2 comprises: calibrating the single-photon camera and the projector to obtain relative spatial position parameters; generating fringe information for simulated single-photon images according to the relative spatial position parameters and a simulated three-dimensional model; varying the relative spatial position parameters with a domain randomization technique to randomly generate a large amount of simulation data; and training the constructed neural network using the fringe information of the simulated single-photon images and the simulation data.
  4. The single-photon multi-mode structured light HDR three-dimensional imaging method according to claim 3, wherein the neural network is a Res-UNet network model.
  5. The single-photon multi-mode structured light HDR three-dimensional imaging method according to claim 3, wherein calibrating the single-photon camera and the projector employs an inverse camera calibration method.
  6. The single-photon multi-mode structured light HDR three-dimensional imaging method according to claim 3, wherein the fringe information for the simulated single-photon images is ordinary structured-light fringe data, which is converted into single-photon fringe images of different bit depths using a virtual photon image generation method.
  7. The single-photon multi-mode structured light HDR three-dimensional imaging method according to claim 1, wherein step S3 comprises: S31, establishing a mapping relation between pixel-level phase and three-dimensional coordinates for the common camera and the projector; S32, establishing sub-pixel homonymous (corresponding) point pairs between the single-photon camera field of view and the common camera field of view based on quadrature phase features, using standard planar plates at different depths; S33, obtaining the sub-pixel three-dimensional coordinates of the corresponding homonymous point pairs in the single-photon camera field of view through the sub-pixel homonymous point pairs and the mapping relation; S34, establishing a single-photon phase-coordinate mapping lookup table under the single-photon camera view based on an auxiliary phase-coordinate mapping lookup table; S35, constructing the mapping relation of the sub-pixel three-dimensional coordinates in the single-photon camera field of view based on the auxiliary phase-coordinate mapping lookup table and the single-photon phase-coordinate mapping lookup table; S36, solving the rotation matrix and translation vector between the coordinate systems of the single-photon camera and the common camera using the standard planar plate and the singular value decomposition (SVD) algorithm; and S37, unifying the coordinate systems of the single-photon camera and the common camera according to the rotation matrix and the translation vector.
  8. The method of claim 7, wherein step S32 comprises: taking the quadrature phases acquired by the common camera as the reference and the quadrature phases acquired by the single-photon camera as the target, and locating the homonymous points in the common camera field of view by minimizing a pixel matching loss defined over the pixel coordinates of the common camera, the pixel coordinates of the single-photon camera, the reference quadrature phases, and the target quadrature phases.
  9. The single-photon multi-mode structured light HDR three-dimensional imaging method according to claim 7, wherein step S34 comprises: obtaining sub-pixel 3D coordinates from the sub-pixel three-dimensional coordinates based on the auxiliary phase-coordinate mapping lookup table; correcting errors of the obtained sub-pixel 3D coordinates through plane fitting; and establishing the single-photon phase-coordinate mapping lookup table.
  10. A multi-mode structured light system comprising a projector, a common camera, a single-photon camera, and a data processing system communicatively coupled to the common camera and the single-photon camera; the projector is used for projecting a sinusoidal fringe pattern with N-step phase shift onto the measured object, wherein N ≥ 3; the single-photon camera is used for collecting a binary image cube corresponding to the sinusoidal fringe pattern; the common camera is used for synchronously collecting the sinusoidal fringe pattern to obtain a clear fringe image; and the data processing system is used for executing the single-photon multi-mode structured light HDR three-dimensional imaging method according to any one of claims 1 to 9.
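The wrapped-phase calculation in step S2 of claim 1 follows the standard N-step phase-shifting formula. As a minimal illustrative sketch (not part of the patent disclosure; the function name, array layout, and fringe model I_n = a + b·cos(φ + 2πn/N) are assumptions):

```python
import numpy as np

def wrapped_phase(fringes):
    """Wrapped phase from an N-step phase-shifted fringe stack.

    fringes: array (N, H, W), one image per phase shift of 2*pi*n/N,
    assuming I_n = a + b*cos(phi + 2*pi*n/N). Returns phi in [-pi, pi].
    """
    N = fringes.shape[0]
    n = np.arange(N).reshape(-1, 1, 1)
    num = np.sum(fringes * np.sin(2 * np.pi * n / N), axis=0)
    den = np.sum(fringes * np.cos(2 * np.pi * n / N), axis=0)
    # For the cosine fringe model above, sum(I*sin) = -(N*b/2)*sin(phi)
    # and sum(I*cos) = (N*b/2)*cos(phi), hence the leading minus sign.
    return -np.arctan2(num, den)
```

Sign conventions vary between references; with a sine rather than cosine fringe model the minus sign and argument order change accordingly.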
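Claim 2's 1-bit binary mode implies that averaging the binary frames of the cube estimates a detection probability rather than a linear intensity, which is one source of the nonlinear fringe error discussed in the description. A hedged sketch of the cube-to-fringe conversion, assuming the common model in which a SPAD pixel fires when at least one photon arrives (function name and model are assumptions, not from the patent):

```python
import numpy as np

def cube_to_fringe(cube, eps=1e-6):
    """Collapse a 1-bit binary image cube (T, H, W) into a linear
    fringe intensity estimate.

    Averaging the binary frames estimates the per-frame firing
    probability p = 1 - exp(-lam), a nonlinear function of the mean
    photon count lam; inverting it, lam = -log(1 - p), removes that
    nonlinearity before phase computation.
    """
    p = np.clip(cube.mean(axis=0, dtype=np.float64), 0.0, 1.0 - eps)
    return -np.log1p(-p)
```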
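The virtual photon image generation of claim 6 can be approximated by Bernoulli sampling of an ideal fringe image, with per-pixel firing probability derived from Poisson photon arrivals. The following is an illustrative sketch under that assumption; the `flux` parameter and function name are hypothetical, not from the patent:

```python
import numpy as np

def virtual_photon_cube(intensity, frames, flux=2.0, rng=None):
    """Simulate a binary SPAD image cube from an ideal fringe image.

    intensity: (H, W) ideal fringe, scaled to [0, 1]
    frames:    number of 1-bit frames T to generate
    flux:      assumed mean photons/frame at intensity 1
    A pixel fires iff at least one photon arrives in a frame, so the
    firing probability is P = 1 - exp(-flux * I) (Poisson arrivals).
    """
    rng = np.random.default_rng(rng)
    p = 1.0 - np.exp(-flux * intensity)
    return (rng.random((frames,) + intensity.shape) < p).astype(np.uint8)
```

Single-photon fringe images of different bit depths can then be obtained by summing different numbers of these 1-bit frames.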
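Step S36 of claim 7 solves a rotation matrix and translation vector between two coordinate systems via SVD; for matched 3-D point sets this is the classical Kabsch procedure. A minimal sketch (the function name is an assumption):

```python
import numpy as np

def rigid_transform_svd(src, dst):
    """Rotation R and translation t such that dst ~ R @ src + t.

    src, dst: (K, 3) matched 3-D points, e.g. standard-plane points
    expressed in the single-photon and common camera coordinates.
    """
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)      # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # fix improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```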
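The plane-fitting error correction of claim 9 can be illustrated by a least-squares plane fit via SVD followed by orthogonal projection of noisy 3-D points onto the fitted plane. A sketch under those assumptions (the patent does not specify the fitting procedure):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane n.x = d through (K, 3) points, via SVD."""
    c = points.mean(0)
    _, _, Vt = np.linalg.svd(points - c)
    n = Vt[-1]            # unit normal = direction of least variance
    return n, n @ c

def project_to_plane(points, n, d):
    """Snap noisy 3-D points onto the fitted plane (error correction)."""
    return points - np.outer(points @ n - d, n)
```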

Description

Single-photon multi-mode structured light HDR three-dimensional imaging method and system

Technical Field

The invention relates to the technical field of optical three-dimensional imaging and machine vision, and in particular to a single-photon multi-mode structured light HDR three-dimensional imaging method and system.

Background

Structured light three-dimensional imaging (Fringe Projection Profilometry, FPP) has been widely used in industrial inspection, medical diagnosis, cultural-relic protection, and other fields because of its advantages of non-contact operation, high precision, and full-field measurement. However, conventional FPP systems are based on CMOS/CCD sensors, whose dynamic range is limited by the sensor's full-well capacity and readout noise, and are challenged in two types of scenarios: (a) low-photon-flux scenes, such as black surfaces, where an insufficient photon count results in an extremely low signal-to-noise ratio (SNR); conventional sensors then require extended exposure times or increased projected light intensity, sacrificing real-time performance and easily introducing motion blur; and (b) high-dynamic-range scenes, such as mixed metal and plastic surfaces, where highly reflective areas are overexposed and weakly reflective areas are underexposed, requiring multi-exposure fusion or adaptive light-intensity modulation and thus reducing measurement efficiency.
Conventional HDR three-dimensional imaging technology mainly depends on traditional CMOS/CCD sensors and has obvious defects. The multi-exposure HDR method must continuously acquire multiple fringe images with different exposure times; it is inefficient, extremely sensitive to moving objects, and prone to motion artifacts. Adaptive light-intensity modulation depends on a pre-calibrated reflectivity distribution of the measured surface, adapts poorly to unknown materials, and suffers from high hardware complexity and long calibration cycles. Polarization techniques can effectively suppress overexposure in highly reflective areas, but the overall light intensity is significantly attenuated, so the signal-to-noise ratio of weakly reflective areas deteriorates rapidly. Deep learning methods have strong data-driven capability, but their performance depends heavily on large-scale training data; generalization is poor in extremely low-illumination or high-contrast scenes, and errors increase significantly. The prior art has long been limited to a single CMOS/CCD imaging mode and has struggled to break through the dual contradiction of insufficient low-photon sensitivity and limited dynamic range.
In recent years, Single Photon Avalanche Diode (SPAD) arrays have become a promising new direction by virtue of their single-photon-level sensitivity, but three problems remain. First, the binary output of the SPAD introduces a significant nonlinear response that causes fringe phase errors. Second, limited by fabrication processes, the resolution of mainstream SPAD arrays is only 512 × 512, significantly lower than that of CMOS sensors, so direct calibration errors are larger. Third, no public report has yet demonstrated HDR three-dimensional reconstruction by a dual-mode SPAD-CMOS cooperative system under single-exposure conditions. Therefore, how to combine the single-photon sensitivity of the SPAD with the high resolution of CMOS, while simultaneously overcoming the nonlinear error and fusing HDR information within a single exposure, remains a core bottleneck for high-precision three-dimensional reconstruction of complex scenes.

Disclosure of Invention

The invention aims to overcome this core bottleneck of the prior art in realizing high-precision three-dimensional reconstruction of complex scenes, and provides a single-photon multi-mode structured light HDR three-dimensional imaging method and system.
To solve the above technical problems, the invention provides the following technical solution. In one aspect, a single-photon multi-mode structured light HDR three-dimensional imaging method is disclosed, comprising the steps of: S1, controlling a projector to project a sinusoidal fringe pattern with N-step phase shift onto a measured object, wherein N ≥ 3; acquiring a binary image cube corresponding to the sinusoidal fringe pattern with a single-photon camera; and synchronously acquiring the sinusoidal fringe pattern with a common camera to obtain a clear fringe image; S2, converting the binary image cube into fringes, calculating the corresponding wrapped phase, inputting the sin and cos components of the wrapped phase into a trained neural network, and outputting a corrected phase map; S3, unifying the coordinate systems of the single-photon camera and the common camera, and acquiring the mapping relations between phase and three-dimens