EP-4742155-A2 - THREE-DIMENSIONAL RECONSTRUCTION METHOD AND APPARATUS, AND DEVICE
Abstract
Embodiments of the present disclosure provide a three-dimensional reconstruction method. Mark points are set on a surface of a target object. Multiple frames of mark point three-dimensional point cloud are then reconstructed from image frames containing a mark point pattern, and multiple frames of target object three-dimensional point cloud are reconstructed from image frames containing a structured light pattern. Because the coordinate transformation relationship between the multiple frames of the mark point three-dimensional point cloud can be accurately determined, and the coordinate transformation relationship between a single frame of the mark point three-dimensional point cloud and a single frame of the target object three-dimensional point cloud can also be determined, the mark points can be used to assist in stitching the multiple frames of the target object three-dimensional point cloud, yielding a more accurate stitching result.
Inventors
- CHEN, Han
- ZHAO, Xiaobo
- CHEN, Xiaojun
- ZHANG, Jian
- HUANG, Leijie
Assignees
- Shining 3D Tech Co., Ltd.
Dates
- Publication Date: 2026-05-13
- Application Date: 2026-03-02
Claims (15)
- A three-dimensional reconstruction method, comprising: acquiring an image frame set collected by a three-dimensional scanning device during a scanning process of a target object, wherein a surface of the target object is provided with mark points, and structured light is projected onto the surface of the target object during at least part of the scanning process of the target object; performing three-dimensional reconstruction of the mark points based on image frames in the image frame set that comprise a mark point pattern, to obtain multiple frames of mark point three-dimensional point cloud, and determining a first coordinate transformation relationship between the multiple frames of the mark point three-dimensional point cloud; performing three-dimensional reconstruction of the target object based on image frames in the image frame set that comprise a structured light pattern, to obtain multiple frames of target object three-dimensional point cloud; and for each frame of the target object three-dimensional point cloud, determining a second coordinate transformation relationship between the frame of the target object three-dimensional point cloud and one frame of the mark point three-dimensional point cloud, and stitching the multiple frames of the target object three-dimensional point cloud based on the first coordinate transformation relationship and the second coordinate transformation relationship.
- The method according to claim 1, wherein the image frame set comprises structured light image frames and mark point image frames, wherein each of the structured light image frames only comprises the structured light pattern, and each of the mark point image frames only comprises the mark point pattern; or the image frame set comprises composite image frames, wherein each of the composite image frames comprises both the structured light pattern and the mark point pattern.
- The method according to claim 2, wherein a density of the structured light pattern in the composite image frames is determined based on a size of the mark points.
- The method according to claim 2, wherein when a density of the structured light pattern projected by the three-dimensional scanning device is less than a preset density, the image frame set comprises the composite image frames; and when the density of the structured light pattern projected by the three-dimensional scanning device is greater than or equal to the preset density, the image frame set comprises the structured light image frames and the mark point image frames.
- The method according to claim 2, wherein the three-dimensional scanning device comprises a structured light projector for projecting the structured light onto the target object, and a fill light for supplementing light for the target object; the composite image frames are acquired when the structured light projector is in an ON state and the fill light is in an ON state.
- The method according to claim 2, wherein the image frame set comprises at least one first image frame sequence, wherein each of the at least one first image frame sequence is acquired by one camera in the three-dimensional scanning device, and comprises the structured light image frames and the mark point image frames, and the structured light image frames and the mark point image frames are acquired alternately by the camera.
- The method according to claim 6, wherein the three-dimensional scanning device comprises a structured light projector for projecting the structured light onto the target object, and a fill light for supplementing light for the target object; wherein the structured light image frames are acquired when the structured light projector is in an ON state and the fill light is in an OFF state, the mark point image frames are acquired when the structured light projector is in an OFF state and the fill light is in an ON state.
- The method according to claim 6, wherein the three-dimensional scanning device comprises a structured light projector, wherein the structured light projector comprises a structured light mode for projecting structured light, and a uniform light mode for projecting uniform light; wherein the structured light image frames are acquired when the structured light projector is in the structured light mode, and the mark point image frames are acquired when the structured light projector is in the uniform light mode.
- The method according to claim 6, wherein the structured light image frames and the mark point image frames in the first image frame sequence are interleaved.
- The method according to claim 6, wherein the first image frame sequence comprises repeated image groups, each of the image groups comprises at least one of the structured light image frames and at least one of the mark point image frames, and comprises at least two consecutive image frames of a same type; wherein each of the image groups comprises two consecutive structured light image frames and one mark point image frame; or each of the image groups comprises two consecutive mark point image frames and one structured light image frame.
- The method according to claim 6, wherein one of the mark point image frames corresponding to one frame of the mark point three-dimensional point cloud is adjacent to one of the structured light image frames corresponding to one frame of the target object three-dimensional point cloud; for each frame of the target object three-dimensional point cloud, determining the second coordinate transformation relationship between the frame of the target object three-dimensional point cloud and one frame of the mark point three-dimensional point cloud comprises: performing motion estimation on at least two adjacent image frames of the same type in the first image frame sequence, to determine a pose transformation of the camera when the camera acquires the at least two adjacent image frames, wherein a collection time interval between the at least two adjacent image frames of the same type and the structured light image frame corresponding to the frame of the target object three-dimensional point cloud is less than a preset time interval; and determining the second coordinate transformation relationship based on the pose transformation.
- The method according to claim 2, wherein the image frame set comprises a second image frame sequence and a third image frame sequence, wherein the second image frame sequence comprises the structured light image frames, and the third image frame sequence comprises the mark point image frames, wherein the second image frame sequence and the third image frame sequence are respectively acquired by two cameras in the three-dimensional scanning device; wherein a collection time of one frame of the mark point three-dimensional point cloud is same as a collection time of one frame of the target object three-dimensional point cloud; for each frame of the target object three-dimensional point cloud, determining the second coordinate transformation relationship between the frame of the target object three-dimensional point cloud and one frame of the mark point three-dimensional point cloud comprises: determining the second coordinate transformation relationship between the frame of the target object three-dimensional point cloud and the one frame of the mark point three-dimensional point cloud based on pre-calibrated extrinsic parameters of the two cameras.
- The method according to claim 1, wherein the three-dimensional scanning device is an oral scanner and the target object is an oral cavity, wherein a scan body is installed in the oral cavity, or the mark points are pasted on teeth or gums of the oral cavity, or a target is installed in the oral cavity; wherein after stitching the multiple frames of the target object three-dimensional point cloud based on the first coordinate transformation relationship and the second coordinate transformation relationship, the method further comprises: displaying, in real time on an interactive interface, a three-dimensional model of the target object obtained by the stitching.
- The method according to claim 1, wherein the frame of the target object three-dimensional point cloud and the one frame of the mark point three-dimensional point cloud satisfy one of: that the frame of the target object three-dimensional point cloud and the one frame of the mark point three-dimensional point cloud are reconstructed based on one image frame; that an image frame to reconstruct the one frame of the mark point three-dimensional point cloud and an image frame to reconstruct the frame of the target object three-dimensional point cloud are collected by different cameras at a same moment; or that a time interval between a collection time of an image frame to reconstruct the one frame of the mark point three-dimensional point cloud and a collection time of an image frame to reconstruct the frame of the target object three-dimensional point cloud is less than a preset threshold.
- An electronic device, comprising one or more processors, a memory, and computer instructions stored in the memory and executable by the one or more processors, wherein the one or more processors, when executing the computer instructions, are caused to perform the method according to any one of claims 1-14.
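Leaving the claim language aside, the stitching step of claim 1 can be sketched as composing, for each frame, the second coordinate transformation (target object frame to mark point frame) with the first coordinate transformation (mark point frame to a common reference frame). The following Python sketch uses NumPy and hypothetical names; it is an illustration of the idea, not the claimed implementation:

```python
import numpy as np

def stitch_frames(object_clouds, first_transforms, second_transforms):
    """Map every frame of the target object three-dimensional point cloud
    into one common reference coordinate system.

    object_clouds:     list of (N_i, 3) arrays, one per frame.
    first_transforms:  list of 4x4 rigid transforms, mark point frame i -> reference frame.
    second_transforms: list of 4x4 rigid transforms, object frame i -> mark point frame i.
    """
    stitched = []
    for cloud, F, S in zip(object_clouds, first_transforms, second_transforms):
        T = F @ S                          # object frame -> reference frame
        R, t = T[:3, :3], T[:3, 3]
        stitched.append(cloud @ R.T + t)   # apply the composed rigid transform
    return np.vstack(stitched)             # global point cloud of the target object
```

Because both transformations are rigid, their composition is rigid as well, so each frame is moved into the reference frame without deformation before concatenation.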
Description
TECHNICAL FIELD

Embodiments of the present application relate to the field of three-dimensional scanning technology, and particularly to a three-dimensional reconstruction method and apparatus, a device, and a storage medium.

BACKGROUND

In the process of performing three-dimensional reconstruction of a target object using data collected by a three-dimensional scanning device, it is usually necessary to scan the target object multiple times from different perspectives to obtain multiple frames of partial point cloud data, and then stitch these frames into global point cloud data of the target object. In the related art, a common stitching manner is geometric stitching, which uses the geometric features of the target object itself. This manner requires the target object to have rich, non-repetitive geometric features. When scanning geometrically regular target objects, for example objects with repetitive geometric features or weak feature information such as cylindrical abutments and implant posts, only a limited number of feature points can be extracted, so a high-precision stitching result cannot be computed and the accuracy of the finally reconstructed three-dimensional model is poor.

SUMMARY

Embodiments of the present application provide a three-dimensional reconstruction method and apparatus, a device, and a storage medium.
According to a first aspect of embodiments of the present application, a three-dimensional reconstruction method is provided, the method includes: acquiring an image frame set collected by a three-dimensional scanning device during a scanning process of a target object, wherein a surface of the target object is provided with mark points, and structured light is projected onto the surface of the target object during at least part of the scanning process of the target object; performing three-dimensional reconstruction of the mark points based on image frames in the image frame set that comprise a mark point pattern, to obtain multiple frames of mark point three-dimensional point cloud, and determining a first coordinate transformation relationship between the multiple frames of the mark point three-dimensional point cloud; performing three-dimensional reconstruction of the target object based on image frames in the image frame set that comprise a structured light pattern, to obtain multiple frames of target object three-dimensional point cloud; and for each frame of the target object three-dimensional point cloud, determining a second coordinate transformation relationship between the frame of the target object three-dimensional point cloud and one frame of the mark point three-dimensional point cloud, and stitching the multiple frames of the target object three-dimensional point cloud based on the first coordinate transformation relationship and the second coordinate transformation relationship.
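One common way (not mandated by the application) to determine the first coordinate transformation relationship: once the same mark points have been identified in two frames of the mark point three-dimensional point cloud, the rigid transform between the frames can be solved in closed form with the Kabsch/Umeyama method. The function name and conventions below are assumptions for illustration:

```python
import numpy as np

def first_transform(src, dst):
    """Least-squares rigid transform between corresponding mark point sets,
    returning (R, t) such that dst ≈ src @ R.T + t.

    src, dst: (N, 3) arrays of the same N mark point centers, N >= 3,
    observed in two frames of the mark point three-dimensional point cloud.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # proper rotation, det(R) = +1
    t = dst_c - R @ src_c
    return R, t
```

Because mark points are artificial, high-contrast features, this estimate stays well conditioned even when the scanned surface itself (e.g. a cylindrical abutment) offers no usable geometric features.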
According to a second aspect of embodiments of the present application, a three-dimensional reconstruction apparatus is provided, the three-dimensional reconstruction apparatus includes: an acquisition module, configured to acquire an image frame set collected by a three-dimensional scanning device during a scanning process of a target object, wherein a surface of the target object is provided with mark points, and structured light is projected onto the surface of the target object during at least part of the scanning process of the target object; a three-dimensional reconstruction module, configured to perform three-dimensional reconstruction of the mark points based on image frames in the image frame set that comprise a mark point pattern, to obtain multiple frames of mark point three-dimensional point cloud, and determine a first coordinate transformation relationship between the multiple frames of the mark point three-dimensional point cloud, and to perform three-dimensional reconstruction of the target object based on image frames in the image frame set that comprise a structured light pattern, to obtain multiple frames of target object three-dimensional point cloud; and a stitching module, configured to, for each frame of the target object three-dimensional point cloud, determine a second coordinate transformation relationship between the frame of the target object three-dimensional point cloud and one frame of the mark point three-dimensional point cloud, and stitch the multiple frames of the target object three-dimensional point cloud based on the first coordinate transformation relationship and the second coordinate transformation relationship.
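For the two-camera configuration described in the claims, the stitching module can obtain the second coordinate transformation directly from the pre-calibrated extrinsic parameters, since the object point cloud and the mark point cloud are reconstructed at the same collection time in the coordinate systems of the two cameras. A minimal sketch, assuming 4x4 camera-to-world extrinsics and a hypothetical function name:

```python
import numpy as np

def second_transform_from_extrinsics(T_world_camA, T_world_camB):
    """Second coordinate transformation for the two-camera configuration.

    T_world_camA, T_world_camB: 4x4 camera-to-world extrinsics from calibration.
    The target object point cloud lives in camera A coordinates and the mark
    point cloud in camera B coordinates; the returned 4x4 transform maps
    camera A coordinates into camera B coordinates (A -> world -> B).
    """
    return np.linalg.inv(T_world_camB) @ T_world_camA
```

Because the extrinsics are fixed by calibration, this second transformation is constant across frames, which is what makes the same-collection-time variant particularly simple.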
According to a third aspect of embodiments of the present application, an electronic device is provided, where the electronic device includes a processor, a memory, and computer instructions stored in the memory and executable by the processor, where the processor implements the method mentioned in the first aspect when executing the computer instructions.