CN-122023711-A - Passive three-dimensional modeling technology without light source

CN 122023711 A

Abstract

The invention discloses a passive, light-source-free three-dimensional modeling technique in the field of computer vision and three-dimensional reconstruction. The method collects multi-view images under ambient light that satisfy preset overlap and resolution requirements, extracts and matches image features, performs stereoscopic reconstruction based on epipolar constraints and triangulation to generate a sparse point cloud, refines the result with bundle adjustment and Poisson surface reconstruction to produce a dense three-dimensional mesh, applies texture mapping and model post-processing, and finally verifies model quality through a multi-level accuracy control system. Because no active light source is required, the method is low-cost and highly adaptable; it is particularly suitable for large-scale, high-precision three-dimensional reconstruction of buildings, cities and military facilities, and significantly reduces hardware and time costs while maintaining centimeter-level modeling accuracy.

Inventors

  • HE XIANGYU

Assignees

  • He Xiangyu (何祥宇)
  • Guangdong Aerospace Anti-Jamming Technology Research Institute Co., Ltd. (广东空天抗扰技术研究院有限公司)

Dates

Publication Date
2026-05-12
Application Date
2025-12-14

Claims (10)

  1. A passive three-dimensional modeling method without a light source, characterized by comprising the following steps: collecting multi-view scene images under ambient illumination, wherein image collection satisfies a minimum longitudinal overlap of at least 60%, a minimum transverse overlap of at least 30%, and a resolution of no less than 4K; extracting and matching features of the acquired images using at least one of the Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), or Oriented FAST and Rotated BRIEF (ORB) algorithms; performing stereoscopic reconstruction based on epipolar constraints and the triangulation principle to generate a sparse three-dimensional point cloud; optimizing and reconstructing the sparse point cloud into a dense three-dimensional mesh model through bundle adjustment and a Poisson surface reconstruction algorithm; mapping the acquired image textures onto the surface of the three-dimensional mesh model to generate a textured three-dimensional model; and performing accuracy verification and quality control on the reconstructed model to ensure that its geometric error and texture resolution meet preset standards.
  2. The method of claim 1, wherein in the image acquisition step: shooting is performed along a circular surrounding track or a figure-eight track; electronic image stabilization is disabled during shooting, and exposure and focus are locked; and the shooting distance is 1.5-3 meters and the shooting speed is 0.3-0.5 m/s.
  3. The method according to claim 1, wherein in the feature extraction and matching step: mismatches are removed using the RANSAC algorithm; feature matching is performed using FLANN or BFMatcher; and similarity during matching is measured by Euclidean distance or Hamming distance.
  4. The method according to claim 1, wherein in the stereoscopic reconstruction step: the matching search space is reduced using the epipolar geometry relation; the three-dimensional coordinates of corresponding pixels are computed by triangulation; and an optimal correction method is used to minimize the reprojection error.
  5. The method according to claim 1, wherein in the point cloud optimization step: large-scale point clouds are processed using an Incremental Poisson Surface Reconstruction (IPSR) algorithm; the reconstruction depth is set to 8-11 levels, the point weight to 2.0, and the resolution to 0.4; and point cloud noise is removed by voxel filtering or statistical filtering.
  6. The method of claim 1, wherein in the texture mapping step: the COLMAP texture mapping pipeline is adopted and the best source image is selected for each view; texture seams are optimized using a graph-cut algorithm; and real-time texture mapping and texture atlas generation are supported.
  7. The method of claim 1, wherein the accuracy verification and quality control step comprises: classifying model accuracy according to the L1-L7 levels of the Technical Guidelines for the City Information Model (CIM) Basic Platform; adopting a '3-2-1' quality control strategy, namely 3 parameter adjustments, 2 generation rounds and 1 manual inspection; and performing uncertainty quantification and error assessment using A Contrario theory.
  8. The method of claim 1, wherein the method is suitable for military installation modeling and satisfies the following requirements: the modeling scale is 1:1, the length resolution is no worse than 0.01 mm, and the angular resolution is no worse than 0.0001 degrees; the model structure is clear and the rendering resolution is no lower than 1080P; and structural modeling of special functions such as nuclear and biochemical protection, counter-reconnaissance, and surveillance is supported.
  9. A passive three-dimensional modeling system without a light source, comprising: an image acquisition module for acquiring multi-view scene images; a feature processing module for extracting and matching image features; a reconstruction module for performing stereoscopic reconstruction and point cloud generation; an optimization module for point cloud optimization and mesh reconstruction; a texture module for texture mapping and model rendering; and a quality control module for model accuracy verification and error correction.
  10. The system of claim 9, wherein the system is integrated into a mobile terminal, a drone or a cloud server and supports distributed computing and real-time three-dimensional reconstruction.
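
The feature extraction and matching step of claims 1 and 3 can be illustrated with a minimal OpenCV sketch: SIFT keypoints, FLANN matching with a ratio test on Euclidean descriptor distance, and RANSAC over the fundamental matrix to discard mismatches. The parameter values (ratio 0.75, 1-pixel RANSAC threshold) are common defaults chosen for illustration, not values taken from the patent.

```python
import cv2
import numpy as np

def match_pair(img_a, img_b, ratio=0.75):
    # SIFT keypoints and descriptors for both views.
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # FLANN (kd-tree) nearest-neighbour matching + Lowe ratio test.
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    knn = flann.knnMatch(des_a, des_b, k=2)
    good = [m for m, n in (p for p in knn if len(p) == 2)
            if m.distance < ratio * n.distance]

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])

    # RANSAC on the fundamental matrix removes mismatches via the epipolar constraint.
    F, mask = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC, 1.0, 0.999)
    inliers = mask.ravel().astype(bool)
    return pts_a[inliers], pts_b[inliers], F
```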
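For the stereoscopic reconstruction of claim 4, the following sketch recovers the relative camera pose from the essential matrix and triangulates the matched pixels; the intrinsic matrix K is assumed known, and the reported reprojection error is the quantity that bundle adjustment later minimizes.

```python
import cv2
import numpy as np

def triangulate_pair(pts_a, pts_b, K):
    # Essential matrix and relative pose (R, t) of the second camera.
    E, mask = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=mask)

    # Projection matrices: first camera at the origin, second at (R, t).
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P1 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P0, P1, pts_a.T, pts_b.T)
    pts3d = (pts4d[:3] / pts4d[3]).T          # homogeneous -> Euclidean

    # Mean reprojection error into the second view.
    proj, _ = cv2.projectPoints(pts3d, cv2.Rodrigues(R)[0], t, K, None)
    err = np.linalg.norm(proj.reshape(-1, 2) - pts_b, axis=1).mean()
    return pts3d, R, t, err
```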
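Claim 5's clean-up and surfacing step can be approximated with Open3D: statistical outlier removal followed by Poisson surface reconstruction with the claimed depth of 8-11 levels. Open3D's API has no direct counterpart for the claimed point weight of 2.0 or resolution of 0.4, and the incremental IPSR variant is not implemented here, so this is only a rough stand-in.

```python
import open3d as o3d

def mesh_from_points(pcd, depth=9):
    # Statistical outlier removal (claim 5 alternatively allows voxel filtering).
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    # Normals are required by Poisson reconstruction.
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
    # Poisson reconstruction; depth corresponds to the claimed 8-11 levels.
    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=depth)
    return mesh
```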
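The "best source image" selection in claim 6 reduces, in the simplest case, to picking for each mesh face the camera that views it most frontally. The sketch below shows only that heuristic; COLMAP's texturing pipeline and the graph-cut seam optimization are not reproduced.

```python
import numpy as np

def pick_source_views(face_centers, face_normals, cam_centers):
    # face_centers, face_normals: (F, 3); cam_centers: (M, 3) camera positions.
    view_dirs = cam_centers[None, :, :] - face_centers[:, None, :]   # (F, M, 3)
    view_dirs /= np.linalg.norm(view_dirs, axis=-1, keepdims=True)
    # Frontality score: cosine between face normal and direction to each camera.
    cos_angle = np.einsum('fmc,fc->fm', view_dirs, face_normals)
    return cos_angle.argmax(axis=1)   # index of the best source image per face
```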
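The "3-2-1" quality-control strategy of claim 7 can be pictured as a small control loop; `reconstruct` and `geometric_error` are hypothetical callables standing in for the modeling pipeline and for an accuracy check against reference measurements, and the parameter tweak shown is only an example.

```python
def quality_control(images, params, reconstruct, geometric_error, max_error=0.01):
    # Hypothetical skeleton: 3 parameter adjustments x 2 generation rounds,
    # followed by 1 manual inspection of any model passing the automated check.
    for _adjustment in range(3):
        for _round in range(2):
            model = reconstruct(images, params)
            if geometric_error(model) <= max_error:
                return model, "pending manual inspection"
        # Example adjustment only: deepen the reconstruction octree.
        params = dict(params, depth=params.get("depth", 9) + 1)
    return None, "failed automated checks"
```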

Description

Passive three-dimensional modeling technology without light source

Technical Field

The invention relates to the technical fields of computer vision, photogrammetry and three-dimensional reconstruction, and in particular to a passive, light-source-free three-dimensional modeling method and system suitable for constructing three-dimensional models of large-scale scenes in fields such as architecture, urban planning, digital twins and military simulation.

Background

With the development of smart cities, digital twins and military digitization, the demand for high-precision three-dimensional models of large-scale scenes is increasingly urgent. Traditional three-dimensional modeling techniques fall broadly into two categories: active scanning (e.g., lidar) and passive visual reconstruction. Active scanning is highly accurate, but the equipment is expensive, the operation is complex, it is poorly suited to outdoor large-scale work, and it is easily disturbed by ambient light. Passive visual reconstruction relies mainly on image sequences captured under ambient illumination and recovers three-dimensional structure through multi-view geometric algorithms; it is low-cost, flexible and well suited to large-scale scenes, and has become a focus of current research and application. However, existing passive three-dimensional modeling still faces many challenges: reconstruction accuracy is strongly affected by image quality, illumination conditions and feature matching accuracy, making centimeter-level modeling hard to achieve stably; large-scale scene reconstruction is computationally heavy and time-consuming, with complicated post-processing flows for point cloud and mesh optimization and texture mapping; there is no systematic accuracy control system or quality control flow, so model quality is uneven; and there is no concrete technical scheme for high-accuracy, high-confidentiality modeling of special functional structures in applications such as military facilities. There is therefore a need for an integrated, high-precision, high-efficiency passive three-dimensional modeling method without a light source and with strict quality control.

Disclosure of the Invention

Object of the invention. The invention aims to provide a passive three-dimensional modeling technology without a light source that solves the problems of unstable reconstruction accuracy, low processing efficiency for large-scale scenes, lack of systematic quality control, and difficulty in meeting high-precision requirements such as those of military scenes in the prior art.

Technical proposal. A passive three-dimensional modeling technology without a light source comprises an image acquisition module, a feature processing module, a three-dimensional reconstruction module, an optimization module, a texture module and a quality control module.

Image acquisition: a shooting track is designed (such as a circular surrounding or figure-eight track), a multi-view image sequence of the target scene is acquired under ambient light using ordinary imaging equipment, the image overlap, resolution, shooting distance and motion parameters are controlled, electronic image stabilization is disabled, and exposure and focus are locked.
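
A small helper can make these acquisition constraints concrete (forward overlap ≥ 60%, side overlap ≥ 30%, 4K frames, 1.5-3 m distance, 0.3-0.5 m/s speed); the thresholds come from the text, while the function itself is just an illustrative check.

```python
def acquisition_ok(forward_overlap, side_overlap, width_px, distance_m, speed_ms):
    # Thresholds taken from the acquisition requirements stated above.
    return (forward_overlap >= 0.60 and
            side_overlap >= 0.30 and
            width_px >= 3840 and           # 4K UHD frame width
            1.5 <= distance_m <= 3.0 and
            0.3 <= speed_ms <= 0.5)
```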
Feature extraction and matching: image feature points are extracted with the SIFT, SURF or ORB algorithm and descriptors are generated; initial feature matching is performed with FLANN or a brute-force matcher, and mismatched point pairs are eliminated with the RANSAC algorithm.

Stereoscopic reconstruction and sparse point cloud generation: based on epipolar geometric constraints, the fundamental and essential matrices are computed from the matched feature points, the camera motion parameters (pose) are recovered, and the three-dimensional coordinates of the feature points are computed by triangulation to form an initial sparse point cloud.

Point cloud optimization and dense reconstruction: bundle adjustment is applied to the sparse point cloud to minimize the reprojection error. A dense point cloud is then generated with multi-view stereo (MVS) techniques and reconstructed into a continuous triangular mesh surface using Poisson surface reconstruction or its incremental improvement (IPSR).

Texture mapping: UV coordinates are computed for the mesh model, the best viewing angle is selected from the original images for color sampling, texture seams are optimized with algorithms such as graph cut, and a texture atlas is generated and mapped onto the mesh surface to form a textured three-dimensional model.
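
The bundle adjustment mentioned above can be sketched as a reprojection residual handed to SciPy's least-squares solver. The camera model here is a bare pinhole with a per-camera rotation vector and translation and a shared intrinsic matrix K; this parameterization is an assumption made for the example, not the patent's exact formulation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(x, n_cams, n_pts, K, cam_idx, pt_idx, observed_uv):
    # Unpack the flat parameter vector: 6 DoF per camera, then 3D points.
    cams = x[:n_cams * 6].reshape(n_cams, 6)        # [rotation vector | translation]
    pts = x[n_cams * 6:].reshape(n_pts, 3)

    # Transform each observed point into its camera frame and project with K.
    R = Rotation.from_rotvec(cams[cam_idx, :3]).as_matrix()
    xyz_cam = np.einsum('nij,nj->ni', R, pts[pt_idx]) + cams[cam_idx, 3:]
    uvw = xyz_cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    return (uv - observed_uv).ravel()               # residuals minimized by BA

# Usage sketch:
# result = least_squares(reprojection_residuals, x0, method='trf',
#                        args=(n_cams, n_pts, K, cam_idx, pt_idx, observed_uv))
```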