CN-121999140-A - Space target three-dimensional reconstruction method for inhibiting dark interference by combining brightness and transparency

CN121999140A

Abstract

The invention belongs to the field of space target imaging and discloses a space target three-dimensional reconstruction method that suppresses dark interference by jointly using brightness and transparency, addressing the problems of poor space target imaging quality and inaccurate pose estimation. The method jointly improves input image quality, adjusts camera poses, and trains the three-dimensional model. First, preprocessing based on lucky imaging is applied to the low-quality output of a non-ideal imaging system; a detector-free pretrained network then extracts dense feature points, from which the initial three-dimensional point cloud and initial camera poses are estimated, and RANSAC combined with Bundle Adjustment performs joint pose refinement. During training, an MLP-based optimizer computes the pose-adjustment offsets while all parameters of the three-dimensional Gaussian primitives are trained simultaneously; after several physical constraints are added, the whole procedure runs as a one-stop pipeline. The method keeps the basic shape of the model intact and its edges sharp while exhibiting good noise immunity, real-time performance, and robustness.

Inventors

  • Di Jianglei
  • Zhong Yale
  • Zhang Huan
  • Dou Jiazhen
  • Tang Ju
  • Qin Yuwen

Assignees

  • Guangdong University of Technology (广东工业大学)

Dates

Publication Date
2026-05-08
Application Date
2026-03-27

Claims (3)

  1. A space target three-dimensional reconstruction method for suppressing dark interference by combining brightness and transparency, characterized by comprising the following steps: S1, performing feature extraction and matching on an image sequence to be processed to obtain matching point pairs between all frames, and performing Structure-from-Motion (SfM) processing based on the matching point pairs to obtain the initial three-dimensional coordinates P of the space target and a camera view parameter list {T_i}, wherein the view parameter of the i-th camera is defined as a transform matrix T_i = [R_i | t_i], t_i being the position of the camera center and R_i the rotation matrix of the camera; S2, taking P and {T_i} as input and, under combined brightness-transparency constraints, performing coarse-to-fine training of the three-dimensional Gaussian primitive parameters with edge-aware and background-suppression control; S3, post-processing the three-dimensional Gaussian primitives to remove outlier noise and dark Gaussian primitives polluted by the background; S4, after step S3 is finished, i.e. after the three-dimensional reconstruction model is built, performing extrapolation and interpolation on the existing camera view parameters {T_i} to obtain camera view parameters of unknown new viewpoints; by inputting the camera view parameters of any new viewpoint, the 3DGS model can render the brightness, depth, normal, and transparency information of the space target at that viewpoint, and thus has the capability of expressing the three-dimensional structural information of the space target.
  2. The space target three-dimensional reconstruction method combining brightness and transparency to suppress dark interference according to claim 1, wherein step S2 comprises the following specific steps: S2-1, taking the camera centers t_i optimized by Bundle Adjustment and the three-dimensional point cloud P as the training starting point; S2-2, performing differentiable GPU-accelerated forward propagation to render, for each camera view, the image I_i, the absolute depth D_i, and the accumulated transparency A_i; S2-3, defining the loss functions as follows: a transparency control loss L_α = Σ_m w_m·α_m, wherein w_m is a weight screening points of lower brightness, so that the transparency α_m of the m-th Gaussian primitive is suppressed toward the lower of its current value and its brightness, and the loss takes effect only on Gaussian primitives of black noise, i.e. those whose brightness falls below the minimum threshold τ_min; a size control loss L_s = Σ_m g(σ_m) suppressing the overall size of the m-th Gaussian primitive, wherein σ_m denotes the eigenvalues of the covariance matrix Σ_m of that primitive, representing its extent in each direction, g applies enhanced suppression to Gaussian primitives of larger size, its input being the modulus of the covariance matrix, and t is a threshold hyper-parameter judging the size; a background loss L_bg = Σ_(u,v) B_i(u,v)·A_i(u,v), wherein B_i is the background mask of the image at the i-th camera view obtained by image segmentation and A_i is the accumulated transparency map computed by volume rendering; a color control loss L_c forcing each channel of the rendered image toward a uniform, ideal output gray level, wherein R(u,v), G(u,v), B(u,v) are the three RGB channel values of the pixel at position (u, v); and an overall loss function L = L_1 + λ_SSIM·L_SSIM + λ_α·L_α + λ_s·L_s + λ_bg·L_bg + λ_c·L_c, wherein λ_SSIM, λ_α, λ_s, λ_bg, λ_c are the hyper-parameters of the structural-similarity, transparency, Gaussian-primitive-size, background, and color-control losses, respectively; S2-4, back-propagating to the Gaussian primitive parameters, including the positions μ_m, transparencies α_m, covariance matrices Σ_m, and colors c_m, and performing gradient-descent learning.
  3. The space target three-dimensional reconstruction method combining brightness and transparency to suppress dark interference according to claim 1, wherein in step S2, after training of the three-dimensional Gaussian primitives is completed, outlier Gaussian primitives are removed by a spatial-distance clustering operation, and Gaussian primitives whose brightness falls below the minimum threshold τ_min are then removed, the threshold being computed by an adaptive method.
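The four constraint losses of claim 2 can be sketched in NumPy. Because the original equations were lost in extraction, every functional form below (the binary screening weight w_m, the quadratic size ramp over threshold t, the per-pixel gray target) is an assumption consistent with the surrounding prose, not the patent's exact formulation:

```python
import numpy as np

def transparency_loss(alpha, brightness, tau_min):
    """Suppress the opacity of dark ("black noise") Gaussians: only
    primitives whose brightness falls below tau_min contribute."""
    w = (brightness < tau_min).astype(float)  # screening weights w_m
    return float(np.sum(w * alpha))

def size_loss(cov_eigvals, t):
    """Penalize oversized Gaussians: the covariance eigenvalues measure
    extent per axis; only the excess over threshold t is penalized, with
    a quadratic ramp so larger primitives are suppressed more strongly."""
    excess = np.maximum(cov_eigvals - t, 0.0)
    return float(np.sum(excess ** 2))

def background_loss(acc_transparency, bg_mask):
    """Drive accumulated transparency toward zero on the segmented
    background region of each view."""
    return float(np.sum(acc_transparency * bg_mask))

def color_loss(rgb):
    """Push each pixel's R, G, B channels toward their common gray mean,
    encouraging a uniform, ideal gray output per channel."""
    gray = rgb.mean(axis=-1, keepdims=True)
    return float(np.mean(np.abs(rgb - gray)))
```

In training, these terms would be weighted by the hyper-parameters λ_α, λ_s, λ_bg, λ_c and summed with the rendering and SSIM losses.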
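The post-processing of claim 3 — spatial-distance clustering to drop outliers, then an adaptive minimum-brightness cut — might be realized as below. The neighbor-count criterion and the mean-minus-k-sigma threshold are illustrative assumptions; the patent specifies neither:

```python
import numpy as np

def remove_outliers(centers, radius, min_neighbors):
    """Spatial-distance clustering in the spirit of claim 3: a Gaussian
    primitive is kept only if at least min_neighbors other centers lie
    within `radius` of it. Returns a boolean keep-mask."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    neighbors = (d < radius).sum(axis=1) - 1  # exclude self
    return neighbors >= min_neighbors

def adaptive_brightness_threshold(brightness, k=1.0):
    """Adaptive minimum-brightness threshold tau_min (assumed form:
    mean - k*std); primitives below it are treated as background-polluted
    dark Gaussians and dropped."""
    return float(brightness.mean() - k * brightness.std())
```

The pairwise-distance matrix is O(N^2) in memory; a KD-tree (e.g. `scipy.spatial.cKDTree`) would be the practical choice for large primitive counts.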

Description

Space target three-dimensional reconstruction method for suppressing dark interference by combining brightness and transparency

Technical Field

The invention relates to the field of space target imaging, and in particular to a space target three-dimensional reconstruction method that suppresses dark interference by combining brightness and transparency.

Background

Space target imaging and three-dimensional reconstruction are at the leading edge of detection and identification in the aerospace field, but imaging quality is severely limited by the long-range imaging systems used for space targets. Through a series of image processing and reconstruction methods, quality-improved imaging of the target to be detected can be recovered from a sequence of time-series high-dynamic images. The main three-dimensional reconstruction methods include Structure from Motion (SfM) point cloud construction, voxel construction, Pix2Vox, MVSNet, Neural Radiance Fields (NeRF), 3D Gaussian Splatting (3DGS), and the like. The 3DGS algorithm, proposed by the French National Institute for Research in Digital Science and Technology (INRIA) in 2023, retains the ray-wise transparency-weighted summation rendering of neural radiance fields, but renders and builds on 3D Gaussian primitives; without the heavy computational demand of a large neural network, its reconstruction runs in real time at speeds far beyond neural radiance fields with comparable accuracy, and it is widely applied as the core algorithm in current three-dimensional reconstruction tasks.
In the course of improving the algorithm, texture-less, low-feature blurred images and single-object, simple-background scenes have not been specially optimized, and effective three-dimensional reconstruction is especially difficult for long-range, high-dynamic space target imaging.

Disclosure of Invention

The invention aims to overcome the defects of the prior art by providing a space target three-dimensional reconstruction method that suppresses dark interference by combining brightness and transparency. Compared with the traditional pipeline of Colmap-based feature extraction followed by 3D Gaussian Splatting (3DGS), the improved algorithm constructs a high-resolution space target three-dimensional model with a clean background while preserving sharp, clear edges. The technical scheme for solving the above technical problems is as follows: a space target three-dimensional reconstruction method for suppressing dark interference by combining brightness and transparency comprises the following steps: S1, preprocessing a low-quality input image sequence of a space target.
Dense feature points and feature descriptors are extracted from the preprocessed sharp images with the SuperPoint method, feature matching is performed with SuperGlue to obtain paired matched two-dimensional points, the matched two-dimensional points are triangulated to obtain back-projected three-dimensional points and the camera pose corresponding to each image, and finally the three-dimensional points and the camera view parameter transform matrices T_i are jointly optimized with the Bundle Adjustment method; S2, taking the point cloud P and {T_i} as input, performing coarse-to-fine training of the three-dimensional Gaussian primitive parameters with edge-aware and background-suppression control; S3, post-processing the three-dimensional Gaussian primitives to remove outlier noise and dark Gaussian primitives polluted by the background; S4, after step S3 is finished, i.e. after the three-dimensional reconstruction model is built, performing extrapolation and interpolation on the existing camera view parameters {T_i} to obtain camera view parameters of unknown new viewpoints. By inputting the camera view parameters of any new viewpoint, the 3DGS model can render the brightness, depth, normal, and transparency information of the space target at that viewpoint, and thus has the capability of expressing the three-dimensional structural information of the space target. Preferably, the specific steps in step S2 are as follows: S2-1, taking the input camera view parameters T_i and the three-dimensional point cloud P as the training starting point.
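The extrapolation and interpolation of camera view parameters in step S4 is commonly realized by interpolating camera centers linearly and orientations spherically. The following sketch (quaternion slerp, scalar-first convention) is one plausible realization under those assumptions, not the patent's stated procedure:

```python
import numpy as np

def slerp(q0, q1, s):
    """Spherical linear interpolation between two unit quaternions
    (scalar-first convention), s in [0, 1]."""
    dot = float(np.dot(q0, q1))
    if dot < 0.0:            # take the shorter great-circle arc
        q1, dot = -q1, -dot
    if dot > 0.9995:         # nearly parallel: linear fallback, renormalized
        q = q0 + s * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - s) * theta) * q0 + np.sin(s * theta) * q1) / np.sin(theta)

def interpolate_pose(c0, c1, q0, q1, s):
    """Novel-view pose between two calibrated cameras: linear interpolation
    of camera centers, slerp of orientations."""
    return (1 - s) * c0 + s * c1, slerp(q0, q1, s)
```

Feeding the interpolated (center, orientation) pair back through the transform-matrix parameterization yields the new-viewpoint camera parameters that the trained 3DGS model renders from.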
S2-2, performing differentiable GPU-accelerated forward propagation to render, for the corresponding view, the image I_i, the absolute depth D_i, and the accumulated transparency A_i. S2-3, the loss function is defined as: a transparency control loss L_α, wherein the weight w_m screens points of lower brightness so that the transparency α_m of the m-th Gaussian primitive is suppressed toward the lower of its current value and its brightness, the weights ensuring the loss acts only on gaussian