CN-121982258-A - Live-action three-dimensional modeling method based on multi-source aerial-triangulation fusion of oblique photography and laser point clouds
Abstract
The invention relates to the technical field of three-dimensional modeling, and discloses a live-action three-dimensional modeling method based on multi-source aerial-triangulation fusion of oblique photography and laser point clouds, which comprises steps S101 to S106: among them, S105 re-parameterizes local texture parameter coordinates and S106 outputs the live-action three-dimensional model; in these steps, a local elevation field is constructed and the slope direction is calculated, a vertex cracking chain is extracted, a drift driving function is fitted, and the camera poses and geometric coordinates are updated. The method is suited to canyon scenes with large height differences and corridor scenes with weak geometry. Through a coherent pipeline of multi-source spatio-temporal unification, structural feature extraction, geometric observation fitting, joint optimization and topology restoration, it coordinates the complementary advantages of oblique photographic images and laser point clouds, so that cross-modal drift arising from weak geometric constraints is suppressed in a targeted manner, mesh topology cracks and vertex cracking are eliminated, and the consistency between texture mapping and geometric structure is improved.
Inventors
- YAN ZHENJUN
- CHEN XI
- HUANG XIAOLI
- YU BIAO
- DUAN XUELI
- KANG MENGBIN
- ZHANG WENJUN
- GAO YUBO
Assignees
- 滁州学院 (Chuzhou University)
Dates
- Publication Date
- 20260505
- Application Date
- 20260129
Claims (10)
- 1. A live-action three-dimensional modeling method based on multi-source aerial-triangulation fusion of oblique photography and laser point clouds, characterized by comprising the following steps: Step S101, acquiring oblique photographic images, camera internal parameters and laser point clouds, calculating the initial camera poses, unifying the laser point clouds into a world coordinate system, constructing a local elevation field, and calculating the slope direction; Step S102, generating a triangular mesh with texture parameter coordinates based on the initial camera poses, and extracting the vertex cracking chain with the largest accumulated cracking energy according to the cracking degree of each vertex in the texture parameter space and the orthogonality between the joint edges and the slope direction; Step S103, calculating the projections, in the slope direction, of the centroid differences of the triangular patches on the two sides of each joint edge in the vertex cracking chain to obtain slope shear-jump observations, establishing a one-dimensional position sequence along the corridor trend, and fitting a drift driving function with a second-order smoothness regularizer; Step S104, constructing a joint objective function comprising an image re-projection error, a laser point-to-plane residual and a smoothness term, superposing the drift driving function on each camera center along the slope direction to correct the pose, and performing a joint solution to update the camera poses and geometric coordinates; Step S105, determining an adaptive scale from the median nearest-neighbor distance of the laser point cloud, merging spatially overlapping cracked vertex instances, and re-parameterizing the local texture parameter coordinates according to the consistency of the ratios between three-dimensional edge lengths and texture edge lengths; and Step S106, based on the repaired mesh topology and the local texture parameter coordinates, calculating the weighting of each view image for each patch using the updated camera poses, performing multi-view texture fusion, and outputting the live-action three-dimensional model.
- 2. The live-action three-dimensional modeling method based on multi-source aerial-triangulation fusion of oblique photography and laser point clouds according to claim 1, characterized in that a plurality of oblique photographic images are collected, a camera internal reference matrix is set for each oblique photographic image, laser point clouds are collected, and the three-dimensional coordinates of each laser point in the laser sensor coordinate system are recorded; bundle adjustment is performed on the oblique photographic images, a projection relation between the three-dimensional coordinates of any spatial point and the homogeneous image-plane coordinates of its image point on each oblique photographic image is established, a rotation matrix and a translation vector are calculated by minimizing the projection residuals of all image points, and the rotation matrix and the translation vector form the initial camera pose; a sampling time is recorded for each laser point, the carrier attitude rotation matrix and carrier translation vector at that sampling time are determined, the fixed rotation matrix and fixed translation vector of the laser sensor relative to the carrier are determined, and each laser point is converted into the world coordinate system according to the rigid transformation relationship to obtain its three-dimensional coordinates in the world coordinate system; a local subset of the laser point cloud in the world coordinate system is selected, the local elevation field is expressed by a plane function, and the coefficients of the plane function are solved under a least-squares constraint to construct the local elevation field; and a horizontal gradient vector is determined from the local elevation field, the horizontal gradient vector is normalized to obtain the unit vector of the slope direction in the horizontal plane, and this unit vector is extended to three-dimensional space to obtain the slope direction.
- 3. The live-action three-dimensional modeling method based on multi-source aerial-triangulation fusion of oblique photography and laser point clouds according to claim 1, characterized in that, using the initial camera pose set and the oblique photographic images, a dense three-dimensional point set is obtained through multi-view geometric reconstruction, a triangular mesh is constructed on the dense three-dimensional point set, the vertex set and the triangular patch set are recorded, three-dimensional coordinates are calculated for each vertex in the vertex set, texture parameter coordinates are assigned to each vertex through projection and texture unwrapping to obtain a triangular mesh with texture parameter coordinates, the number of instances of each vertex in the texture parameter space is counted, the number of triangular patches adjacent to the vertex in the geometric mesh is counted, and the cracking degree of the vertex in the texture parameter space is defined as the ratio of the number of instances to the number of adjacent triangular patches.
- 4. The live-action three-dimensional modeling method based on multi-source aerial-triangulation fusion of oblique photography and laser point clouds according to claim 3, characterized in that edges lying on the boundaries between different blocks in the texture parameter map are added to a joint edge set, a three-dimensional direction vector is defined for each joint edge, and the orthogonality coefficient between the joint edge and the slope direction is calculated using the three-dimensional direction vector and the slope direction vector; and connected joint edge sets are selected from the joint edge set, the sum of the joint-edge cracking energies within each connected joint edge set is calculated to obtain its accumulated cracking energy, the set with the largest accumulated cracking energy is determined among all connected joint edge sets, and this set forms the vertex cracking chain with the largest accumulated cracking energy.
- 5. The live-action three-dimensional modeling method based on multi-source aerial-triangulation fusion of oblique photography and laser point clouds according to claim 1, characterized in that, for each joint edge in the vertex cracking chain, the two triangular patches adjacent to the joint edge in the triangular patch set are determined, and the mean of the three vertex coordinates of each of the two triangular patches is calculated to obtain their centroids; the centroid of the laser point cloud in the world coordinate system is calculated, a covariance matrix is constructed from the three-dimensional coordinates of each laser point and the point cloud centroid, eigendecomposition is performed on the covariance matrix, and the eigenvector corresponding to the largest eigenvalue is selected and normalized to obtain the unit vector of the corridor trend; and an objective functional is constructed comprising a data-fitting term and a second-order smoothness regularization term, wherein the data-fitting term is the sum of squared differences between the slope shear-jump observations and the values of the drift driving function at the corresponding one-dimensional positions, the second-order smoothness regularization term is the product of a smoothness weight coefficient and the integral of the squared second derivative of the drift driving function with respect to the one-dimensional position, and the drift driving function is determined by minimizing the objective functional.
- 6. The live-action three-dimensional modeling method based on multi-source aerial-triangulation fusion of oblique photography and laser point clouds according to claim 1, characterized in that, for each camera in the initial camera pose set, the camera center in the world coordinate system is calculated from the camera's rotation matrix and translation vector, the one-dimensional position of the camera center along the corridor is calculated using the unit vector of the corridor trend and the laser point cloud centroid, a corrected camera center is calculated using the value of the drift driving function at that one-dimensional position and the slope direction vector, the corrected camera center is written back into the translation term to obtain an updated camera translation vector, and the camera's rotation matrix together with the updated translation vector forms the updated camera pose.
- 7. The live-action three-dimensional modeling method based on multi-source aerial-triangulation fusion of oblique photography and laser point clouds according to claim 6, characterized in that a projection relation between three-dimensional point geometric coordinates and their projected image points on an oblique photographic image is established using the camera internal reference matrix and the updated camera pose, and the sum of squared differences between the homogeneous image-point coordinates and the projected image points is calculated as the image re-projection error; using the laser point cloud in the world coordinate system and associating a local plane parameter vector with each laser point, the sum of squared distances between the laser points and their local planes is calculated as the laser point-to-plane residual; and the image re-projection error, the laser point-to-plane residual and the smoothness term are combined with weights to obtain the joint objective function, and the updated camera pose set and the updated three-dimensional point geometric coordinate set are obtained by minimizing the joint objective function.
- 8. The live-action three-dimensional modeling method based on multi-source aerial-triangulation fusion of oblique photography and laser point clouds according to claim 1, characterized in that, for each laser point in the point cloud in the world coordinate system, the nearest-neighbor distance to the other laser points is calculated, and the median of the nearest-neighbor distances of all laser points is taken as the median nearest-neighbor distance of the laser point cloud; in the triangular mesh with texture parameter coordinates, for any pair of instances in the cracked instance set of a geometric vertex, their three-dimensional distance is calculated, and when this distance is less than or equal to the adaptive scale, the pair of instances is merged into a single geometric vertex, the mean of their three-dimensional coordinates is taken as the coordinates of the merged vertex, and all local triangular patches originally belonging to either instance are re-associated with the merged vertex in the geometric structure; and a local triangular patch set containing the merged vertex is selected, the ratio of the sum of the texture edge lengths of all edges in this set to the sum of their three-dimensional edge lengths is calculated to obtain a local scale factor, a local texture parameter coordinate re-parameterization objective function is constructed as the sum of squares of the texture edge length minus the product of the local scale factor and the three-dimensional edge length, and the local texture parameter coordinates are solved by minimizing this objective function.
- 9. The live-action three-dimensional modeling method based on multi-source aerial-triangulation fusion of oblique photography and laser point clouds according to claim 1, characterized in that, using the repaired mesh topology, for each triangular patch in the triangular patch set, the mean of its three vertex coordinates is calculated to obtain the patch center point; and a line-of-sight direction vector pointing from the camera center to the patch center point is constructed, the modulus of the line-of-sight direction vector is calculated to obtain the camera-to-patch distance, and the line-of-sight direction vector is normalized to obtain the line-of-sight unit vector.
- 10. The live-action three-dimensional modeling method based on multi-source aerial-triangulation fusion of oblique photography and laser point clouds according to claim 9, characterized in that the inner product of the patch normal vector and the line-of-sight unit vector is calculated, and the larger of the inner product and zero is taken as the cosine of the incidence angle; the triangular patches are mapped onto each view image using the local texture parameter coordinates and the updated camera poses to obtain color vectors, the color vectors of the view images are averaged with the weighting weights to obtain the final color vectors, a textured three-dimensional mesh is constructed from the geometric coordinates, topological connectivity and final color vector of each triangular patch, and the live-action three-dimensional model is output.
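The local elevation field and slope direction of claim 2 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names are invented, and the way the horizontal slope unit vector is "extended to three-dimensional space" (appending the elevation gain per unit horizontal step along the gradient) is one plausible reading of the claim.

```python
import numpy as np

def fit_local_elevation_field(points):
    """Least-squares fit of a local plane z = a*x + b*y + c (claim 2)."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def slope_direction(a, b):
    """Unit slope direction: normalized horizontal gradient, lifted to 3-D."""
    g = np.hypot(a, b)
    if g < 1e-12:
        return np.zeros(3)               # flat terrain: slope undefined
    d2 = np.array([a, b]) / g            # unit vector in the horizontal plane
    d3 = np.array([d2[0], d2[1], g])     # elevation gain g per unit horizontal step
    return d3 / np.linalg.norm(d3)
```

For points sampled from the plane z = 2x + 1, the fit recovers (a, b, c) = (2, 0, 1) and a unit slope vector tilted upward along +x.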
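The vertex cracking degree of claim 3 (number of texture-space instances divided by number of adjacent triangular patches) admits a short sketch. The data layout below (a per-corner map from (face, vertex) to a UV-instance id) is an assumption for illustration, not the patent's data structure.

```python
from collections import defaultdict

def cracking_degree(faces, uv_instance_of):
    """
    faces: triangles as vertex-index triples.
    uv_instance_of: maps (face_index, vertex_index) -> UV-instance id,
    i.e. which texture-space copy of the vertex that face uses.
    Returns per-vertex cracking degree: #UV instances / #adjacent faces.
    """
    adjacent = defaultdict(int)
    instances = defaultdict(set)
    for fi, tri in enumerate(faces):
        for v in tri:
            adjacent[v] += 1
            instances[v].add(uv_instance_of[(fi, v)])
    return {v: len(instances[v]) / adjacent[v] for v in adjacent}
```

A vertex whose two adjacent faces use two different UV copies gets degree 2/2 = 1, while an uncracked shared vertex gets 1/2 = 0.5.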
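For claim 4, the orthogonality coefficient and chain selection can be sketched as below. The patent does not give a formula for the coefficient; `1 - |cos|` (1 when the joint edge is orthogonal to the slope, 0 when parallel) is one plausible convention, and `edge_energy` is a hypothetical per-edge cracking-energy map.

```python
import numpy as np

def orthogonality_coeff(edge_dir, slope_dir):
    """1 - |cos angle| between a joint edge and the slope direction:
    1 when exactly orthogonal, 0 when parallel (assumed convention)."""
    e = np.asarray(edge_dir, float)
    s = np.asarray(slope_dir, float)
    e /= np.linalg.norm(e)
    s /= np.linalg.norm(s)
    return 1.0 - abs(float(np.dot(e, s)))

def best_chain(connected_sets, edge_energy):
    """Pick the connected joint-edge set with the largest accumulated
    cracking energy (claim 4)."""
    return max(connected_sets, key=lambda ch: sum(edge_energy[e] for e in ch))
```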
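The drift-driving-function fit of claims 3 and 5 (data term plus second-order smoothness) can be sketched as a discrete regularized least-squares problem. This is an assumption-laden simplification: the drift function is sampled on a uniform 1-D grid, the integral of the squared second derivative is approximated by squared second differences, and observations are snapped to the nearest grid point.

```python
import numpy as np

def fit_drift_function(s_obs, jumps, grid, lam=1.0):
    """
    Fit a drift-driving function d on `grid` from slope shear-jump
    observations (s_obs, jumps), minimizing
        sum_i (d(s_i) - o_i)^2 + lam * sum_k (d''_k)^2   (claim 5, discretized).
    """
    n = len(grid)
    h = grid[1] - grid[0]
    # Nearest-grid-point sampling of the observations (a simplification).
    idx = np.clip(np.round((s_obs - grid[0]) / h).astype(int), 0, n - 1)
    A = np.zeros((len(s_obs), n))
    A[np.arange(len(s_obs)), idx] = 1.0
    # Second-difference operator approximating the second derivative.
    D = np.zeros((n - 2, n))
    for k in range(n - 2):
        D[k, k:k + 3] = [1.0, -2.0, 1.0]
    D /= h ** 2
    # Normal equations of the regularized least-squares problem.
    M = A.T @ A + lam * (D.T @ D)
    return np.linalg.solve(M, A.T @ np.asarray(jumps, float))
```

A linear drift has zero second derivative, so observations sampled from a linear trend are reproduced exactly regardless of the smoothness weight.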
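The camera-center correction of claim 6 can be sketched directly from the standard relation C = -Rᵀt between a camera's rotation R, translation t, and center C. The function name and the callable `drift` interface are illustrative assumptions.

```python
import numpy as np

def correct_camera_pose(R, t, corridor_u, cloud_centroid, slope_dir, drift):
    """
    Shift the camera center along the slope direction by the drift value
    at its corridor position, then write the result back into the
    translation term (claim 6). `drift` is a callable s -> drift value.
    """
    C = -R.T @ t                              # camera center in the world frame
    s = corridor_u @ (C - cloud_centroid)     # 1-D position along the corridor
    C_corr = C + drift(s) * slope_dir         # slide the center along the slope
    t_new = -R @ C_corr                       # write back via t = -R C
    return R, t_new
```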
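Two terms of claim 7's joint objective can be sketched as below: the laser point-to-plane residual, and the weighted combination of the three terms. The weight values are illustrative defaults, not values from the patent, and the plane is encoded as (n, d) with unit normal n so that a point's signed distance is n·p + d.

```python
import numpy as np

def point_to_plane_residual(points, plane):
    """Sum of squared distances from laser points to an associated local
    plane (nx, ny, nz, d) with unit normal (claim 7)."""
    n = np.asarray(plane[:3], float)
    d = float(plane[3])
    return float(((points @ n + d) ** 2).sum())

def joint_objective(reproj_err, p2p_err, smooth_err,
                    w_img=1.0, w_lidar=1.0, w_smooth=0.1):
    """Weighted combination of re-projection, point-to-plane and
    smoothness terms; the weights here are assumed, not from the patent."""
    return w_img * reproj_err + w_lidar * p2p_err + w_smooth * smooth_err
```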
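Claim 8's adaptive scale and instance merging can be sketched as follows. The brute-force O(n²) nearest-neighbor computation and the greedy pairwise pass over one vertex's instance list are simplifications for illustration; a real pipeline would use a spatial index.

```python
import numpy as np

def adaptive_scale(points):
    """Median nearest-neighbor distance of the laser point cloud (claim 8)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return float(np.median(d.min(axis=1)))

def merge_cracked_instances(instance_coords, scale):
    """Greedily merge pairs of cracked-vertex instances whose 3-D distance
    is <= the adaptive scale, averaging their coordinates."""
    merged = [np.asarray(p, float) for p in instance_coords]
    i = 0
    while i < len(merged):
        j = i + 1
        while j < len(merged):
            if np.linalg.norm(merged[i] - merged[j]) <= scale:
                merged[i] = 0.5 * (merged[i] + merged[j])
                merged.pop(j)
            else:
                j += 1
        i += 1
    return merged
```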
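Finally, the incidence-angle cosine and weighted color fusion of claim 10 can be sketched as below (function names and the sign convention for the line-of-sight vector are assumptions; the claim only specifies clamping the inner product at zero and weight-averaging the per-view colors).

```python
import numpy as np

def incidence_cosine(face_normal, view_dir):
    """max(0, n . v): cosine of the incidence angle, clamped (claim 10)."""
    return max(0.0, float(np.dot(face_normal, view_dir)))

def fuse_colors(colors, weights):
    """Weighted average of per-view color vectors for one patch."""
    w = np.asarray(weights, float)
    c = np.asarray(colors, float)
    return (w[:, None] * c).sum(axis=0) / w.sum()
```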
Description
Live-action three-dimensional modeling method based on multi-source aerial-triangulation fusion of oblique photography and laser point clouds
Technical Field
The invention relates to the technical field of three-dimensional modeling, in particular to a live-action three-dimensional modeling method based on multi-source aerial-triangulation fusion of oblique photography and laser point clouds.
Background
In the field of live-action three-dimensional modeling, the fusion of oblique photographic images and laser point clouds has become a core means of obtaining high-precision scene models. Oblique photographic images provide rich high-resolution texture information and, by virtue of multi-view capture, clearly restore scene surface details, while laser point clouds have strong geometric constraint capability and can accurately capture three-dimensional structure in weakly textured, shadowed and repetitively textured areas; their complementarity can markedly improve the completeness and expressiveness of a model, and the combination is widely applied to modeling tasks over complex terrain such as canyons and corridors. However, canyon and corridor scenes have marked specificities: the terrain height difference is large, the spatial trend is pronounced, and the scenes are long and narrow, which readily produces a weakly constrained geometric environment. In such an environment, scene feature points are sparsely distributed and difficult to match, so bundle adjustment is under-constrained when solving the camera poses, and systematic drift readily arises along the corridor direction. Existing fusion methods have clear limitations: the laser point cloud is used merely as a registration reference or a geometric hole-filling tool, no dedicated pipeline is designed for the weak geometric characteristics of such scenes, and no mechanism is established linking structural anomalies to geometric constraints.
During cross-modal data fusion, the systematic drift of the camera poses is further amplified, which directly causes linked problems. In the texture unwrapping stage, the drift makes the mesh topology inconsistent with the texture mapping and dense seam bands appear, causing the vertex cracking phenomenon, namely that the same geometric vertex is duplicated into multiple instances in the texture parameter space. The cracked vertices become associated with dense joint edges to form geometric offsets along the slope direction; at the same time the mesh develops topological cracks, and the textures visibly split at the cracked vertices. In addition, existing methods cannot convert these structural anomalies into quantifiable geometric observations, the joint optimization lacks targeted constraints, the solidification of drift cannot be effectively suppressed, and the resulting models have poor geometric continuity and insufficient texture consistency, making it difficult to meet the accuracy requirements of practical applications such as engineering surveying and scene visualization.
Disclosure of Invention
The invention provides a live-action three-dimensional modeling method based on multi-source aerial-triangulation fusion of oblique photography and laser point clouds, which solves the technical problems identified in the background above.
The invention provides a live-action three-dimensional modeling method based on multi-source aerial-triangulation fusion of oblique photography and laser point clouds, which comprises the following steps: Step S101, acquiring oblique photographic images, camera internal parameters and laser point clouds, calculating the initial camera poses, unifying the laser point clouds into a world coordinate system, constructing a local elevation field, and calculating the slope direction; Step S102, generating a triangular mesh with texture parameter coordinates based on the initial camera poses, and extracting the vertex cracking chain with the largest accumulated cracking energy according to the cracking degree of each vertex in the texture parameter space and the orthogonality between the joint edges and the slope direction; Step S103, calculating the projections, in the slope direction, of the centroid differences of the triangular patches on the two sides of each joint edge in the vertex cracking chain to obtain slope shear-jump observations, establishing a one-dimensional position sequence along the corridor trend, and fitting a drift driving function with a second-order smoothness regularizer; Step S104, constructing a joint objective function comprising an image re-projection error, a laser point-to-plane residual and a smoothness term, superposing the drift driving function on each camera center along the slope direction to correct the pose, and performing a joint solution to update the camera poses and geometric coordinates; Step S105, determining an adaptive scale according to the median of the nearest neighbor distance of