CN-122023648-A - Real-time three-dimensional reconstruction method and system based on unmanned equipment
Abstract
The invention provides a real-time three-dimensional reconstruction method and system based on unmanned equipment. To address the latency problem of the traditional "acquire first, reconstruct later" workflow and the combined bottleneck of limited onboard computing power and insufficient bandwidth on weak networks, the invention generates a unified time signal through a hardware synchronization module to achieve microsecond-level synchronization of multi-source data. The onboard side builds batch-processing pipelines tailored to the characteristics of each heterogeneous data type, namely octree compression for point clouds, JPEG encoding for images, and byte-aligned packing for IMU data, which reduces computational cost and significantly compresses the data volume. The transport layer introduces the QoS mechanism of the MQTT protocol together with a local persistent cache, and resumable transmission ensures that data arrives reliably despite network fluctuation. In summary, the invention achieves end-to-end real-time processing through hardware-software cooperation, effectively reduces bandwidth occupation, and delivers stable, high-precision real-time three-dimensional reconstruction in weak-network environments.
Inventors
- YANG BAILIN
- TANG XUETAO
- QIU FANGCHENG
Assignees
- Zhejiang Gongshang University (浙江工商大学)
Dates
- Publication Date
- 20260512
- Application Date
- 20260119
Claims (10)
- 1. A real-time three-dimensional reconstruction method based on unmanned equipment, characterized by comprising the following steps: S1, a hardware synchronization module controls image acquisition and performs spatio-temporal alignment of multi-source data; S2, the onboard computing device receives the multi-source data, timestamps it with a unified time signal, and publishes it locally; S3, the timestamped multi-source data is processed in real time per modality using a multithreaded parallel architecture; S4, the onboard computing device sends the data processed in S3 to a reconstruction end in real time using a message transmission protocol; and S5, the reconstruction end receives and parses the data, aligns it by timestamp, and feeds it into a real-time reconstruction algorithm to generate a three-dimensional model in real time.
- 2. The method according to claim 1, wherein S1 comprises: the hardware synchronization module generates a first trigger signal to control the camera to acquire images at a predetermined frequency, generates a second trigger signal to calibrate the lidar clock, and generates a unified time signal that is sent to the onboard computing device.
- 3. The method according to claim 2, wherein S2 is specifically: the hardware synchronization module continuously transmits an analog time signal to the onboard computing device; the ROS driver on the onboard side parses the analog time signal upon receiving the multi-source data; and a unified hardware time reference is written into the timestamp of each frame of ROS message, achieving microsecond-level spatio-temporal alignment of the multi-source data at the physical acquisition source.
- 4. A method according to any of claims 1-3, wherein the real-time per-modality processing comprises: for lidar point cloud data, starting an asynchronous task pool to perform preprocessing and octree compression; for camera image data, adopting a lossy compression strategy to balance image quality against bandwidth; and for IMU data, using a batch-processing strategy to increase the effective payload ratio.
- 5. The method according to claim 4, wherein the lidar point cloud data is processed as follows: a multithreaded scheduler fetches data from the message queue and launches a pool of worker threads to execute the point cloud processing pipeline in parallel: removing invalid points and filtering out noise points whose distance from the origin exceeds a threshold; computing the centroid of the point cloud and de-meaning it to improve compression accuracy; and encoding the point cloud into a binary stream based on an octree spatial partitioning structure.
- 6. The method of claim 4, wherein the batch-processing strategy operates as follows: in the serialization phase, a compact data structure is defined containing timestamp, orientation, angular velocity, and linear acceleration fields; the memory layout of the data structure is optimized with 1-byte alignment to eliminate padding bytes; and a predetermined number of data structure instances are concatenated into a single binary message packet.
- 7. The method according to claim 1, wherein S4 comprises: the communication transmission module sends the data processed in step S3 to the reconstruction end over a customized transmission protocol in which a double-precision floating-point timestamp is written into the head of the MQTT message payload, followed by the compressed binary data body; the transport layer publishes messages via the MQTT protocol, configures the quality-of-service level, and enables a local persistent cache mechanism; and once the network recovers, a background service automatically retransmits the buffered historical data, ensuring end-to-end data integrity and preventing the reconstruction trajectory from breaking due to frame loss.
- 8. The method of claim 7, wherein S5 comprises: S5.1, the reconstruction end acts as an MQTT subscriber, monitors the relevant topics in real time, and upon receiving a data packet executes an unpacking flow that is the inverse of the packing flow on the onboard computing device; and S5.2, subscribing to the local ROS topics decompressed at the reconstruction end, performing time alignment according to the timestamps, and generating a reconstruction result with a real-time reconstruction algorithm.
- 9. The method according to claim 8, wherein S5.2 is specifically: the system enters the reconstruction stage, and the back end runs a real-time reconstruction algorithm; the local ROS topics decompressed at the reconstruction end are subscribed to, and the multi-source data is time-aligned according to the unified hardware timestamp carried in each data frame; an ImuProcess module uses the IMU data to complete forward-propagation state prediction and point cloud motion de-distortion; an error-state iterated Kalman filter framework performs the core state estimation, alternately updating the same state vector with point-to-plane geometric residuals and photometric errors; and the pose, trajectory, and reconstructed point cloud solved in real time are published through the ROS interface and displayed in real time in the visualization interface.
- 10. A real-time three-dimensional reconstruction system based on unmanned equipment, comprising: an onboard acquisition end mounted on the unmanned equipment and comprising a lidar, a camera, an IMU, and an onboard computing device; a hardware synchronization module for generating a first trigger signal to the camera, a second trigger signal to the lidar, and a unified time signal to the onboard computing device; an onboard processing module for timestamping the multi-source data with the unified time signal and processing the data in real time per modality; a wireless transmission module for providing the network connection between the onboard acquisition end and a message server; the message server for receiving data published by the onboard processing module and forwarding it to clients subscribed to the corresponding topics; and a reconstruction receiving module for subscribing to the message server to receive data and running the three-dimensional reconstruction algorithm in real time.
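The byte-aligned batch serialization of claim 6 can be illustrated with a minimal sketch. This is not the patented implementation: the exact field layout (a float64 timestamp, a 4-float orientation quaternion, and 3-float angular-velocity and linear-acceleration vectors) is an assumption, and Python's `struct` module with the `<` byte-order prefix stands in for a 1-byte-aligned C structure, since that prefix disables compiler-style padding.

```python
import struct

# Hypothetical record layout (assumption, not from the patent):
# float64 timestamp, 4x float32 quaternion, 3x float32 gyro, 3x float32 accel.
# The '<' prefix means little-endian with NO padding bytes, i.e. 1-byte alignment.
IMU_FMT = "<d4f3f3f"
IMU_SIZE = struct.calcsize(IMU_FMT)  # 48 bytes per record, no stuffing bytes

def pack_imu_batch(samples):
    """Concatenate a batch of IMU records into one binary message packet."""
    return b"".join(
        struct.pack(IMU_FMT, s["t"], *s["quat"], *s["gyro"], *s["accel"])
        for s in samples
    )

def unpack_imu_batch(blob):
    """Inverse unpacking flow on the reconstruction side."""
    out = []
    for off in range(0, len(blob), IMU_SIZE):
        vals = struct.unpack_from(IMU_FMT, blob, off)
        out.append({"t": vals[0], "quat": vals[1:5],
                    "gyro": vals[5:8], "accel": vals[8:11]})
    return out
```

Because the packed record carries no padding, a batch of N samples occupies exactly 48·N bytes, which is what the claim means by raising the effective payload ratio relative to sending one ROS message per sample.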
Description
Real-time three-dimensional reconstruction method and system based on unmanned equipment
Technical Field
The invention belongs to the technical field of unmanned-equipment mapping and three-dimensional reconstruction, and particularly relates to a method and system for multi-sensor (lidar, camera, and IMU) data fusion, real-time compression, wireless transmission, and three-dimensional reconstruction based on an unmanned equipment platform.
Background
In recent years, with the rapid development of unmanned equipment technology and three-dimensional sensors, high-precision three-dimensional reconstruction using sensors such as lidar, cameras, and inertial measurement units mounted on unmanned equipment has shown enormous application value in fields such as surveying and mapping and power-line inspection. However, existing unmanned-equipment three-dimensional reconstruction systems still face significant technical bottlenecks in real-time performance, mainly in the following two respects. (1) Severe lag between data acquisition and processing: the traditional workflow is commonly "acquire first, process later". After the unmanned equipment completes data acquisition on site, it must return so the data can be recovered, after which the massive raw data is imported into a high-performance workstation for offline processing and reconstruction. This typically delays the availability of results by several hours and cannot satisfy real-time application scenarios that require immediate feedback and decision-making. (2) Bottlenecks in onboard data processing and network transmission: to build the map while in motion, the huge data streams acquired by the unmanned equipment must be transmitted to the reconstruction end in real time, which raises the two core challenges of network bandwidth and onboard computing power.
The prior art mainly follows two technical routes, both with limitations. The first route deploys the full SLAM algorithm on the onboard computer and sends only the processed low-bandwidth trajectory or sparse map to the ground. While this approach circumvents the bandwidth problem through edge computation, it is constrained by the power-consumption and heat-dissipation limits of onboard devices, whose computing power (CPU/GPU) is often insufficient to support high-precision or high-resolution real-time mapping. Once the algorithm complexity exceeds the upper limit of the onboard computing power, severe frame-rate drops or even system halts can occur, and high-precision mapping requirements cannot be met. The second route sends the data to a high-performance host for reconstruction, but when it attempts to overcome the bandwidth limitation during transmission it typically relies on generic streaming-media protocols or data-thinning strategies, and both struggle to meet the strict requirements of real-time SLAM for data integrity. On one hand, generic compression standards lack support for multi-sensor spatio-temporal characteristics: existing video-stream coding and general-purpose point cloud compression algorithms are computationally expensive and easily introduce hundreds of milliseconds of codec latency, and standard video-stream protocols usually cannot carry microsecond hardware-triggered timestamps. This directly prevents the back-end reconstruction system from aligning the lidar and camera data at very high temporal resolution, causing trajectory drift in the fusion algorithm.
On the other hand, indiscriminate data compression damages the geometric structure of the environment. To forcibly fit the bandwidth of a weak network, prior schemes usually thin the point cloud with strategies such as downsampling, which can cause the loss of fine features such as power lines and towers. This lossy processing, at the expense of critical geometry and high-frequency motion information, directly breaks the continuity of the SLAM front-end odometry and easily loses tracking during rapid motion.
Disclosure of Invention
In order to solve the lag problem of the traditional "acquire first, reconstruct later" workflow and the inability of the prior art to perform real-time three-dimensional reconstruction under poor network conditions due to limited onboard computing power and network bandwidth, the main purpose of the invention is to provide a real-time three-dimensional reconstruction method and system based on unmanned equipment. Based on the above object, a first aspect of the present invention provides a real-time three-dimensional reconstruction method based on unmanned equipment, comprising the steps of: S1, a hardware synchronization module controls image acquisition and performs spatio-temporal alignment of multi-source data;
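The transport-layer framing described in claim 7, a double-precision floating-point timestamp at the head of the MQTT payload followed by the compressed binary body, can be sketched as follows. This is a minimal sketch under assumptions: `zlib` stands in for whatever compressed stream (octree point cloud, JPEG image, IMU batch) the onboard pipeline produced, and the helper names are hypothetical. A real deployment would hand the framed bytes to an MQTT client publish call with QoS 1 or 2 and a local persistent cache for retransmission, which is outside this sketch.

```python
import struct
import zlib

def frame_payload(stamp: float, body: bytes) -> bytes:
    """Write an 8-byte little-endian float64 timestamp, then the compressed body."""
    return struct.pack("<d", stamp) + zlib.compress(body)

def parse_payload(msg: bytes):
    """Inverse unpacking flow on the reconstruction (subscriber) side:
    recover the hardware timestamp for alignment, then the decompressed body."""
    (stamp,) = struct.unpack_from("<d", msg, 0)
    return stamp, zlib.decompress(msg[8:])
```

Keeping the timestamp outside the compressed region lets the subscriber read it with a fixed-offset unpack before paying any decompression cost, which is what makes timestamp-based alignment of heterogeneous streams cheap at the reconstruction end.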