CN-121977542-A - AR navigation-fused three-dimensional engine positioning and navigation method, device and medium
Abstract
The invention discloses a three-dimensional engine positioning and navigation method, device and medium fusing AR navigation. The method comprises: acquiring multi-source sensor data and constructing an environment sparse three-dimensional point cloud; constructing and updating a dynamic navigation mesh, and importing the dynamic navigation mesh together with physics colliders into a physics engine built into the three-dimensional engine to construct a dynamic digital-twin environment model; searching a global optimal path on the dynamic navigation mesh and generating, through local motion-planning smoothing, a curved path that conforms to human motion habits, while triggering real-time re-planning in response to dynamic environment changes and generating path-transition animations; and performing depth testing and stencil buffering based on the environment sparse three-dimensional point cloud and the curved path, driving virtual guide elements through the physics engine, and generating AR navigation guidance through multi-modal fusion of visual, auditory and haptic cues. The invention effectively improves navigation positioning accuracy, environmental adaptability and user experience.
Inventors
- XUE PENGFEI
- LIU WEICHENG
Assignees
- 深圳市元景数字技术有限公司
Dates
- Publication Date
- 2026-05-05
- Application Date
- 2025-12-31
Claims (10)
- 1. A three-dimensional engine positioning and navigation method fusing AR navigation, characterized by comprising the following steps: Step S1, acquiring multi-source sensor data and constructing an environment sparse three-dimensional point cloud; Step S2, identifying scene semantic information in the camera picture through a deep learning model, attaching the scene semantic information to the environment sparse three-dimensional point cloud, constructing and updating a dynamic navigation mesh, and importing the dynamic navigation mesh together with physics colliders into a physics engine built into the three-dimensional engine to construct a dynamic digital-twin environment model; Step S3, searching a global optimal path on the dynamic navigation mesh through a topology-aware A* algorithm, generating a curved path conforming to human motion habits through local motion-planning smoothing, and triggering real-time re-planning in response to dynamic environment changes to generate a path-transition animation; and Step S4, performing depth testing and stencil buffering based on the environment sparse three-dimensional point cloud and the curved path, driving virtual guide elements through the physics engine, and generating AR navigation guidance through multi-modal fusion of visual, auditory and haptic cues.
- 2. The three-dimensional engine positioning and navigation method fusing AR navigation according to claim 1, characterized in that step S1 comprises the following sub-steps: Step S11, acquiring IMU sensor data, real-time camera image data and a prior map pre-stored in the three-dimensional engine; Step S12, deeply fusing the IMU sensor data with visual features in the real-time camera image data through a visual-inertial odometer, and outputting the 6-degree-of-freedom relative pose of the device; Step S13, based on the relative pose, performing feature matching between the real-time camera image data and the prior map, and correcting the accumulated error of the relative pose to form an absolute pose; and Step S14, calculating the absolute coordinates of the visual feature points in three-dimensional space from the absolute pose and the visual feature points generated during visual SLAM, and constructing the environment sparse three-dimensional point cloud in real time (see the pose-correction sketch following the claims).
- 3. The three-dimensional engine positioning and navigation method fusing AR navigation according to claim 2, wherein in step S12 the visual-inertial odometer realizes data fusion through extended Kalman filtering or a nonlinear optimization algorithm (see the EKF sketch following the claims).
- 4. The method according to claim 2, wherein in step S12 the visual features include corner, edge and texture features in the image (see the feature-extraction sketch following the claims).
- 5. The three-dimensional engine positioning and navigation method fusing AR navigation according to claim 1, wherein in step S2 the dynamic navigation mesh marks identified static obstacles as impassable areas, assigns a dynamic collider to each dynamic obstacle, and updates its occupancy state in the dynamic navigation mesh in real time (see the occupancy-update sketch following the claims).
- 6. The three-dimensional engine positioning and navigation method fusing AR navigation according to claim 1, wherein in step S2 the scene semantic information includes static obstacles and dynamic obstacles.
- 7. The three-dimensional engine positioning and navigation method fusing AR navigation according to claim 1, characterized in that step S3 comprises the following sub-steps: Step S31, searching, through the A* algorithm, a global optimal path from the current position to the target position in the dynamic navigation mesh; Step S32, smoothing the global optimal path and optimizing it with human kinematic features to generate a curved path; and Step S33, when the dynamic digital-twin environment model detects that a new obstacle blocks the current path: if only a local path segment is affected, adjusting that segment through a local planning algorithm; if the global path fails, re-running the A* algorithm for a global path search and generating a path-transition animation through interpolation (see the A* and smoothing sketch following the claims).
- 8. The three-dimensional engine positioning and navigation method fusing AR navigation according to claim 7, wherein step S4 comprises the following sub-steps: Step S41, performing depth testing and stencil buffering using the depth information provided by the environment sparse three-dimensional point cloud and the curved path, so that virtual guide elements are correctly occluded by real objects (see the occlusion sketch following the claims); Step S42, driving the virtual guide elements through the physics engine, and triggering a waiting animation or a detour animation when the path is temporarily blocked by a dynamic obstacle; Step S43, generating ground-projected optical flow, a highlight ring superimposed on the target object and virtual road signs at turns, with brightness and color automatically adapted to the ambient illumination; Step S44, outputting directional voice prompts or ambient sound effects based on spatial audio technology; and Step S45, conveying navigation information through the device's vibration module using vibration signals of different frequencies and intensities.
- 9. An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor, wherein the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the three-dimensional engine positioning and navigation method fusing AR navigation according to any one of claims 1 to 8.
- 10. A computer-readable storage medium storing computer-executable instructions, comprising a data storage area storing created data and a program storage area storing a computer program, wherein the computer program, when executed by a processor, implements the three-dimensional engine positioning and navigation method fusing AR navigation according to any one of claims 1 to 8.
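Pose-correction sketch (claim 2). The drift correction and point-cloud construction of steps S12-S14 can be pictured as compositions of 4x4 homogeneous poses. This is a minimal illustration, not the patent's implementation: the anchor-based re-anchoring scheme and every function name below are assumptions made for clarity.

```python
# Minimal sketch of steps S12-S14 with 4x4 homogeneous pose matrices.
# All names are illustrative, not from the patent.
import numpy as np

def correct_drift(T_rel, T_map_anchor, T_vio_anchor):
    """Re-anchor a drifting VIO-relative pose onto the prior map (step S13).

    T_rel:        6-DoF pose from the visual-inertial odometer.
    T_map_anchor: pose of a matched landmark/keyframe in the prior map.
    T_vio_anchor: the same anchor as estimated by VIO.
    """
    T_correction = T_map_anchor @ np.linalg.inv(T_vio_anchor)
    return T_correction @ T_rel

def triangulate_to_world(T_abs, points_cam):
    """Lift camera-frame feature points into absolute 3-D coordinates (step S14)."""
    pts_h = np.hstack([points_cam, np.ones((len(points_cam), 1))])  # homogeneous
    return (T_abs @ pts_h.T).T[:, :3]

# Each corrected frame appends its lifted feature points to the
# environment sparse three-dimensional point cloud:
sparse_cloud = []
# sparse_cloud.extend(triangulate_to_world(T_abs, pts))
```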
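EKF sketch (claim 3). Claim 3 names extended Kalman filtering as one fusion option for the visual-inertial odometer. The deliberately tiny, position-only filter below shows just the predict/update shape of such a fusion; a real VIO state also carries velocity, attitude and IMU biases, and all names here are assumptions.

```python
# Position-only EKF sketch: IMU integration drives predict,
# a visual pose measurement drives update.
import numpy as np

class TinyEKF:
    def __init__(self, dim=3):
        self.x = np.zeros(dim)   # state: position only (a full VIO state is richer)
        self.P = np.eye(dim)     # state covariance

    def predict(self, imu_delta, Q):
        """Propagate the state with integrated IMU motion (process noise Q)."""
        self.x = self.x + imu_delta
        self.P = self.P + Q

    def update(self, z_visual, R):
        """Fuse a visual position measurement z with noise covariance R."""
        H = np.eye(len(self.x))                     # direct observation model
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ (z_visual - H @ self.x)
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
```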
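Feature-extraction sketch (claim 4). Claim 4 lists corner, edge and texture features; the patent names no library, so as one plausible realization the OpenCV snippet below uses ORB for corner/texture keypoints and descriptors and a Canny pass for the edge map. The parameter values are assumptions, and an 8-bit grayscale input is assumed.

```python
# Illustrative extraction of the three feature types listed in claim 4.
import cv2

def extract_features(gray):
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(gray, None)  # corner/texture features
    edges = cv2.Canny(gray, 100, 200)                          # edge map
    return keypoints, descriptors, edges
```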
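Occupancy-update sketch (claim 5). Claim 5's bookkeeping, marking static obstacles impassable and re-stamping dynamic colliders every frame, is shown here on a plain occupancy grid standing in for the navigation mesh; the cell states and function names are illustrative assumptions.

```python
# Grid stand-in for the dynamic navigation mesh of claim 5.
import numpy as np

FREE, STATIC_BLOCKED, DYNAMIC_BLOCKED = 0, 1, 2
nav_grid = np.zeros((64, 64), dtype=np.int8)

def mark_static_obstacle(cells):
    """Static obstacles become permanently impassable cells."""
    for r, c in cells:
        nav_grid[r, c] = STATIC_BLOCKED

def update_dynamic_obstacle(prev_cells, new_cells):
    """A tracked dynamic collider frees its old cells and claims new ones
    each frame, keeping the navigation grid's occupancy state current."""
    for r, c in prev_cells:
        if nav_grid[r, c] == DYNAMIC_BLOCKED:
            nav_grid[r, c] = FREE
    for r, c in new_cells:
        if nav_grid[r, c] == FREE:
            nav_grid[r, c] = DYNAMIC_BLOCKED
```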
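A* and smoothing sketch (claim 7). Steps S31-S32 pair a global A* search with a smoothing pass. The sketch below runs 4-connected A* over the occupancy grid from the previous sketch and uses Chaikin corner cutting as a stand-in for the patent's human-kinematics optimization; it is an assumption-laden illustration, not the claimed implementation.

```python
import heapq, itertools

def a_star(grid, start, goal):
    """4-connected A* with a Manhattan heuristic (0 = free cell);
    returns a list of cells or None when no path exists (step S31)."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = itertools.count()                       # heap tie-breaker
    frontier = [(h(start), next(tie), 0, start, None)]
    came_from, best_g = {}, {start: 0}
    while frontier:
        _, _, g, cur, parent = heapq.heappop(frontier)
        if cur in came_from:
            continue                              # already expanded
        came_from[cur] = parent
        if cur == goal:                           # walk parents back to start
            path = [cur]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best_g.get(nxt, 1 << 30)):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), next(tie), g + 1, nxt, cur))
    return None

def chaikin_smooth(path, rounds=2):
    """Corner-cutting pass standing in for the smoothing of step S32."""
    for _ in range(rounds):
        out = [path[0]]
        for a, b in zip(path, path[1:]):
            out.append((0.75 * a[0] + 0.25 * b[0], 0.75 * a[1] + 0.25 * b[1]))
            out.append((0.25 * a[0] + 0.75 * b[0], 0.25 * a[1] + 0.75 * b[1]))
        out.append(path[-1])
        path = out
    return path
```

Re-running a_star with an updated grid, as in step S33, yields the replanned global path; interpolating between the old and new smoothed curves gives the path-transition animation.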
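Occlusion sketch (claim 8). Step S41's occlusion behavior, hiding virtual guide elements behind nearer real geometry, boils down to a per-pixel depth comparison. The NumPy compositor below stands in for the engine's depth/stencil hardware; the array layouts and names are assumptions.

```python
# Per-pixel depth test standing in for step S41's depth/stencil pass.
import numpy as np

def composite_guide(guide_depth, real_depth, guide_rgba, frame):
    """Draw the virtual guide only where it is nearer than the real-world
    depth reconstructed from the sparse point cloud; elsewhere the camera
    frame shows through, so real objects occlude the guide correctly.

    guide_depth, real_depth: (H, W) depth maps (smaller = nearer)
    guide_rgba: (H, W, 4) guide element; frame: (H, W, 3) camera image
    """
    visible = guide_depth < real_depth                    # depth test
    alpha = guide_rgba[..., 3:4] * visible[..., None]
    return frame * (1 - alpha) + guide_rgba[..., :3] * alpha
```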
Description
AR navigation-fused three-dimensional engine positioning and navigation method, device and medium
Technical Field
The invention relates to the technical field of AR navigation and real-time positioning, and in particular to a three-dimensional engine positioning and navigation method, device and medium fusing AR navigation.
Background
In AR navigation and real-time positioning, as demand grows for seamless indoor-outdoor navigation and virtual-real fused interaction, schemes that fuse a three-dimensional scene with positioning and navigation algorithms have become a research hotspot. The prior art, however, still has several core defects:
1. Poor coordination among scene, positioning and rendering. The three-dimensional engine, the positioning algorithm and AR rendering mostly exchange data unidirectionally and lack a dynamic-feedback, deeply coupled mechanism. The positioning algorithm does not fully exploit the scene priors held by the three-dimensional engine, causing positioning drift in weakly textured areas; positioning results are not fed back to optimize rendering, wasting resources; and AR marker overlay ignores scene geometric constraints, so penetration and floating artifacts readily occur.
2. Insufficient positioning accuracy and stability. Positioning that relies solely on GPS is extremely inaccurate where the signal is weak, such as indoors, in urban canyons or under multi-level overpasses. Visual SLAM complements indoor positioning but suffers from cumulative drift: during long runs or in feature-poor environments the positioning error grows steadily and cannot meet the centimeter-level registration AR requires, so the virtual guide model deviates from its real-world position.
3. Weak environment understanding. Most existing AR navigation applications merely superimpose a virtual path on the camera picture and lack deep semantic understanding of the physical environment. They can neither recognize and avoid dynamic obstacles in real time nor understand the scene topology, so the planned path may be "theoretically shortest" yet "practically impassable", forcing the user to detour on their own and defeating the AR guidance.
4. Fractured virtual-real fusion experience. Virtual guide elements are typically overlaid directly on the video stream without physical interaction with the real world. Occlusion relations are wrong, so a virtual path may pass through a wall or float in mid-air instead of being correctly occluded by real objects, seriously harming immersion and credibility; physical feedback is absent, so virtual guidance cannot interact plausibly with a dynamic environment (for example, a virtual arrow cannot yield to a moving pedestrian); and the guidance style is rigid, since the traditional arrow-and-buoy presentation is unnatural and fails to exploit the three-dimensional engine's ability to provide more intuitive, immersive guidance (such as optical flow projected on the ground or highlights on the target object), leaving the user's sense of immersion poor.
5. Difficulty combining real-time performance with low power consumption. High-precision visual SLAM positioning, three-dimensional environment reconstruction and rendering consume substantial computing resources, posing a serious challenge to the battery life and heat dissipation of mobile devices. With limited hardware resources it is difficult to guarantee both real-time operation of the algorithms (at or above 30 fps) and low power consumption.
In summary, prior-art schemes mostly chain the positioning, planning and rendering modules in simple series rather than fusing them deeply. Information is isolated within each module and no unified world model is shared to track environmental change, so such systems show marked shortcomings in accuracy, robustness and user experience.
Disclosure of Invention
To solve the problem that existing fusions of AR navigation technology with a three-dimensional engine show obvious shortcomings in navigation positioning accuracy, robustness and user experience, owing to the technical bottlenecks of poor scene-positioning-rendering coordination, insufficient positioning accuracy and stability, weak environment understanding, a fractured virtual-real fusion experience and the difficulty of combining real-time performance with low power consumption, the invention provides a three-dimensional engine positioning and navigation method, device and medium fusing AR navigation.