CN-113850920-B - Virtual-real fusion method, system, device and storage medium based on space positioning
Abstract
The invention discloses a virtual-real fusion method, system, device and storage medium based on space positioning. The method comprises: determining a plurality of space positioning base points of a real scene, and determining the relative position relation between each space positioning base point and a real object to be interacted with; constructing a first coordinate system of a virtual digital space and determining the first coordinate of each space positioning base point in the first coordinate system; modeling the real scene according to the first coordinate system to obtain a virtual digital model; determining a model area to be fused in the virtual digital model according to the relative position relation and the first coordinates; and carrying out virtual-real fusion of the model area to be fused and the real object to be interacted with according to the space positioning base points. The invention reduces the number and complexity of comparison and identification operations between real objects and the digital model, improves the efficiency of mixed reality display and interaction, ensures the real-time performance and accuracy of virtual-real fusion, greatly improves the interactive experience of users, and can be widely applied in the technical field of mixed reality.
Inventors
- HUANG WEN
- DING YAN
- ZHOU XIAOJUN
- LIANG SIYAN
- ZHOU PAN
- PENG RONGRONG
- ZENG ZHEN
- GAO SIKAI
Assignees
- 深圳市中孚恒升科技有限公司
Dates
- Publication Date: 2026-05-05
- Application Date: 2021-09-10
Claims (6)
- 1. A virtual-real fusion method based on space positioning, characterized by comprising the following steps: determining a plurality of spatial positioning base points of a real scene, and determining the relative position relation between each spatial positioning base point and a real object to be interacted with, which specifically comprises: acquiring first image data of the real scene by a head-mounted MR device; determining the plurality of spatial positioning base points according to the first image data, and determining the real object to be interacted with, wherein the real object to be interacted with is a real object that is well anchored in the real scene, and its three-dimensional information is acquired based on the triangulation principle (an illustrative triangulation sketch is given after the claims); determining the relative position relation according to the spatial position coordinates of the spatial positioning base points and the real object to be interacted with; establishing a fit-parameter calibration mechanism to dynamically correct the relative position relation, adjusting according to the degree of fitting deviation, finding the optimal correction value, and finally completing the correction calculation by debugging in the live scene; calculating relative coordinate system data according to the real objects corresponding to the interaction points, calculating the spatial positioning coordinates of a series of interaction points through a virtual-real fit calibration mechanism, converting the determined spatial coordinate system into the camera coordinate system in a Unity environment, and carrying out virtual-real fit display of the digital model and the real objects; the spatial position coordinates of the spatial positioning base points and the real object to be interacted with are obtained through a binocular vision ranging algorithm; the head-mounted MR device is internally provided with an image capture device, and the image capture device is a camera or an infrared image capture device; constructing a first coordinate system of a virtual digital space, and determining a first coordinate of each spatial positioning base point in the first coordinate system; modeling the real scene according to the first coordinate system to obtain a virtual digital model, and determining a model area to be fused in the virtual digital model according to the relative position relation and the first coordinates, which specifically comprises: determining a second coordinate of the real object to be interacted with in the first coordinate system according to the relative position relation and the first coordinates; and determining the model area to be fused according to the second coordinate and the virtual digital model; and carrying out virtual-real fusion of the model area to be fused and the real object to be interacted with according to the spatial positioning base points; the 1:1 model obtained after modeling is converted into a transparent 3D wireframe, and the 3D wireframe is superimposed on the image of the real object through adjustment of the user's head pose during operation, so that the efficiency of mixed reality display and interaction is improved and the demand on system computing power is reduced.
- 2. The virtual-real fusion method based on space positioning according to claim 1, wherein the step of constructing a first coordinate system of a virtual digital space and determining a first coordinate of each spatial positioning base point in the first coordinate system specifically comprises: constructing the first coordinate system of the virtual digital space by taking the head-mounted MR device as the origin; and determining the first coordinate of each spatial positioning base point in the first coordinate system according to the first image data (an illustrative coordinate-frame sketch is given after the claims).
- 3. The virtual-real fusion method based on space positioning according to claim 1, wherein the step of performing virtual-real fusion on the model area to be fused and the real object to be interacted with according to the spatial positioning base points specifically comprises: determining second image data of the real object to be interacted with according to the spatial positioning base points and the first image data; attaching the model area to be fused to the second image data to obtain a virtual-real fusion image (an illustrative compositing sketch is given after the claims); and displaying the virtual-real fusion image through the head-mounted MR device.
- 4. A virtual-real fusion system based on space positioning, characterized by comprising the following modules (an illustrative module-skeleton sketch is given after the claims): a spatial positioning base point determining module, configured to determine a plurality of spatial positioning base points of a real scene and determine the relative position relation between each spatial positioning base point and a real object to be interacted with, which specifically comprises: acquiring first image data of the real scene by a head-mounted MR device; determining the plurality of spatial positioning base points according to the first image data, and determining the real object to be interacted with, wherein the real object to be interacted with is a real object that is well anchored in the real scene, and its three-dimensional information is acquired based on the triangulation principle; determining the relative position relation according to the spatial position coordinates of the spatial positioning base points and the real object to be interacted with; calculating relative coordinate system data according to the real objects corresponding to the interaction points, calculating the spatial positioning coordinates of a series of interaction points through a virtual-real fit calibration mechanism, converting the determined spatial coordinate system into the camera coordinate system in a Unity environment, and carrying out virtual-real fit display of the digital model and the real objects; the spatial position coordinates of the spatial positioning base points and the real object to be interacted with are obtained through a binocular vision ranging algorithm; the head-mounted MR device is internally provided with an image capture device, and the image capture device is a camera or an infrared image capture device; a first coordinate system construction module, configured to construct a first coordinate system of a virtual digital space and determine a first coordinate of each spatial positioning base point in the first coordinate system; a model-area-to-be-fused determining module, configured to model the real scene according to the first coordinate system to obtain a virtual digital model, and to determine a model area to be fused in the virtual digital model according to the relative position relation and the first coordinates, which specifically comprises: determining a second coordinate of the real object to be interacted with in the first coordinate system according to the relative position relation and the first coordinates; and determining the model area to be fused according to the second coordinate and the virtual digital model; and a virtual-real fusion module, configured to perform virtual-real fusion on the model area to be fused and the real object to be interacted with according to the spatial positioning base points; the 1:1 model obtained after modeling is converted into a transparent 3D wireframe, and the 3D wireframe is superimposed on the image of the real object through adjustment of the user's head pose during operation, so that the efficiency of mixed reality display and interaction is improved and the demand on system computing power is reduced.
- 5. A virtual-real fusion device based on space positioning, characterized by comprising: at least one processor; and at least one memory for storing at least one program; wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the virtual-real fusion method based on space positioning as claimed in any one of claims 1 to 3.
- 6. A computer-readable storage medium in which a processor-executable program is stored, characterized in that the processor-executable program, when executed by a processor, is configured to perform the virtual-real fusion method based on space positioning as claimed in any one of claims 1 to 3.
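The binocular vision ranging step of claim 1 rests on triangulation: the same spatial positioning base point is observed in two rectified images with a known baseline, and its depth follows from the disparity. The sketch below is a minimal illustration of that computation only; the camera parameters, pixel coordinates and function name are assumptions for the example, not values taken from the patent.

```python
# Illustrative triangulation sketch (hypothetical camera parameters).
import numpy as np

def triangulate_point(u_left, u_right, v, focal_px, baseline_m, cx, cy):
    """Return (X, Y, Z) in the left-camera frame from a matched pixel pair.

    u_left, u_right: horizontal pixel coordinates of the same base point
                     in the rectified left and right images.
    v:               shared vertical pixel coordinate after rectification.
    focal_px:        focal length in pixels; baseline_m: baseline in metres.
    cx, cy:          principal point of the left camera.
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("base point must have positive disparity")
    z = focal_px * baseline_m / disparity          # depth from similar triangles
    x = (u_left - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return np.array([x, y, z])

# Made-up example: a base point seen at u=650 px (left) and u=610 px (right),
# 12 cm baseline, 700 px focal length, principal point (640, 360).
print(triangulate_point(650, 610, 360, 700.0, 0.12, 640, 360))
```

Claim 2 constructs the first coordinate system with the head-mounted MR device as the origin, so a base point measured in the camera frame of the built-in image capture device becomes a first coordinate via a rigid transform into the device frame. The coordinate-frame sketch below assumes hypothetical extrinsics (R, t) between the camera and the device; it is not the patent's implementation.

```python
# Illustrative coordinate-frame sketch (hypothetical extrinsics).
import numpy as np

def to_first_coordinate(p_camera, R_cam_to_device, t_cam_to_device):
    """Map a 3D point from the camera frame into the device-origin frame."""
    return R_cam_to_device @ np.asarray(p_camera) + t_cam_to_device

# Assumed extrinsics: camera mounted 5 cm in front of the device origin,
# with no rotation between the two frames.
R = np.eye(3)
t = np.array([0.0, 0.0, 0.05])
print(to_first_coordinate([0.4, -0.1, 2.1], R, t))
```

Claim 3 attaches the model area to be fused to the second image data to obtain the virtual-real fusion image. One simple way to realise such an attachment is alpha compositing of a pre-rendered, transparent wireframe layer over the camera view; the patent does not prescribe this particular operation, so the compositing sketch below is only an illustration with hypothetical frame sizes.

```python
# Illustrative compositing sketch (hypothetical image sizes and data layout).
import numpy as np

def fuse(second_image_rgb, wireframe_rgba):
    """Alpha-composite an RGBA wireframe layer over an RGB camera image."""
    rgb = second_image_rgb.astype(np.float32)
    overlay = wireframe_rgba[..., :3].astype(np.float32)
    alpha = wireframe_rgba[..., 3:4].astype(np.float32) / 255.0
    fused = alpha * overlay + (1.0 - alpha) * rgb
    return fused.astype(np.uint8)

# Hypothetical 720p camera frame and a same-size transparent wireframe layer.
camera_frame = np.zeros((720, 1280, 3), dtype=np.uint8)
wireframe_layer = np.zeros((720, 1280, 4), dtype=np.uint8)
fused_frame = fuse(camera_frame, wireframe_layer)
```

Claim 4 describes the system as four cooperating modules. The module-skeleton sketch below merely mirrors that decomposition; all class and method names are hypothetical and the bodies are left empty.

```python
# Illustrative module skeleton for the system of claim 4 (names are hypothetical).
from dataclasses import dataclass, field

@dataclass
class BasePointModule:
    """Determines spatial positioning base points and their relation to the object."""
    base_points: list = field(default_factory=list)

    def determine(self, first_image_data):
        ...  # e.g. binocular ranging on the first image data (see earlier sketch)

@dataclass
class FirstCoordinateSystemModule:
    """Builds the device-origin coordinate system and the first coordinates."""
    def build(self, base_points):
        ...

@dataclass
class FusionRegionModule:
    """Models the real scene and selects the model area to be fused."""
    def select(self, relative_relation, first_coordinates):
        ...

@dataclass
class VirtualRealFusionModule:
    """Attaches the selected model area to the real object and displays the result."""
    def fuse(self, model_area, second_image_data):
        ...
```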
Description
Virtual-real fusion method, system, device and storage medium based on space positioning

Technical Field

The invention relates to the technical field of mixed reality, and in particular to a virtual-real fusion method, system, device and storage medium based on space positioning.

Background

Mixed Reality (MR) is a further development of Virtual Reality (VR). It uses computer graphics technology, sensing technology, visual wearable equipment and other technologies and devices to realize a visual environment in which digital virtual objects coexist with real-world objects, and enables a user to build an interactive feedback loop between the virtual and the real world on the basis of normal perception of the real world, so that timely and deep interaction between the virtual world and the real world is achieved. Mixed reality can overlay digital objects onto the real world, or overlay real objects into a virtual environment in virtual form; it is not a simple superposition but a deep fusion of the virtual and the real, forming an organic whole.

The main current hardware supporting MR technology includes the Microsoft HoloLens 2 headset and the like. When an MR headset is used, spatial positioning of the virtual model and virtual-real fitting with the physical world are frequently required. The main current positioning methods include the headset's local spatial positioning points, Microsoft Azure cloud spatial anchors, and the spatial positioning mode of the Vuforia Engine. Spatial anchor points represent important points that the system tracks over a period of time. Each anchor point has an adjustable coordinate system (based on other anchor points or reference frames) to ensure that the positioned hologram remains accurate. A hologram rendered in the coordinate system of an anchor point can be precisely located at any given time, at a small adjustment cost over time as the system continually returns it to a location based on the real world. This adjustment cost includes the device latency of the headset, the comparison and analysis of physical-world photo data against the model data in cloud space, and the determination of positioning points. The resulting data latency and error mean that, in the final process of fitting the virtual model to the real object, the virtual model and the physical object end up being fitted with errors, and asynchronous, delayed display occurs during movement, which affects the final user experience. In complex interaction scenes with many interaction points and many models, such as an aircraft cockpit involving the operation of multiple instrument panels, buttons, levers and the like, if spatial points are confirmed by continuously comparing the physical objects with the virtual model before the final visual fusion display, great delay and error are introduced, which ultimately degrades the user experience.

Disclosure of Invention

The present invention aims to solve, at least to a certain extent, one of the technical problems existing in the prior art.
Therefore, an object of an embodiment of the invention is to provide a virtual-real fusion method based on space positioning, which reduces the number and complexity of comparison and identification operations between real objects and the digital model, greatly reduces the complexity of the system's calculation process under complex-scene interaction, improves the efficiency of mixed reality display and interaction, reduces the demand on system computing power, and avoids delay and stutter in the process of fitting and displaying digital content with real objects caused by network latency and limited device computing resources, thereby ensuring the real-time performance and accuracy of virtual-real fusion and greatly improving the interactive experience of users. Another object of an embodiment of the invention is to provide a virtual-real fusion system based on space positioning.

In order to achieve the above technical purposes, the technical scheme adopted by the embodiments of the invention is as follows. In a first aspect, an embodiment of the present invention provides a virtual-real fusion method based on space positioning, comprising the following steps: determining a plurality of spatial positioning base points of a real scene, and determining the relative position relation between each spatial positioning base point and a real object to be interacted with; constructing a first coordinate system of a virtual digital space, and determining a first coordinate of each spatial positioning base point in the first coordinate system; modeling the real scene according to the first coordinate system to obtain a virtual digital model, and determining a model area to be fused in the virtual digital model according to the relative position relation and the first coordinates.
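As a concrete illustration of the step just described, the second coordinate of the real object to be interacted with can be obtained by adding the stored relative position relation to a base point's first coordinate, after which the model area to be fused is the portion of the virtual digital model near that second coordinate. The sketch below uses made-up coordinates and an assumed selection radius; it is an illustration under those assumptions, not code from the patent.

```python
# Illustrative sketch: second coordinate and model area selection (assumed values).
import numpy as np

def second_coordinate(first_coord_base_point, relative_offset):
    """Second coordinate = base point's first coordinate + relative position offset."""
    return np.asarray(first_coord_base_point) + np.asarray(relative_offset)

def model_area_to_fuse(model_vertices, second_coord, radius=0.3):
    """Select the model vertices lying within `radius` metres of the object."""
    d = np.linalg.norm(model_vertices - second_coord, axis=1)
    return model_vertices[d <= radius]

base_point = np.array([0.2, -0.1, 1.8])   # first coordinate of a base point (made up)
offset = np.array([0.05, 0.00, 0.10])     # measured relative position relation (made up)
obj = second_coordinate(base_point, offset)
vertices = np.random.default_rng(0).uniform(-1, 2, size=(1000, 3))
print(model_area_to_fuse(vertices, obj).shape)
```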