EP-3987381-B1 - METHOD FOR GENERATING A VIRTUAL REPRESENTATION OF A REAL ENVIRONMENT, DEVICES AND CORRESPONDING SYSTEM
Inventors
- LEDUNOIS, Valérie
- JOUIN, Maxime
- FLOUTIER, Christophe
Dates
- Publication Date
- 2026-05-13
- Application Date
- 2020-06-10
Claims (12)
- Method for generating a virtual representation in at least 2 dimensions of a real environment, the generation being implemented by a mixed virtual reality headset intended to be worn by a user, the mixed virtual reality headset being associated with at least one interface device, the generating method comprising:
  - acquiring relative coordinates in the real environment, corresponding to a position of said interface device in the real environment,
  - following an interaction (E22) of the user with said interface device, determining (E23), depending on said relative coordinates of the position of said interface device in the real environment, a corresponding point in the virtual representation,
  - generating (E25) said virtual representation based at least on the point associated with the relative coordinates of the acquired position of said interface device.
- Generating method according to Claim 1, wherein said relative coordinates are defined with respect to a reference position of the real environment, said reference position corresponding to an initial position of the virtual reality headset, which initial position is associated with an origin point of a reference frame of said virtual representation.
- Generating method according to either one of Claims 1 and 2, further comprising, when at least two points of the virtual representation are successively associated with at least two acquired positions of said interface device in the real environment, generating a virtual element in the virtual representation, the size of said virtual element generated in the virtual representation being scaled with respect to the distance between the at least two acquired positions in the real environment.
- Generating method according to Claim 3, wherein a type of said virtual element is previously selected from a library of types of virtual element.
- Generating method according to Claim 4, wherein, when the virtual element is a plane element, the library of types of virtual element contains at least any of the following types: wall, window, door, floor, ceiling, light, electrical socket, radiator, path, lawn, hedge, tree.
- Generating method according to any one of Claims 1 to 5, further comprising displaying said virtual representation via the mixed virtual reality headset.
- Generating method according to Claim 6, wherein, when the virtual representation is a 3-dimensional representation, said virtual representation is displayed in superposition with the real environment viewed by the user via the mixed virtual reality headset.
- Mixed virtual reality headset capable of being connected to an interface device, said headset comprising:
  - a detector of the position of the interface device in a real environment in which said headset is placed, the detector acquiring relative coordinates of a position of said interface device in the real environment,
  - a processor configured to determine, following receipt of a user interaction signal from said interface device, depending on the relative coordinates, a corresponding point in a virtual representation in at least 2 dimensions of said real environment,
  - a generator of said virtual representation based at least on the point associated with the relative coordinates of the acquired position of said interface device.
- Interface device capable of being connected to a mixed virtual reality headset and comprising:
  - a transmitter exchanging signals with a locator of the interface device in a real environment in which said headset is placed, the locator acquiring relative coordinates of a position of said interface device in the real environment,
  - a detector of user interaction with said interface device, and
  - a transmitter of a user interaction signal following said detected user interaction, the interaction signal being configured to trigger generation of a virtual representation in at least 2 dimensions of a real environment based at least on the point associated with the relative coordinates of the acquired position of said interface device.
- System for generating a virtual representation in at least 2 dimensions of a real environment, comprising a mixed virtual reality headset according to Claim 8 and at least one interface device according to Claim 9.
- Computer program comprising instructions for implementing the method for generating a virtual representation in at least 2 dimensions of a real environment according to any one of Claims 1 to 7, when said program is executed by the processor of a mixed virtual reality headset according to Claim 8.
- Computer-readable medium comprising a computer program according to Claim 11.
Description
1. Scope of the invention
The invention relates to the virtual representation of a real environment, such as a 2D plan or a 3D representation, and more particularly to the generation of a virtual representation of the real environment from a mixed virtual reality headset.
2. Prior Art
To generate a 3D representation of a real space, professionals generally use a 3D scanner to scan the room or space to be modeled. However, using such a device requires calibration and precise positioning within the space to be mapped for accurate measurements. Furthermore, the model obtained from data measured by a 3D scanner is not always optimal, especially when the space to be modeled contains obstacles, such as in a furnished room. The model then requires manual adjustments, which can be tedious and demand a certain level of expertise with CAD (Computer-Aided Design) tools. In addition, the cost of a 3D scanner is quite high. Another method for generating a 3D representation of a real space is to use an existing 2D plan, either manually generated or obtained from a file. However, this method requires specialized skills in using modeling software. Furthermore, architectural 2D plans, when available, are sometimes slightly inaccurate compared to the actual construction. Therefore, there is a need to improve the state of the art. US 2019043259 A1 discloses the estimation of a safety zone in a VR environment, the modification of vertices of this zone via a controller, and the display of warnings/collisions in the VR experience.
3. Description of the invention
The invention relates to a method of generating a virtual representation in at least 2 dimensions of a real environment, the generation being implemented by a mixed virtual reality headset intended to be worn by a user, the mixed virtual reality headset being associated with at least one interface device.
Advantageously, according to the invention, the generation process comprises:
- an acquisition of relative coordinates in the real environment, corresponding to a position of said interface device in the real environment,
- following user interaction with said interface device, a determination, based on said relative coordinates of the position of said interface device in the real environment, of a corresponding point in the virtual representation,
- a generation of said virtual representation from at least the point associated with the relative coordinates of the acquired position of said interface device.
The invention thus makes it possible to generate a virtual representation of a real environment to scale, based on measurements taken in that real environment. The measurement process is facilitated by the advantageous use of a mixed virtual reality headset combined with an interface device that the user manipulates and positions at the locations where they wish to take measurements for modeling the real environment. By mixed virtual reality headset, we mean a virtual reality headset adapted to view both the real environment and the generated virtual representation. Using such a headset for the generation process according to the invention offers the advantage that, during measurements, the user can see where they are placing the interface device, since they can see through the headset. Furthermore, this type of headset operates autonomously in that it does not require the installation of sensors to determine the position of the user wearing the headset in real space or the position of the interface device. Thus, putting on the headset for measurements is simple. The invention simplifies the acquisition of real-world data for modeling the real environment by using readily available, user-friendly equipment at a reasonable price, thus making it accessible to a wider range of users.
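The acquisition and determination steps above can be illustrated with a minimal sketch. The names (`ReferenceFrame`, `to_virtual`) and the simple translation-only mapping are illustrative assumptions, not part of the patent; a real headset would also handle rotation and tracking noise.

```python
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class ReferenceFrame:
    """Reference frame of the virtual representation; its origin
    corresponds to the headset's initial position (hypothetical model)."""
    origin: Vec3

    def to_virtual(self, device_pos: Vec3) -> Vec3:
        """Map an acquired interface-device position in the real
        environment to the corresponding point in the virtual
        representation (coordinates relative to the origin)."""
        return tuple(p - o for p, o in zip(device_pos, self.origin))

# The headset's initial position defines the origin of the virtual frame.
frame = ReferenceFrame(origin=(1.0, 0.0, 2.0))

# On user interaction, the acquired device position yields a point.
point = frame.to_virtual((2.5, 0.0, 3.0))
print(point)  # (1.5, 0.0, 1.0)
```

Because the origin is tied to the headset's initial position, all points are expressed relative to where the user started, with no external sensors required.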
In particular, the generation method according to the invention does not require expertise in modeling software or measurement techniques. It also reduces the risk of errors in surveying the real environment. According to a particular embodiment of the invention, the relative coordinates are defined with respect to a reference position of the real environment, the reference position corresponding to an initial position of the virtual reality headset associated with an origin point of a reference frame of the virtual representation. According to another particular embodiment of the invention, when at least two points of the virtual representation are successively associated with at least two positions acquired by said interface device in the real environment, a virtual element is generated in the virtual representation. Advantageously, the size of the virtual element generated in the virtual representation is scaled with respect to the distance between the two positions acquired successively in the real environment. According to this particular embodiment of the invention, adding a virtual element to scale relative to the real environment within the virtual representation is facilitated.
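The two-point embodiment above could be sketched as follows: two successive acquisitions define a virtual element whose size equals the real-world distance between them. The function name and dictionary layout are illustrative assumptions, not from the patent; `"wall"` stands in for any element type chosen from the library of types.

```python
import math

def element_from_points(p1, p2, element_type="wall"):
    """Create a virtual element scaled to the distance between two
    positions acquired successively in the real environment
    (illustrative sketch, not the patented implementation)."""
    length = math.dist(p1, p2)  # Euclidean distance (Python 3.8+)
    return {"type": element_type, "start": p1, "end": p2, "length": length}

# Two successive user interactions mark the two ends of a wall.
wall = element_from_points((0.0, 0.0, 0.0), (3.0, 0.0, 4.0))
print(wall["length"])  # 5.0
```

Because both points come from the same reference frame, the element's length in the virtual representation directly matches the measured real-world distance, which is what keeps the model to scale.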