
KR-102964598-B1 - METHOD FOR CONVERGING 3D OBJECTS AND ANNOTATION META INFORMATION BASED ON MIXED REALITY

KR 102964598 B1

Abstract

A method for fusing 3D objects and annotation meta-information based on mixed reality is disclosed. The method, performed by a fusion system, may include: downloading 3D object data, manual information, and annotation meta-information related to a mixed reality request from among the 3D object data, manual information, and annotation meta-information each stored in file form on a cloud server; fusing an annotation generated using the downloaded annotation meta-information with the downloaded 3D object data; and augmenting the 3D object data and the generated annotation in a mixed reality environment through a digital twin.
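The fuse step above can be sketched in code. The sketch below is a minimal illustration, not the patent's implementation; the field names and data shapes of the annotation meta-information are assumptions inferred from the claims (3D spatial coordinates, an annotation name, a target part, and a manual URL):

```python
from dataclasses import dataclass


@dataclass
class AnnotationMeta:
    """Hypothetical shape of one annotation meta-information record."""
    name: str          # annotation name data
    position: tuple    # 3D spatial coordinates (x, y, z)
    target_part: str   # target object data: the part being pointed to
    manual_url: str    # location of the related manual on the cloud server


@dataclass
class FusedObject:
    """3D object data with annotations attached."""
    object_file: str
    annotations: list


def fuse(object_file: str, metas: list) -> FusedObject:
    """Generate annotations from the downloaded meta-information and
    attach them to the downloaded 3D object data."""
    annotations = [
        {
            "name": m.name,
            "position": m.position,
            "target": m.target_part,
            "manual": m.manual_url,
        }
        for m in metas
    ]
    return FusedObject(object_file=object_file, annotations=annotations)
```

Because the annotations are kept as data alongside the object rather than baked into a single asset, replacing one annotation or one part does not require regenerating everything else.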

Inventors

  • 조근식
  • 송우현
  • 유영훈

Assignees

  • 주식회사 증강지능 (Augmented Intelligence Co., Ltd.)

Dates

Publication Date
2026-05-13
Application Date
2024-02-14

Claims (5)

  1. A method for fusing a 3D object and annotation meta-information, performed by a fusion system, the method comprising: downloading 3D object data, manual information, and annotation meta-information related to a mixed reality request, from among 3D object data, manual information, and annotation meta-information each stored in file form on a cloud server, based on the locations of the files stored on the cloud server; fusing an annotation generated using the downloaded annotation meta-information with the downloaded 3D object data; and augmenting the 3D object data and the generated annotation in a mixed reality environment through a digital twin, wherein the downloading comprises: upon completion of an authentication process, downloading the 3D object data and annotation meta-information related to the mixed reality request from a private directory of the cloud server, based on the locations of the files stored on the cloud server, and downloading the manual information related to the mixed reality request from a public directory of the cloud server, based on the locations of the files stored on the cloud server; and replacing, from among the 3D object data, manual information, and annotation meta-information each stored in file form on the cloud server, those files for which a change has been requested.
  2. The method of claim 1, wherein the cloud server is divided into a public directory and a private directory, wherein the public directory is configured to allow access to files or objects using location information regardless of authentication and stores an index file from which manual information can be selected, and wherein the private directory is configured to allow access to files and objects only through an authentication process and stores the 3D object data, the manual information, and the annotation meta-information.
  3. The method of claim 1, wherein the fusing comprises: storing the downloaded 3D object data and the downloaded annotation meta-information; positioning the stored 3D object data at designated coordinate data in a mixed reality space; positioning the generated annotation based on 3D spatial coordinates included in the stored annotation meta-information; assigning an annotation name based on annotation name data included in the annotation meta-information; specifying and connecting the part of the 3D object data being pointed to, based on target object data to be annotated included in the annotation meta-information; and outputting manual information provided by the cloud server through a URL of the generated annotation, wherein the cloud server provides the manual information by navigating, through an authentication process, to the location of the manual information stored in the private directory of the cloud server when a manual related to the current task is selected based on the index file stored in the public directory.
  4. The method of claim 1, wherein an initial annotation is automatically generated on a device using the 3D object data and the manual information, the 3D object data, the manual information, and the annotation meta-information are uploaded to the cloud server, and the 3D object data, manual information, and annotation meta-information uploaded to the cloud server are each stored in file form.
  5. (Deleted)
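The storage layout in the claims above — a public directory readable without authentication that holds the index file, a private directory gated by an authentication process, and per-file replacement of only the changed data — can be illustrated with a minimal sketch. The directory contents, file names, and token check below are hypothetical stand-ins, not the patent's actual scheme:

```python
# Public directory: accessible by location information regardless of
# authentication; holds the index file for selecting manual information.
PUBLIC = {"index.json": '["pump_manual", "valve_manual"]'}

# Private directory: accessible only through an authentication process;
# holds 3D object data, manual information, and annotation meta-information.
PRIVATE = {
    "pump.obj": b"<3d object data>",
    "pump.meta.json": b'{"name": "step 1"}',
    "pump_manual.pdf": b"<manual information>",
}


def fetch(path, token=None):
    """Return a file by its stored location; private files require a token."""
    if path in PUBLIC:                 # no authentication needed
        return PUBLIC[path]
    if path in PRIVATE:
        if token != "valid-token":     # stand-in for the authentication process
            raise PermissionError(f"authentication required for {path}")
        return PRIVATE[path]
    raise FileNotFoundError(path)


def replace(path, data, token):
    """Replace only the file for which a change was requested,
    leaving the other stored files untouched."""
    if token != "valid-token":
        raise PermissionError("authentication required")
    PRIVATE[path] = data
```

Because each piece of data is an individually addressed file, a change to one manual or one part replaces a single entry instead of forcing the regeneration and re-download of a monolithic asset.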

Description

Method for Converging 3D Objects and Annotation Meta-information Based on Mixed Reality

The following description concerns a technology for fusing 3D objects and annotation meta-information.

In industrial sectors, changes to manuals and models occur frequently. In such cases, the corresponding data residing in the cloud must be modified so that the changes can be applied and presented to users. Currently, the data in the cloud takes the form of Addressable Assets. Addressable Assets are digital assets, such as objects and manuals, that are assigned addresses so they can be accessed from the outside, and they can contain various types of data, including manuals, 3D objects, and instructions.

When such diverse data is generated in the form of Addressable Assets, modifying any single piece of data (a manual, a model, an instruction, etc.) contained in an asset requires applying the change to the existing data and regenerating the entire Addressable Asset. This is inefficient: even a minor change, such as modifying a portion of the text in a manual, forces regeneration of the Addressable Asset containing all of the data. Furthermore, when the regenerated Addressable Asset is uploaded to the cloud, existing programs recognize it as a different file because the download target changes. As a result, users cannot view the changed manual and model, and must reinstall the program to resolve the problem.

In the industrial sector, model data is modified either when the subject of maintenance changes or when the shape of the parts constituting the model changes. In the first case, when model data is modified because the subject of maintenance has changed, all related data (manuals, annotations) must also be updated because the work content changes. In the second case, however, such as a change to the shape or location of a part constituting the model, only a portion of the data needs to be replaced rather than the whole.
Consequently, when the parts constituting the model change, additional work is required to locate and replace the specific parts that need modification, because the model data in the Prefab or Scene includes all parts related to the entire operation. Moreover, since annotations and 3D object data must be managed within a single Prefab or Scene, the corresponding annotation data must also be modified whenever a part changes. In addition, 3D objects converted into Addressable Assets for use are large in size because they include the parts for the entire task, and the Addressable Assets themselves are also large because they additionally contain data such as manuals and instructions. When the scale of the task is large, downloading the Addressable Assets from the program therefore increases the user's waiting time.

Prior art: Published Patent Application No. 10-2021-0098822 (August 11, 2021).

FIG. 1 is a diagram illustrating the operation of fusing three-dimensional object data and annotation meta-information in one embodiment.
FIG. 2 is a block diagram illustrating the configuration of a fusion system in one embodiment.
FIG. 3 is a flowchart illustrating a method for fusing three-dimensional object data and annotation meta-information in one embodiment.
FIG. 4 is a diagram illustrating the storage structure of a cloud server in one embodiment.
FIG. 5 is an example illustrating the operation of fusing three-dimensional object data and annotation meta-information in one embodiment.

Hereinafter, embodiments will be described in detail with reference to the attached drawings. The embodiments describe the fusion of 3D object data and annotation meta-information based on mixed reality. Here, Mixed Reality refers to a technology that integrates virtual objects into the real world, allowing physical objects in the real world and virtual objects to interact.
Mixed Reality encompasses both Augmented Reality (AR), which adds virtual information on top of reality, and Augmented Virtuality (AV), which adds real-world information to a virtual environment. In other words, by providing a smart environment in which reality and virtuality are naturally connected, it offers users a rich experience.

FIG. 1 is a diagram illustrating the operation of fusing three-dimensional object data and annotation meta-information in one embodiment. A device (110), a cloud server (120), and a mixed reality device (130) may be configured to fuse three-dimensional object data and annotation meta-information. The device (110) may hold three-dimensional object data and manual information. The device (110) refers to a device of various forms, such as a PC or a mobile device, that holds the three-dimensional object data and manual information to be uploaded to the cloud. Here, the manual information refers to a digitized version of a paper manual used at industrial sites, and may include content related to in