KR-20260063905-A - DEVICE AND METHOD FOR RENDERING AUGMENTED REALITY VIDEO BASED ON ARTIFICIAL INTELLIGENCE

KR 20260063905 A

Abstract

The present disclosure relates to an artificial intelligence-based augmented reality image rendering device and a method thereof. The method comprises the steps of: receiving augmented reality image data, including color data and depth data, from a server; inputting the color data and depth data into an artificial intelligence model to obtain, as an output value, the position at which each pixel is to be displayed in real space; and displaying the virtual image data in real space based on the obtained positions.
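The three-step method in the abstract can be sketched as follows. This is a minimal, hypothetical illustration only: the patent's artificial intelligence model is replaced here with a simple pinhole-camera back-projection, and all function names, camera intrinsics, and array shapes are assumptions, not details from the patent.

```python
import numpy as np

# Stand-in for step 2 of the abstract: map each pixel plus its depth to a
# real-space position. The patent uses an AI model for this; a pinhole
# back-projection is used here purely as an illustrative placeholder.
def backproject(depth, fx=500.0, fy=500.0, cx=None, cy=None):
    """Map each pixel (u, v) with depth d to a 3D point (x, y, z)."""
    h, w = depth.shape
    cx = w / 2.0 if cx is None else cx
    cy = h / 2.0 if cy is None else cy
    v, u = np.indices((h, w), dtype=np.float64)
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)  # (h, w, 3) position per pixel

# Step 1: a tiny synthetic "received" frame (color plus per-pixel depth).
color = np.zeros((4, 4, 3), dtype=np.uint8)
depth = np.full((4, 4), 2.0)

# Step 2: obtain a real-space position for every pixel; step 3 would then
# composite the color data at these positions.
positions = backproject(depth)
print(positions.shape)  # (4, 4, 3)
```

The per-pixel positions could then drive the display step, e.g. by rasterizing each colored point into the viewer's current view of real space.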

Inventors

  • 정덕영

Assignees

  • 클릭트 주식회사

Dates

Publication Date
2026-05-07
Application Date
2024-10-31

Claims (5)

  1. A method for an AI-based augmented reality image rendering device to provide an augmented reality image, comprising: receiving augmented reality image data including color data and depth data from a server; inputting the color data and depth data into an artificial intelligence model provided in the device to obtain, as an output value, the position at which each pixel is to be displayed in real space; and displaying the virtual image data in real space based on the obtained position.
  2. The method of claim 1, wherein the depth data is included, for each pixel, in a separate channel distinct from the color data channel; the depth data and the color data are transmitted in synchronization; and the virtual image data is a 2D image in which the depth data acquired at each point during capture or generation is stored for each pixel.
  3. The method of claim 2, wherein the step of obtaining the position comprises: determining a specific depth as a transparency control criterion; and determining whether to perform transparency processing by distinguishing depth ranges based on the transparency control criterion, wherein the transparency control criterion sets the boundary of the content to be displayed on the screen.
  4. The method of claim 3, wherein multiple depths are set as transparency control criteria and the depth ranges are divided into multiple regions based on the multiple depths.
  5. The method of claim 1, wherein the virtual image data further includes acquired position data and image direction data, and the step of obtaining the position comprises: comparing current position data with the acquired position data, and comparing playback direction data with the image direction data; and adjusting the positions of pixels within the virtual image data based on a result of the comparison.
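The depth-based transparency processing of claims 3 and 4 can be illustrated with a small sketch. The threshold values, function names, and array layout below are assumptions chosen for demonstration; the patent does not specify them.

```python
import numpy as np

# Illustrative sketch of claims 3-4: treat chosen depth values as
# transparency control criteria and make pixels outside the retained
# depth range fully transparent.
def apply_depth_transparency(color, depth, near=1.0, far=5.0):
    """Return an RGBA image; pixels with depth outside [near, far) are transparent."""
    keep = (depth >= near) & (depth < far)          # depth range to display
    alpha = np.where(keep, 255, 0).astype(np.uint8)  # alpha channel per pixel
    return np.dstack([color, alpha])                 # (h, w, 4) RGBA

# A tiny synthetic frame: uniform gray color, varied per-pixel depth.
color = np.full((2, 3, 3), 128, dtype=np.uint8)
depth = np.array([[0.5, 2.0, 6.0],
                  [1.0, 4.9, 5.0]])

rgba = apply_depth_transparency(color, depth)
# Pixels at depths 0.5, 6.0, and 5.0 fall outside [1.0, 5.0) and get alpha 0;
# the rest get alpha 255.
```

Claim 4's multiple criteria would generalize this by comparing `depth` against several thresholds and assigning each resulting region its own visibility or transparency treatment.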

Description

Device and Method for Rendering Augmented Reality Video Based on Artificial Intelligence

The present disclosure relates to an artificial intelligence-based augmented reality image rendering device and a method thereof.

Augmented reality is a technology that superimposes virtual objects, such as text or graphics, onto the real world and displays them as a single image. Until the mid-2000s, augmented reality technology remained in the research, development, and experimental application stages, but with the recent maturation of the technological environment it has entered commercialization. In particular, augmented reality began to attract attention with the emergence of smartphones and the development of internet technology. The most common way to implement augmented reality is to capture the real world with a smartphone camera and overlay pre-generated computer graphics onto it, making the user feel as though the virtual and the real are intermingled. Because users can easily acquire images of the real world with a smartphone camera, and computer graphics can be readily rendered with a smartphone's computing capabilities, most augmented reality applications are implemented on smartphones. Furthermore, with the recent emergence of glasses-type wearable devices, interest in augmented reality technology is increasing.

FIG. 1 is a structural diagram of an augmented reality image system according to an embodiment of the present invention.
FIG. 2 is a flowchart of a method for a client to provide an augmented reality image using depth data according to an embodiment of the present invention.
FIG. 3 is a flowchart of a process for adjusting the transparency of virtual image data based on depth data according to an embodiment of the present invention.
FIG. 4 is an example drawing showing the setting of transparency control criteria based on depth data and the resulting division of regions according to an embodiment of the present invention.
FIG. 5 is an example drawing in which region designation data is entered according to an embodiment of the present invention.
FIG. 6 is a flowchart of a process for adjusting the spatial position of each pixel according to an embodiment of the present invention.
FIG. 7 is an example drawing of generating a missing second frame based on the color data and depth data of a first frame according to an embodiment of the present invention.
FIG. 8 is a flowchart of a method in which a server generates and provides virtual image data including depth data according to an embodiment of the present invention.
FIG. 9 is an exemplary drawing showing the process of adding a 1.5th frame between a first frame and a second frame according to an embodiment of the present invention.

Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. The advantages and features of the present invention, and the methods of achieving them, will become clear from the embodiments described in detail below together with the accompanying drawings. However, the present invention is not limited to the embodiments disclosed below and can be implemented in various different forms. These embodiments are provided merely so that the disclosure of the present invention is complete and fully informs those skilled in the art of the scope of the invention; the present invention is defined only by the scope of the claims. Throughout the specification, the same reference numerals refer to the same components. Unless otherwise defined, all terms used in this specification (including technical and scientific terms) have the meanings commonly understood by those skilled in the art to which the present invention pertains. Additionally, terms defined in commonly used dictionaries are not to be interpreted ideally or excessively unless explicitly and specifically defined otherwise. The terms used herein are for describing the embodiments and are not intended to limit the invention. In this specification, the singular form includes the plural form unless the context clearly indicates otherwise. As used herein, "comprises" and/or "comprising" do not exclude the presence or addition of one or more components other than those mentioned.

In this specification, "virtual image data" refers to image data produced to implement virtual reality or augmented reality. "Virtual image data" may be generated by capturing a real space with a camera, or it may be produced through a modeling process. In this specification, "first virtual image data" refers to virtual image data provided from a server to a client at a first time point. In this specification, "second virtual image data" refers to virtual image data provided from a server to a client at a second time point (i.e., a time point after a unit time, which is the image reception cycle, has elapsed from the f