
KR-20260063402-A - Real-time illumination distortion correction method for images using spatial memory, and real-time image processing apparatus applied to the same

KR 20260063402 A

Abstract

The present invention provides a method for real-time illumination distortion correction of an image using spatial memory, and a real-time image processing device to which the method is applied. The method comprises, in a real-time image processing device: collecting a local image that captures a part of real space, together with attitude parameter information describing the 3D pose estimate associated with the capture; generating a 3D virtual space from the collected local images, creating and storing a global space corresponding to the real space, and extracting an expected space from the global space using the attitude parameter information; and calculating the difference between the illumination of the expected space and the illumination of the local image, and correcting the illumination of the local image using the calculated illumination difference.
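The patent itself gives no code, but the three steps summarized in the abstract (collect, build and query the spatial memory, compare and correct) can be illustrated with a minimal Python sketch. Everything below is an assumption made for illustration only: the names `GlobalSpaceMemory` and `correct_illumination`, the use of a pose-keyed dictionary as the spatial memory, and the simple per-pixel luminance model are not taken from the patent.

```python
import numpy as np

class GlobalSpaceMemory:
    """Illustrative spatial memory: stores an accumulated luminance map per pose cell."""
    def __init__(self):
        self.cells = {}  # pose_key -> accumulated luminance image

    def update(self, pose_key, lum):
        # Blend the new observation into the stored cell (simple running average here).
        if pose_key in self.cells:
            self.cells[pose_key] = 0.5 * self.cells[pose_key] + 0.5 * lum
        else:
            self.cells[pose_key] = lum

    def expected_space(self, pose_key):
        # Return the stored view for this pose, if any (the "expected space").
        return self.cells.get(pose_key)

def luminance(image_rgb):
    # Rough luminance estimate from an RGB frame with values in [0, 1].
    return image_rgb @ np.array([0.299, 0.587, 0.114])

def correct_illumination(local_rgb, pose_key, memory):
    """One correction step: compare local vs. expected illumination, then compensate."""
    lum_local = luminance(local_rgb)
    expected = memory.expected_space(pose_key)
    memory.update(pose_key, lum_local)
    if expected is None:
        return local_rgb  # nothing to compare against yet
    # Illumination difference between the expected space and the local image.
    diff = expected - lum_local
    # Apply the difference uniformly across channels (per-pixel additive correction).
    return np.clip(local_rgb + diff[..., None], 0.0, 1.0)
```

In an actual device the pose key would come from the attitude parameter information (a sensor measurement, a visual estimate, or a fusion of both), and the accumulation and correction rules would follow the claims rather than the simplistic averaging used here.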

Inventors

  • 김윤상
  • 김수영
  • 류소현

Assignees

  • Korea University of Technology and Education Industry-University Cooperation Foundation (한국기술교육대학교 산학협력단)

Dates

Publication Date
2026-05-07
Application Date
2024-10-30

Claims (13)

  1. A method for real-time illumination distortion correction of an image using spatial memory, comprising, in a real-time image processing device: collecting a local image that captures a part of real space, together with attitude parameter information describing the 3D pose estimate associated with the capture; generating a 3D virtual space from the collected local images, creating and storing a global space corresponding to the real space, and extracting an expected space from the global space using the attitude parameter information; and calculating the difference between the illumination of the expected space and the illumination of the local image, and correcting the illumination of the local image using the calculated illumination difference.
  2. The method of claim 1, wherein the local image is a two-dimensional or three-dimensional image, is a video or a series of consecutive still images, and is one image frame of the video or one still image of the series.
  3. The method of claim 1, further comprising executing a visual-information-based attitude estimation algorithm to estimate the attitude parameter information from the local image when the real-time image processing device has no capability to provide attitude parameter information (an illustrative sketch of such a fallback follows the claims).
  4. The method of claim 3, wherein the attitude parameter information is an attitude measurement value from an attitude measurement sensor of the real-time image processing device, an attitude estimate from the visual-information-based attitude estimation algorithm, or a combination of the measurement value and the estimate.
  5. The method of claim 1, further comprising, as a preliminary step to generating the three-dimensional virtual space from the collected local image, determining whether the local image is of a two-dimensional or a three-dimensional image type.
  6. The method of claim 5, further comprising converting the local image into three-dimensional image data corresponding to the three-dimensional image type when the local image is a two-dimensional image.
  7. The method of claim 1, further comprising: determining, when generation of the global space is complete, whether the local image is aligned with the global space based on a predefined time variable value; and updating the global space by reflecting the local image into the global space if the local image remains in the global space for longer than the time variable value.
  8. The method of claim 7, further comprising: regarding the illumination of the space adjacent between the local image and the global space as invalid when, while aligning the local image to the global space, the illumination difference between the local image and the global space exceeds a predetermined threshold; and obtaining the illumination of the adjacent space through Equation 1 below (a minimal numeric sketch of this blend follows the claims): <Equation 1> Illumination of the adjacent space = (Color of the global space × ω) + (Color of the local image × (1 − ω)), where ω is a predetermined global-space weight.
  9. A real-time image processing device comprising: a collection unit that collects a local image capturing a part of real space and attitude parameter information describing the 3D pose estimate associated with the capture; a memory unit that generates a three-dimensional virtual space from the collected local images, creates and stores a global space corresponding to the real space, and extracts an expected space from the global space using the attitude parameter information; and a correction unit that calculates the difference between the illumination of the expected space and the illumination of the local image, and corrects the illumination of the local image using the calculated illumination difference.
  10. The device of claim 9, wherein the collection unit comprises: an image collection unit that collects the local image; an attitude collection unit that collects the attitude parameter information as an attitude measurement value from an on-device or externally connected attitude measurement sensor, as an attitude estimate from a visual-information-based attitude estimation algorithm that estimates the attitude parameter information from the local image, or as a combination of the measurement value and the estimate; and an information transmission unit that transmits the local image and the attitude parameter information to the memory unit.
  11. The device of claim 9, wherein the memory unit comprises: an information receiving unit that receives the local image and the attitude parameter information; a space forming unit that generates the three-dimensional virtual space from the collected local images; a spatial memory unit that initially adopts the three-dimensional virtual space as the global space, or creates, stores, and updates the global space through alignment with the three-dimensional virtual space; a space partitioning unit that extracts the expected space; and a spatial transmission unit that transmits the local image and the expected space to the correction unit.
  12. The device of claim 9, wherein the correction unit comprises: a spatial receiving unit that receives the local image and the expected space; an illumination difference calculation unit that calculates the difference between the illumination of the expected space and the illumination of the local image, distinguishing between two-dimensional and three-dimensional types; an image correction unit that corrects the overall illumination of the local image using the calculated illumination difference, or differentially corrects the illumination of each part of the local image; and an image transmission unit that transmits the corrected local image for output on a display screen.
  13. A real-time image processing device comprising: a memory; and at least one processor, wherein the processor collects a local image capturing a part of real space and attitude parameter information describing the 3D pose estimate associated with the capture, transmits the local image and the attitude parameter information, and in response: generates a three-dimensional virtual space from the collected local images while creating and storing a global space corresponding to the real space, extracts an expected space from the global space using the attitude parameter information, corrects the illumination of the local image using the result of calculating the difference between the illumination of the expected space and the illumination of the local image, and outputs the corrected local image to a display screen.
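Claim 3 falls back to a visual-information-based attitude estimation algorithm when no attitude sensor is available, but the patent does not name a specific algorithm. As one hedged illustration, a PnP solve over matched 2D-3D correspondences could provide such an estimate; the function name, the source of the correspondences, and the camera matrix below are assumptions made for this sketch, not part of the patent.

```python
import numpy as np
import cv2

def estimate_attitude_from_image(points_3d, points_2d, camera_matrix):
    """Illustrative visual pose estimation: recover rotation and translation from
    known 3D points and their 2D projections in the local image (needs >= 4 matches)."""
    dist_coeffs = np.zeros(5)  # assume an undistorted camera for the sketch
    ok, rvec, tvec = cv2.solvePnP(
        points_3d.astype(np.float64),
        points_2d.astype(np.float64),
        camera_matrix,
        dist_coeffs,
    )
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix from the rotation vector
    return rotation, tvec  # together these stand in for the attitude parameter information
```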
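Equation 1 in claim 8 is a fixed-weight blend of global-space and local-image color, and claim 12 allows the illumination correction to be applied either to the whole image or differentially per part. A minimal numeric sketch of both ideas follows; the function names, the block-wise interpretation of "per part", and the default weight are illustrative assumptions rather than the patent's prescription.

```python
import numpy as np

def blend_adjacent_space(global_color, local_color, omega=0.7):
    """Equation 1: adjacent-space illumination = global_color * omega + local_color * (1 - omega)."""
    return global_color * omega + local_color * (1.0 - omega)

def correct_local_image(local_lum, expected_lum, per_part=False, block=32):
    """Correct local-image luminance toward the expected space, either with a single
    global offset or block-wise (a crude stand-in for the 'differential' correction)."""
    if not per_part:
        return local_lum + (expected_lum.mean() - local_lum.mean())
    corrected = local_lum.astype(np.float64)
    h, w = local_lum.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            sl = np.s_[y:y + block, x:x + block]
            corrected[sl] += expected_lum[sl].mean() - local_lum[sl].mean()
    return corrected
```

Here `blend_adjacent_space` restates Equation 1 directly, while `correct_local_image` shows one simple way the overall-versus-per-part distinction of claim 12 could be realized.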

Description

Real-time illumination distortion correction method for images using spatial memory, and real-time image processing apparatus applied to the same

The present invention relates to a method for real-time illumination distortion correction of an image using spatial memory and a real-time image processing device to which the method is applied, and more specifically to consistently correcting the illumination of two-dimensional or three-dimensional images.

Digital imaging of real space is governed by many factors, but the primary ones are the reflectance of real objects and information about the light sources, and in most cases these determine the quality of the digital image. Image sensors based on conventional technology cannot accurately measure the reflectance of real objects or the light-source information, so many companies and research institutions today apply correction technologies (e.g., image filters) that bring digital images closer to real space based on empirical or learned results.

With recent technological advances, such digital image correction techniques are continuously being applied to devices that digitally image real space in real time, such as head-mounted displays (HMDs). However, there are still limits to accurately acquiring information about the reflectance of real objects and the light sources, and these limits become more evident in dynamic environments where images are continuously collected and output. The biggest problem is that the illumination of images collected and output in dynamic environments is distorted, which is recognized as a major cause of degraded image quality in such environments.

Such illumination distortion is a sensitive issue for devices that collect real-time video and render it to the user's vision, such as mixed reality HMDs, extended reality HMDs, and augmented reality devices. Because these rendering devices expose digital images to the user's vision at high frequencies, the user experiences the distortion of image illumination, which leads to optical illusions, distortion of reality, motion sickness, and fatigue, so a prompt solution is required. Nevertheless, conventional technology directly adjusts the brightness of the display device or corrects image illumination using only limited information around the image sensor, and therefore has structural limits in consistently correcting image illumination in dynamic environments where image acquisition and output are continuous. A method is therefore needed to consistently correct image illumination in a continuous dynamic environment.

FIG. 1 is a configuration diagram showing a real-time image processing device according to one embodiment of the present invention. FIG. 2 is a configuration diagram showing a real-time image processing device according to another embodiment of the present invention. FIG. 3 is an exemplary diagram illustrating the process of collecting a local image and attitude parameter information according to one embodiment of the present invention. FIG. 4 is an exemplary diagram illustrating the alignment process between a local image and a global space according to one embodiment of the present invention. FIG. 5 is an exemplary diagram illustrating the process of extracting an expected space according to one embodiment of the present invention. FIG. 6 is an exemplary diagram illustrating an image correction process according to one embodiment of the present invention. FIG. 7 is a flowchart illustrating a method for real-time illumination distortion correction of an image using spatial memory according to an embodiment of the present invention.

Hereinafter, embodiments will be described in detail with reference to the attached drawings. However, the scope of the claims is not limited or restricted by these embodiments. Identical reference numerals in each drawing indicate identical components. Various modifications may be made to the embodiments described below; they are not intended to limit the forms of practice and should be understood to include all modifications, equivalents, and substitutions thereof. Terms such as "first" or "second" may be used to describe various components, but these terms should be understood solely as distinguishing one component from another. For example, a first component may be named a second component, and similarly, a second component may be named a first component. The terms used in the embodiments are used merely to describe specific embodiments and are not