KR-20260067037-A - METHOD FOR PROCESSING IMAGE AND APPARATUS PERFORMING THE METHOD
Abstract
A method performed by an electronic system may include acquiring a first pixel array generated through a first lens array of an optical acquisition unit, wherein the first pixel array includes elemental images corresponding to each of a plurality of lenses of the first lens array; generating a second pixel array including perspective-deformed object images in which pixels at the same location of the elemental images of the first pixel array are rearranged; generating a third pixel array by inversely rearranging pixels included in each of the perspective-deformed object images of the second pixel array; and outputting three-dimensional image data based on the third pixel array through a display panel of an optical display unit and a second lens array.
Inventors
- 최재관
- 우성수
Assignees
- 재단법인 구미전자정보기술원 (Gumi Electronics & Information Technology Research Institute)
Dates
- Publication Date: 2026-05-12
- Application Date: 2024-11-05
Claims (12)
- A method performed by an electronic system, the method comprising: acquiring a first pixel array generated through a first lens array of an optical acquisition unit, the first pixel array including elemental images (EIs) corresponding to each of a plurality of lenses of the first lens array; generating a second pixel array including perspective-deformed object images (POIs) in which pixels at the same location of the elemental images of the first pixel array are rearranged; generating a third pixel array by inversely rearranging pixels included in each of the perspective-deformed object images of the second pixel array; and outputting three-dimensional image data based on the third pixel array through a display panel of an optical display unit and a second lens array.
- The method of claim 1, wherein a charge-coupled device (CCD) of the optical acquisition unit generates the first pixel array by acquiring light reflected from an actual scene through the first lens array.
- The method of claim 1, wherein, among the perspective-deformed object images included in the second pixel array, a first perspective-deformed object image is one in which pixels at a first position of the elemental images are rearranged according to the arrangement of the first lens array.
- The method of claim 1, wherein sizes of single lenses of the first lens array and the second lens array are different from each other.
- The method of claim 1, wherein a resolution of a single lens of the second lens array corresponds to the arrangement of the first lens array.
- The method of claim 1, wherein each of the perspective-deformed object images included in the second pixel array corresponds to a single lens of the second lens array.
- The method of claim 1, wherein a distance between the display panel of the optical display unit and the second lens array is equal to a focal length of the second lens array.
- The method of claim 1, wherein the distance between the display panel of the optical display unit and the second lens array is shorter than the focal length of the second lens array.
- A computer program stored on a computer-readable recording medium, which, in combination with hardware, executes the method of any one of claims 1 to 8.
- An electronic system comprising an optical acquisition unit and an optical display unit, the electronic system performing: acquiring a first pixel array generated through a first lens array of the optical acquisition unit, the first pixel array including elemental images (EIs) corresponding to each of a plurality of lenses of the first lens array; generating a second pixel array including perspective-deformed object images (POIs) in which pixels at the same location of the elemental images of the first pixel array are rearranged; generating a third pixel array by inversely rearranging pixels included in each of the perspective-deformed object images of the second pixel array; and outputting three-dimensional image data based on the third pixel array through a display panel of the optical display unit and a second lens array.
- An electronic device comprising: at least one processor including processing circuitry; and a memory comprising one or more storage media storing instructions, wherein the instructions, when executed individually or collectively by the at least one processor, cause the electronic device to perform: acquiring a first pixel array generated through a first lens array of an optical acquisition unit, the first pixel array including elemental images (EIs) corresponding to each of a plurality of lenses of the first lens array; generating a second pixel array including perspective-deformed object images (POIs) in which pixels at the same location of the elemental images of the first pixel array are rearranged; generating a third pixel array by inversely rearranging pixels included in each of the perspective-deformed object images of the second pixel array; and transmitting three-dimensional image data based on the third pixel array to an external electronic device.
- The electronic device of claim 11, wherein the three-dimensional image data is output through a display panel of an optical display unit of the external electronic device and a second lens array.
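The three rearrangement operations recited in claim 1 can be sketched in NumPy. This is a minimal illustration only: the lens-grid and per-lens pixel dimensions, the function name, and the exact axis layout are assumptions not specified by the patent.

```python
import numpy as np

def rearrange(first_array, num_lenses, pixels_per_lens):
    """Sketch of the claimed pipeline: elemental images (EIs) ->
    perspective-deformed object images (POIs) -> inverse rearrangement."""
    M, N = num_lenses          # lens grid of the first lens array
    p, q = pixels_per_lens     # sensor pixels behind each lens

    # Split the captured first pixel array into per-lens EIs: (M, N, p, q).
    eis = first_array.reshape(M, p, N, q).transpose(0, 2, 1, 3)

    # Second pixel array: gather the pixel at the same location (u, v)
    # from every EI; each POI is an M x N image indexed by lens position.
    pois = eis.transpose(2, 3, 0, 1)            # shape (p, q, M, N)

    # Third pixel array: inversely rearrange the pixels inside each POI
    # (reverse their order), then assemble back into a flat display image.
    inv = pois[:, :, ::-1, ::-1]
    third = (inv.transpose(2, 3, 0, 1)          # back to (M, N, p, q)
                .transpose(0, 2, 1, 3)          # interleave lens/pixel axes
                .reshape(M * p, N * q))
    return pois, third
```

For a 2x2 lens grid with 2x2 pixels per lens, each POI collects one pixel from each of the four EIs, and the inverse rearrangement reverses the lens-order of those pixels, which is the depth-inversion correction the claims describe.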
Description
Image processing method and electronic apparatus performing the same

The present disclosure relates to an image processing method and apparatus, and more specifically to a method for restoring a three-dimensional image of a real object or scene using light field technology. Light field technology can provide three-dimensional images with minimized visual distortion by acquiring light intensity and direction information with a camera and reproducing it exactly on a display. Integral imaging is a passive multi-viewpoint imaging technique that records multiple 2D images of a 3D scene from various perspectives using a lens array (or multiple cameras). By projecting light rays reflected from the actual scene onto a charge-coupled device (CCD) camera through a lens array, images from each viewpoint corresponding to the lenses of the lens array can be acquired as multiple elemental images. Based on these elemental images, information from various angles can be provided depending on the direction from which the user is viewing.

The information described above is provided as related art for the purpose of aiding understanding of the present disclosure. No claim or determination is made as to whether any of the foregoing is applicable as prior art related to the present disclosure.

FIG. 1 is a diagram illustrating the depth inversion phenomenon. FIG. 2 is a diagram illustrating an electronic system according to one embodiment. FIG. 3 is a flowchart of an image processing method according to one embodiment. FIG. 4 is a diagram illustrating pixel rearrangement. FIGS. 5 and 6 are diagrams illustrating pixel rearrangement based on a perspective-deformed object image.

Specific structural or functional descriptions of the embodiments are disclosed for illustrative purposes only and may be modified and implemented in various forms.
Accordingly, actual implementations are not limited to the specific embodiments disclosed, and the scope of this specification includes modifications, equivalents, or substitutions of the technical concept described by the embodiments.

Terms such as "first" or "second" may be used to describe various components, but these terms should be interpreted solely as distinguishing one component from another. For example, a first component may be named a second component and, similarly, a second component may be named a first component. When a component is said to be "connected" to another component, it may be directly connected to or coupled with that other component, or intervening components may be present.

A singular expression includes the plural expression unless the context clearly indicates otherwise. In this specification, terms such as "comprising" or "having" specify the presence of the described features, numbers, steps, operations, components, parts, or combinations thereof, and should not be understood as precluding the presence or addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.

Unless otherwise defined, all terms used herein, including technical and scientific terms, have the same meanings as generally understood by those skilled in the art. Terms defined in commonly used dictionaries should be interpreted as having meanings consistent with their meanings in the context of the relevant technology, and should not be interpreted in an idealized or overly formal sense unless explicitly so defined in this specification.

Hereinafter, embodiments will be described in detail with reference to the attached drawings.
In the description with reference to the attached drawings, identical components are given the same reference numerals regardless of the drawing number, and redundant descriptions thereof are omitted.

FIG. 1 is a diagram illustrating the depth inversion phenomenon. Referring to FIG. 1, a light field system may include an image acquisition (or pick-up) unit that collects light reflected from an object and an image display unit that reproduces the image. The image acquisition unit may include a lens array and an image sensor (e.g., a CCD camera). The image display unit may include a display panel and a lens array. Integral imaging records multiple 2D images of a 3D scene from various perspectives using the lens array (or multiple cameras) of the image acquisition unit. Light reflected from a 3D object passes through the lens array and reaches the image sensor, which separates the light entering from various directions into individual elemental images and records them. The depth information of the object can be determined from the relative positions of the elemental images according to the arrangement (or location) of the lenses in the lens array. The display panel of the image display unit outputs the elemental images, and the output elemental images can be provided to the user.
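How the relative positions of the elemental images encode depth can be illustrated with a one-dimensional pinhole model of the acquisition unit. All numeric values below (lens pitch, lens-to-sensor gap, point position) are arbitrary assumptions for illustration, not values from the patent.

```python
import numpy as np

# 1-D pinhole model of elemental-image capture.
pitch = 1.0                      # spacing between lens centers
g = 2.0                          # gap between lens array and sensor
lens_x = np.arange(4) * pitch    # centers of four lenses in the array
X, Z = 1.5, 10.0                 # lateral position and depth of a 3-D point

# Pinhole projection: in each elemental image the point appears at an
# offset from that lens's optical axis proportional to (X - lens_x) / Z.
offsets = -g * (X - lens_x) / Z

# The constant disparity between neighbouring elemental images is
# g * pitch / Z, so a nearer point (smaller Z) shifts more between EIs.
disparity = np.diff(offsets)
```

This is why the relative positions of the same scene point across the elemental images determine its depth: the per-lens shift depends only on the lens geometry and Z.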