
US-20260129170-A1 - IMAGE DATA ENCODING/DECODING METHOD AND APPARATUS

US 20260129170 A1

Abstract

Disclosed are methods and apparatuses for decoding an image. A method includes receiving a bitstream obtained by encoding the image; dividing a first coding block into a plurality of second coding blocks; generating a prediction block of a second coding block based on syntax information obtained from the bitstream; and reconstructing the second coding block based on the prediction block and a residual block of the second coding block, the residual block being obtained by performing a dequantization and an inverse-transform on quantized transform coefficients from the bitstream. The first coding block has a recursive division structure. The first coding block is divided based on at least one of a quad tree division, a binary tree division or a triple tree division.
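The recursive division described in the abstract, in which a first coding block is split into second coding blocks by quad tree, binary tree, or triple tree rules, can be sketched as follows. This is an illustrative reading of the partitioning scheme, not the codec's actual implementation; the block sizes, split names, and the split-decision callback are assumptions for the example.

```python
# Hypothetical sketch of recursive coding-block division: a block is
# split by quad-tree (4 equal quadrants), binary-tree (2 halves), or
# triple-tree (3 parts in a 1:2:1 ratio) rules until a decision
# function chooses not to split further.

def split_block(x, y, w, h, mode):
    """Return sub-blocks (x, y, w, h) for one split mode."""
    if mode == "quad":                       # 4 equal quadrants
        hw, hh = w // 2, h // 2
        return [(x, y, hw, hh), (x + hw, y, hw, hh),
                (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]
    if mode == "binary_h":                   # 2 halves, horizontal split
        return [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]
    if mode == "triple_v":                   # 3 parts, 1:2:1, vertical split
        q = w // 4
        return [(x, y, q, h), (x + q, y, 2 * q, h), (x + 3 * q, y, q, h)]
    return []                                # "none": leaf block

def partition(x, y, w, h, decide, leaves):
    """Recursively divide a block until decide() chooses no split."""
    subs = split_block(x, y, w, h, decide(w, h))
    if not subs:
        leaves.append((x, y, w, h))
        return
    for sx, sy, sw, sh in subs:
        partition(sx, sy, sw, sh, decide, leaves)

# Example: quad-split a 64x64 first coding block once, then stop.
leaves = []
partition(0, 0, 64, 64, lambda w, h: "quad" if w == 64 else "none", leaves)
```

In a real codec the decision function would be driven by syntax elements parsed from the bitstream rather than a fixed rule.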

Inventors

  • Ki Baek Kim

Assignees

  • B1 INSTITUTE OF IMAGE TECHNOLOGY, INC.

Dates

Publication Date
2026-05-07
Application Date
2026-01-05
Priority Date
2016-10-04

Claims (6)

  1. A method of decoding an image, comprising: predicting blocks in the image to generate prediction blocks of the blocks; inverse-quantizing quantized transform coefficients of the blocks to generate inverse-quantized transform coefficients of the blocks; inverse-transforming the inverse-quantized transform coefficients to derive residual blocks of the blocks; reconstructing the image based on the prediction blocks and the residual blocks; obtaining information indicating a position of a region in the reconstructed image from a bitstream; and identifying the region based on the information, wherein the region is related to an object included in the reconstructed image.
  2. The method of claim 1, wherein a size of the reconstructed image is smaller than a size of an image indicated by encoding information.
  3. The method of claim 1, wherein the information is obtained from a supplemental enhancement information (SEI) message of the bitstream.
  4. The method of claim 1, wherein, based on a value of a flag obtained from the bitstream, the information is determined depending on information included in a previous SEI message.
  5. A method of encoding an image, comprising: predicting blocks in the image to generate prediction blocks of the blocks; transforming residual blocks of the blocks to derive transform coefficients of the blocks; quantizing the transform coefficients to generate quantized transform coefficients of the blocks; reconstructing the image based on the prediction blocks and the residual blocks; identifying a region in the reconstructed image; and generating information indicating a position of the identified region, wherein the region is related to an object included in the reconstructed image.
  6. A method of transmitting a bitstream, comprising: predicting blocks in an image to generate prediction blocks of the blocks; transforming residual blocks of the blocks to derive transform coefficients of the blocks; quantizing the transform coefficients to generate quantized transform coefficients of the blocks; reconstructing the image based on the prediction blocks and the residual blocks; identifying a region in the reconstructed image; generating information indicating a position of the identified region; generating the bitstream including the generated information; and transmitting the bitstream, wherein the region is related to an object included in the reconstructed image.
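The reconstruction path common to the claims above — inverse-quantize coefficients, inverse-transform them into a residual block, and add the prediction block — can be sketched numerically as follows. The 2-point Haar transform and the quantization step size here are illustrative stand-ins; they are not the transform or scaling actually used by the codec.

```python
# Minimal numeric sketch of the decode-side reconstruction path:
# inverse quantization, inverse transform, then prediction + residual
# with clipping to the 8-bit sample range.

def inverse_quantize(qcoeffs, qstep):
    """Scale quantized coefficients back by the quantization step."""
    return [c * qstep for c in qcoeffs]

def inverse_haar_2pt(coeffs):
    """Inverse of the orthonormal 2-point Haar transform (stand-in)."""
    s, d = coeffs
    r = 2 ** 0.5
    return [(s + d) / r, (s - d) / r]

def reconstruct(pred, qcoeffs, qstep):
    """Reconstruct a block from its prediction and coded residual."""
    residual = inverse_haar_2pt(inverse_quantize(qcoeffs, qstep))
    # Clip each sample to [0, 255] after adding the prediction.
    return [min(255, max(0, round(p + e))) for p, e in zip(pred, residual)]

# Example: prediction [100, 104], residual sent as Haar coefficients.
block = reconstruct([100, 104], qcoeffs=[2.0, 1.0], qstep=2 ** 0.5)
```

The encoding method of claim 5 runs the mirror image of this path (forward transform, then quantization) and keeps the same reconstruction loop so that encoder and decoder predictions stay in sync.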

Description

RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 18/771,226, filed Jul. 12, 2024, which is a continuation of U.S. patent application Ser. No. 18/300,574, filed Apr. 14, 2023, now U.S. Pat. No. 12,323,568, which is a continuation of U.S. patent application Ser. No. 17/073,225, filed Oct. 16, 2020, now U.S. Pat. No. 12,028,503, which is a continuation of U.S. patent application Ser. No. 16/372,251, filed Apr. 1, 2019, which is a continuation of International Patent Application Serial No. PCT/KR2017/011144, filed Oct. 10, 2017, which claims priority to Korean Patent Application Serial No. 10-2016-0127883, filed Oct. 4, 2016; Korean Patent Application Serial No. 10-2016-0129383, filed Oct. 6, 2016; and Korean Patent Application Serial No. 10-2017-0090613, filed Jul. 17, 2017. All of these applications are incorporated by reference herein in their entireties.

TECHNICAL FIELD

The present invention relates to image data encoding and decoding technology and, more particularly, to a method and apparatus for encoding and decoding a 360-degree image for realistic media services.

BACKGROUND

With the spread of the Internet and mobile terminals and the development of information and communication technology, the use of multimedia data is increasing rapidly. Recently, demand for high-resolution and high-quality images, such as high definition (HD) and ultra high definition (UHD) images, has been growing in various fields, and demand for realistic media services such as virtual reality and augmented reality is increasing rapidly.
In particular, since multi-view images captured with a plurality of cameras must be processed to produce 360-degree images for virtual reality and augmented reality, the amount of data generated during processing increases massively, yet the performance of image processing systems for handling such large amounts of data is insufficient. Accordingly, image encoding and decoding methods and apparatuses of the related art demand improved image processing performance, particularly in image encoding/decoding.

SUMMARY

It is an object of the present invention to provide a method for improving the image setting process in the initial steps of encoding and decoding. More particularly, the present invention is directed to providing an encoding and decoding method and apparatus that improve the image setting process in consideration of the characteristics of a 360-degree image.

According to an aspect of the present invention, there is provided a method of decoding a 360-degree image. Here, the method of decoding a 360-degree image may include receiving a bitstream including an encoded 360-degree image, generating a predicted image with reference to syntax information acquired from the received bitstream, acquiring a decoded image by combining the generated predicted image with a residual image acquired by inversely quantizing and inversely transforming the bitstream, and reconstructing the decoded image into the 360-degree image according to a projection format. Here, the syntax information may include projection format information for the 360-degree image.
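The Equi-Rectangular Projection (ERP) named among the projection formats maps each sample of the decoded 2D plane to a longitude/latitude position on the sphere, which is how the decoded image is reinterpreted as a 360-degree image. The sketch below follows the common ERP convention with half-sample centering; the codec's exact sample-position offsets may differ, so treat the formulas as illustrative.

```python
# Illustrative ERP mapping between a W x H decoded plane and spherical
# coordinates (longitude in [-pi, pi), latitude in [-pi/2, pi/2]).

import math

def erp_to_sphere(u, v, w, h):
    """Map plane coords (u, v) of a w x h ERP image to (lon, lat) radians."""
    lon = (u + 0.5) / w * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v + 0.5) / h * math.pi
    return lon, lat

def sphere_to_erp(lon, lat, w, h):
    """Inverse mapping: (lon, lat) back to ERP plane coords."""
    u = (lon + math.pi) / (2.0 * math.pi) * w - 0.5
    v = (math.pi / 2.0 - lat) / math.pi * h - 0.5
    return u, v

# Round trip for the centre sample of a 4096x2048 ERP frame.
lon, lat = erp_to_sphere(2047.5, 1023.5, 4096, 2048)
u, v = sphere_to_erp(lon, lat, 4096, 2048)
```

The other listed formats (CMP, OHP, ISP) replace the sphere-to-plane step with a projection onto the faces of a cube, octahedron, or icosahedron, but the decode-then-reproject structure is the same.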
Here, the projection format information may be information indicating at least one of an Equi-Rectangular Projection (ERP) format in which the 360-degree image is projected onto a 2D plane, a CubeMap Projection (CMP) format in which the 360-degree image is projected onto a cube, an OctaHedron Projection (OHP) format in which the 360-degree image is projected onto an octahedron, and an IcoSahedral Projection (ISP) format in which the 360-degree image is projected onto an icosahedron.

Here, the reconstructing may include acquiring arrangement information according to region-wise packing with reference to the syntax information and rearranging blocks of the decoded image according to the arrangement information.

Here, the generating of the predicted image may include performing image expansion on a reference picture acquired by restoring the bitstream, and generating the predicted image with reference to the reference picture on which the image expansion has been performed. Here, the performing of the image expansion may include performing the image expansion on the basis of partitioning units of the reference picture. Here, the performing of the image expansion on the basis of the partitioning units may include generating an expanded region individually for each partitioning unit by using a reference pixel of the partitioning unit. Here, the expanded region may be generated using a boundary pixel of a partitioning unit spatially adjacent to the partitioning unit to be expanded, or using a boundary pixel of a partitioning unit having image continuity with the partitioning unit to be expanded. Here, the performing of the image expansion on the basis of the partitioning units may include generating an expanded