KR-20260066922-A - Progressive spatial adaptive matching method for reference-based super resolution and system therefor

KR 20260066922 A

Abstract

An embodiment of the present invention provides a progressive matching method and a system for the same, comprising: (A) preparing a low-resolution image and a high-resolution reference image; (B) matching features of the low-resolution image with features of the reference image; (C) calculating a first similarity between the features of the low-resolution image and the features of the reference image; (D) transferring a patch corresponding to a feature of the reference image to the low-resolution image based on the first similarity; (E) modifying features of the patch based on a size and/or orientation of the image; (F) calculating a second similarity between the features of the modified patch and the features of the low-resolution image to form an attention map; and (G) restoring high-frequency information of the low-resolution image.
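Steps (A)–(G) of the abstract can be sketched as a minimal toy pipeline. This is an illustration only, assuming raw pixel patches as "features", cosine similarity for both similarity computations, and no spatial modification in step (E); all function names are ours, not the patent's.

```python
import numpy as np

def extract_patches(feat, size):
    """Split a (H, W) feature map into flattened, non-overlapping size x size patches."""
    h, w = feat.shape
    patches = []
    for i in range(0, h - size + 1, size):
        for j in range(0, w - size + 1, size):
            patches.append(feat[i:i + size, j:j + size].ravel())
    return np.stack(patches)

def cosine_similarity(q, k):
    """r[i, j]: cosine similarity between query patch i and key patch j."""
    q = q / (np.linalg.norm(q, axis=1, keepdims=True) + 1e-8)
    k = k / (np.linalg.norm(k, axis=1, keepdims=True) + 1e-8)
    return q @ k.T

def progressive_match(lr_feat, ref_feat, size=2):
    q = extract_patches(lr_feat, size)    # (B) features of the LR image
    k = extract_patches(ref_feat, size)   # (B) features of the reference image
    r = cosine_similarity(q, k)           # (C) first similarity
    h = r.argmax(axis=1)                  # best-matching reference patch per LR patch
    transferred = k[h]                    # (D) patches transferred to the LR image
    # (E) would spatially modify `transferred` by size/orientation; omitted here.
    # (F) second similarity between (modified) patches and LR features -> attention map
    attn = cosine_similarity(transferred, q).diagonal()
    return transferred, attn              # attn would weight (G) high-frequency restoration
```

With an identical LR and reference image, every LR patch matches itself and the attention weights are all 1.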

Inventors

  • 전광길 (Jeon Gwang-gil)

Assignees

  • 인천대학교 산학협력단 (Incheon National University Industry-Academic Cooperation Foundation)

Dates

Publication Date
2026-05-12
Application Date
2024-11-05

Claims (20)

  1. A progressive matching method comprising: (A) a step of preparing a low-resolution image and a high-resolution reference image; (B) a step of matching features of the low-resolution image with features of the reference image; (C) a step of calculating a first similarity between the features of the low-resolution image and the features of the reference image; (D) a step of transferring a patch corresponding to a feature of the reference image to the low-resolution image based on the first similarity; (E) a step of modifying features of the patch based on a size and/or orientation of the image; (F) a step of forming an attention map by calculating a second similarity between the features of the modified patch and the features of the low-resolution image; and (G) a step of restoring high-frequency information of the low-resolution image.
  2. The progressive matching method of claim 1, wherein the first similarity is calculated using Mathematical Formula 1. [Mathematical Formula 1] (Here, r_i,j represents the similarity between F_lr and F_ref, q_i represents the i-th patch of F_lr, and k_j represents the j-th patch of F_ref.)
  3. The progressive matching method of claim 2, wherein a position h_i of the reference image patch most similar to the i-th patch of the low-resolution image is calculated using Mathematical Formula 2. [Mathematical Formula 2]
  4. The progressive matching method of claim 1, wherein step (E) comprises: (E-1) a step of learning a relationship between the features of the low-resolution image and the features of the patch; (E-2) a step of reducing the size of the features of the low-resolution image and the features of the patch; (E-3) a step of normalizing the reduced features; and (E-4) a step of outputting a predetermined number of features, wherein some of the predetermined number of features indicate whether additional modification is required, and the remainder indicate coordinates and values to be modified.
  5. The progressive matching method of claim 4, wherein in step (E) a loss function for controlling a result of the modification is further applied.
  6. The progressive matching method of claim 5, wherein the loss function L_mc is calculated using Mathematical Formula 4. [Mathematical Formula 4] (Here, p_i represents the penalty for the i-th image, and N represents the total number of images.)
  7. The progressive matching method of claim 6, wherein the penalty p_i is calculated using Mathematical Formula 3, a small penalty being given when a correct modification is made and a large penalty being given when an incorrect modification is made. [Mathematical Formula 3] (Here, p_mt represents the penalty for correct matching, p_mf represents the penalty for incorrect matching, sc_i represents the similarity of the i-th image after modification, and sp_i represents the similarity of the i-th image before modification.)
  8. The progressive matching method of claim 1, wherein step (G) is performed by applying a gradient map of the low-resolution image.
  9. The progressive matching method of claim 8, wherein the gradient map is obtained by training using residual blocks on the low-resolution image.
  10. A progressive matching system comprising: a memory; and a processor connected to the memory and configured to execute computer-readable instructions contained in the memory, wherein the processor performs: (A) an operation of matching features of a low-resolution image with features of a reference image; (B) an operation of calculating a first similarity between the features of the low-resolution image and the features of the reference image; (C) an operation of transferring a patch corresponding to a feature of the reference image to the low-resolution image based on the first similarity; (D) an operation of modifying features of the patch based on a size and/or orientation of the image; (E) an operation of forming an attention map by calculating a second similarity between the features of the modified patch and the features of the low-resolution image; and (F) an operation of restoring high-frequency information of the low-resolution image.
  11. The progressive matching system of claim 10, wherein the first similarity is calculated using Mathematical Formula 1. [Mathematical Formula 1] (Here, r_i,j represents the similarity between F_lr and F_ref, q_i represents the i-th patch of F_lr, and k_j represents the j-th patch of F_ref.)
  12. The progressive matching system of claim 11, wherein a position h_i of the reference image patch most similar to the i-th patch of the low-resolution image is calculated using Mathematical Formula 2. [Mathematical Formula 2]
  13. The progressive matching system of claim 10, wherein operation (D) comprises: (D-1) an operation of learning a relationship between the features of the low-resolution image and the features of the patch; (D-2) an operation of reducing the size of the features of the low-resolution image and the features of the patch; (D-3) an operation of normalizing the reduced features; and (D-4) an operation of outputting a predetermined number of features, wherein some of the predetermined number of features indicate whether additional modification is required, and the remainder indicate coordinates and values to be modified.
  14. The progressive matching system of claim 13, wherein in operation (D) a loss function for controlling a result of the modification is further applied.
  15. The progressive matching system of claim 14, wherein the loss function L_mc is calculated using Mathematical Formula 4. [Mathematical Formula 4] (Here, p_i represents the penalty for the i-th image, and N represents the total number of images.)
  16. The progressive matching system of claim 15, wherein the penalty p_i is calculated using Mathematical Formula 3, a small penalty being given when a correct modification is made and a large penalty being given when an incorrect modification is made. [Mathematical Formula 3] (Here, p_mt represents the penalty for correct matching, p_mf represents the penalty for incorrect matching, sc_i represents the similarity of the i-th image after modification, and sp_i represents the similarity of the i-th image before modification.)
  17. The progressive matching system of claim 10, wherein operation (F) is performed by applying a gradient map of the low-resolution image.
  18. The progressive matching system of claim 17, wherein the gradient map is obtained by training using residual blocks on the low-resolution image.
  19. A computer-readable storage medium on which a program for executing a progressive matching method is recorded, the program executing: (A) an operation of matching features of a low-resolution image with features of a reference image; (B) an operation of calculating a first similarity between the features of the low-resolution image and the features of the reference image; (C) an operation of transferring a patch corresponding to a feature of the reference image to the low-resolution image based on the first similarity; (D) an operation of modifying features of the patch based on a size and/or orientation of the image; (E) an operation of forming an attention map by calculating a second similarity between the features of the modified patch and the features of the low-resolution image; and (F) an operation of restoring high-frequency information of the low-resolution image.
  20. The computer-readable storage medium of claim 19, wherein operation (D) comprises: (D-1) an operation of learning a relationship between the features of the low-resolution image and the features of the patch; (D-2) an operation of reducing the size of the features of the low-resolution image and the features of the patch; (D-3) an operation of normalizing the reduced features; and (D-4) an operation of outputting a predetermined number of features, wherein some of the predetermined number of features indicate whether additional modification is required, and the remainder indicate coordinates and values to be modified.
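The claims refer to Mathematical Formulas 1–4 only by placeholder. Based solely on the variable definitions given in the claims, and on the normalized inner-product matching commonly used in reference-based super-resolution, one plausible reading is the following; these are reconstructions offered as assumptions, not the patent's exact equations:

```latex
% Formula 1 (assumed): first similarity as a normalized inner product
r_{i,j} = \left\langle \frac{q_i}{\lVert q_i \rVert},\; \frac{k_j}{\lVert k_j \rVert} \right\rangle

% Formula 2 (assumed): position of the most similar reference patch
h_i = \operatorname*{arg\,max}_{j}\; r_{i,j}

% Formula 3 (assumed): per-image penalty, small when the modification
% improves the similarity (sc_i > sp_i) and large otherwise
p_i = \begin{cases} p_{mt}, & sc_i > sp_i \\ p_{mf}, & \text{otherwise} \end{cases}

% Formula 4 (assumed): modification-control loss as the mean penalty
L_{mc} = \frac{1}{N} \sum_{i=1}^{N} p_i
```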

Description

Progressive spatial adaptive matching method for reference-based super resolution and system therefor

The present invention relates to a progressive matching method for reference-based super-resolution, and a system for the same, which can modify matched features of a low-resolution image and features of a reference image based on size and/or orientation.

Image super-resolution (SR) is a low-level computer vision task that aims to reconstruct a sharp, clear high-resolution (HR) image from a low-resolution (LR) image. SR-based methods are therefore widely applied in various fields (e.g., medical imaging, remote sensing imaging, facial imaging) to improve image quality. Recently, single image super-resolution (SISR) has advanced significantly; for example, research on attention mechanisms and on deepening convolutional neural networks has been conducted to achieve better performance, and recursion-based methods are also gaining attention. However, SISR still faces significant challenges in reconstructing lost high-frequency information. To address these issues, numerous SISR methods based on Generative Adversarial Networks (GANs) have been developed, but such methods can produce unexpected hallucinations and artifacts while generating high-frequency information. To generate more realistic high-frequency information, reference-based super-resolution (RefSR) methods have been proposed, which transfer texture information from a reference image that is better suited to the low-resolution image. Recently, RefSR has made significant progress in producing visually satisfactory reconstructions. However, problems remain in matching low-resolution images with reference images, which can result in unsatisfactory SR images. For example, a method has been proposed to efficiently match highly correlated image patches using features matched from the reference image to the low-resolution image.
This matching method has been widely used in reference-based super-resolution due to its high matching performance. However, it faces two problems when transferring features from a reference image. First, it matches image patches in isolation between the features of the low-resolution image and those of the reference image, failing to consider that other features are required for the reconstruction results of successive upsampling. Second, since there are often differences in scale and orientation between the low-resolution image and the reference image, simply computing the relevance of image patches may not transfer the correct features.

FIG. 1 is a photograph showing the difference between matching methods according to an embodiment of the present invention and a comparative example. FIG. 2 is a diagram schematically illustrating the framework of a progressive spatial adaptation module (PSAM) according to an embodiment of the present invention. FIG. 3 is a diagram showing a qualitative comparison on the CUFED5 test set. FIG. 4 is a diagram showing a qualitative comparison on the Sun80 test set. FIG. 5 is a diagram showing a qualitative comparison on the Urban100 test set. FIG. 6 is a diagram showing the results of an ablation experiment for the spatial modification module. FIG. 7 is a flowchart of a progressive matching method for reference-based super-resolution according to an embodiment of the present invention. FIG. 8 is a block diagram showing a progressive matching system for reference-based super-resolution according to an embodiment of the present invention.

Hereinafter, embodiments disclosed in this specification will be described in detail with reference to the attached drawings. Identical or similar components are assigned the same reference numeral regardless of the drawing, and redundant descriptions thereof are omitted.
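The scale-mismatch problem described above can be illustrated with a toy example: when the reference shows the same texture at twice the scale, naive patch correlation fails, while rescaling the reference patch before comparison recovers the match. Here 2x average pooling stands in for the patent's learned spatial modification; all names and values are illustrative.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two flattened patches."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def downscale2x(patch):
    """2x2 average pooling -- a crude stand-in for a learned spatial modification."""
    h, w = patch.shape
    return patch.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A reference patch showing the same checker texture as the LR patch, but at 2x scale.
lr_patch = np.array([[0., 1.],
                     [1., 0.]])
ref_patch = np.array([[0., 0., 1., 1.],
                      [0., 0., 1., 1.],
                      [1., 1., 0., 0.],
                      [1., 1., 0., 0.]])

# Naive matching: crop the reference to the LR patch size and compare directly.
raw = cosine(lr_patch.ravel(), ref_patch[:2, :2].ravel())
# Scale-aware matching: rescale the reference patch first, then compare.
modified = cosine(lr_patch.ravel(), downscale2x(ref_patch).ravel())
```

The naive crop sees only the top-left corner of the texture and yields no correlation, while the rescaled patch matches the LR patch almost perfectly.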
The suffixes "module" and "part" used for components in the following description are assigned or used interchangeably solely for ease of drafting the specification and do not in themselves carry distinct meanings or roles. In this description, expressions such as "include," "equip," or "comprise" are intended to refer to certain characteristics, numbers, steps, actions, elements, parts, or combinations thereof, and should not be interpreted to exclude the existence or possibility of one or more other characteristics, numbers, steps, actions, elements, parts, or combinations thereof. In addition, when describing the embodiments disclosed in this specification, detailed descriptions of related prior art are omitted where they could obscure the essence of the embodiments. The attached drawings are intended only to facilitate understanding of the embodiments disclosed in this specification, and the technical concept disclosed in this specification is not limited by the attached drawings.