CN-115797176-B - Image super-resolution reconstruction method
Abstract
The invention discloses an image super-resolution reconstruction method comprising the steps of: obtaining a training data set and performing augmentation on it; constructing an image super-resolution reconstruction network model; training the constructed image super-resolution reconstruction network model with the augmented training data set; and inputting an image to be reconstructed into the trained image super-resolution reconstruction network model to obtain a reconstructed super-resolution image.
Inventors
- XU BINBIN
- ZHENG YUHUI
Assignees
- Nanjing University of Information Science and Technology (南京信息工程大学)
Dates
- Publication Date
- 20260505
- Application Date
- 20221130
Claims (6)
- 1. An image super-resolution reconstruction method, characterized by comprising the following steps: acquiring a training data set and performing augmentation on the training data set; constructing an image super-resolution reconstruction network model; training the constructed image super-resolution reconstruction network model with the augmented training data set; and inputting the image to be reconstructed into the trained image super-resolution reconstruction network model to obtain a reconstructed super-resolution image. The image super-resolution reconstruction network model comprises a shallow feature extraction module, a non-local contrast enhancement residual group module, an up-sampling module and a reconstruction module. The shallow feature extraction module is used for extracting shallow features from the augmented training data set to obtain a shallow image feature map F0; the non-local contrast enhancement residual group module is used for obtaining the image depth feature output FDF from the shallow image feature map F0; the up-sampling module is used for up-sampling the image depth feature output FDF to obtain an up-sampled feature map, denoted F↑; the reconstruction module is used for reconstructing the up-sampled feature map F↑ to obtain the reconstructed high-resolution image, denoted xSR. The non-local contrast enhancement residual group module comprises two non-local contrast attention modules and a second-order attention sharing source residual group module. One non-local contrast attention module is used for obtaining a non-local attention feature map FNL from the shallow image feature map F0, as shown in formulas (1), (2) and (3):
  Q = θ(F0), K = δ(F0), V = g(F0) (1)
  D = Σj exp(Qi · Kj) (2)
  FNL,i = (1/D) Σj (Φ(Qi) · Φ(Kj)) Vj (3)
  where, with the magnification factor as a scaling factor, Q, K and V respectively denote three different mappings of the shallow image feature map F0; θ, δ and g are the corresponding feature transformation functions; Qi and Kj are the pixel features of the feature map at position i under the mapping Q and position j under the mapping K, respectively; Φ is an unbiased approximation function; and D is the normalization term of the softmax function. The second-order attention sharing source residual group module is used for extracting the image depth feature FG from the non-local attention feature map FNL, as shown in formula (4):
  FG = WSSC ∗ Fg (4)
  where the second-order attention sharing source residual group module is formed by connecting a plurality of second-order attention sharing source residual modules in series, WSSC denotes the weights of the convolution layer, and Fg is the output of the g-th second-order attention sharing source residual module;
  Fg = Hg(Fg−1) (5)
  where Hg denotes the function of the g-th second-order attention sharing source residual module, and Fg−1 is the input of the g-th second-order attention sharing source residual module. The other non-local contrast attention module is used for obtaining the deeper image depth feature FDF from the image depth feature FG.
- 2. The image super-resolution reconstruction method according to claim 1, wherein the image super-resolution reconstruction network model further comprises an adaptive target generation module, configured to generate an adaptive target and use it to continue training the trained image super-resolution reconstruction network model f, obtaining the final image super-resolution reconstruction network model F; the image to be reconstructed is input into the final image super-resolution reconstruction network model F to obtain the final high-resolution reconstructed image, denoted F(xLR).
- 3. The image super-resolution reconstruction method according to claim 1, wherein the loss function L1 used to train the image super-resolution reconstruction network model is:
  L1 = (1/N) Σi |xSR(i) − xHR(i)| (6)
  where xSR(i) denotes the i-th pixel of the image predicted by the super-resolution reconstruction network model, xHR(i) denotes the i-th pixel of the original high-resolution image, and N denotes the total number of pixels in the image.
- 4. The image super-resolution reconstruction method according to claim 2, wherein generating an adaptive target and continuing to train the trained image super-resolution reconstruction network model f to obtain the final image super-resolution reconstruction network model F comprises: generating an adaptive target image xAT by formula (7):
  xAT = G(xSR, xHR) (7)
  where G denotes the function corresponding to the ATG module, and xSR and xHR respectively denote the current predicted output of the model and the corresponding original high-resolution image; the model f is further trained with the adaptive target image xAT to obtain the final image super-resolution reconstruction network model F:
  LAT = (1/N) Σi |xSR(i) − xAT(i)| (8)
  where LAT is the corresponding L1 loss function and i denotes the index of the pixels in the image.
- 5. An image super-resolution reconstruction device, characterized by comprising a processor and a storage medium; the storage medium is configured to store instructions; the processor is configured to operate according to the instructions to perform the steps of the method according to any one of claims 1 to 4.
- 6. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of the method according to any one of claims 1 to 4.
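The pixel-wise loss terms of claims 3 and 4 can be illustrated with a minimal numpy sketch. `l1_loss` is the mean absolute error of formula (6); `make_adaptive_target` is a purely hypothetical stand-in for the ATG function G of claim 4 (a simple blend of prediction and ground truth, since the claims do not specify G's actual form):

```python
import numpy as np

def l1_loss(pred, target):
    """Formula (6)/(8): mean absolute difference over all N pixels."""
    return np.abs(pred - target).mean()

def make_adaptive_target(pred, hr, alpha=0.9):
    """Hypothetical stand-in for the ATG module of claim 4: blends the
    current prediction with the original high-resolution image. The
    patent only states the target is some function G(pred, hr)."""
    return alpha * hr + (1.0 - alpha) * pred

pred = np.array([[0.2, 0.4], [0.6, 0.8]])   # toy model output xSR
hr   = np.array([[0.0, 0.5], [0.5, 1.0]])   # toy ground truth xHR

base_loss = l1_loss(pred, hr)               # formula (6)
target = make_adaptive_target(pred, hr)     # formula (7), assumed G
atg_loss = l1_loss(pred, target)            # formula (8)
print(base_loss, atg_loss)                  # 0.15 0.135
```

Because the assumed blend pulls the target toward the prediction, the adaptive-target loss is strictly smaller than the plain L1 loss, which is the intuition behind continuing training on an easier, adaptive target.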
Description
Image super-resolution reconstruction method

Technical Field

The invention belongs to the technical field of image processing, and particularly relates to an image super-resolution reconstruction method.

Background

Image super-resolution reconstruction is one of the active research directions in computer vision and has wide application value in fields such as video surveillance, medical imaging and video perception. The goal is to reconstruct a low-resolution image into the desired high-resolution image through a specific algorithm. In many other computer vision tasks, such as image segmentation and object detection, recognition capability and accuracy can be further improved by image super-resolution reconstruction. With the development of deep learning in recent years, convolutional neural network methods have shown remarkable advantages in extracting image features and have made great progress on the image super-resolution task, and many efficient algorithms have been proposed for reconstructing images at different scales. However, existing network models still suffer from large parameter counts, severe reconstruction artifacts and poor resolution of local detail, so image super-resolution reconstruction remains a difficult research task.

Disclosure of Invention

In order to solve the problems in the prior art, the invention provides an image super-resolution reconstruction method capable of performing super-resolution reconstruction of images.
The technical problems to be solved by the invention are addressed by the following technical scheme. In a first aspect, an image super-resolution reconstruction method is provided, comprising: acquiring a training data set and performing augmentation on the training data set; constructing an image super-resolution reconstruction network model; training the constructed image super-resolution reconstruction network model with the augmented training data set; and inputting an image to be reconstructed into the trained image super-resolution reconstruction network model f to obtain a reconstructed super-resolution image. The image super-resolution reconstruction network model comprises a shallow feature extraction module, a non-local contrast enhancement residual group module, an up-sampling module and a reconstruction module. The shallow feature extraction module is used for extracting shallow features from the augmented training data set to obtain a shallow image feature map F0; the non-local contrast enhancement residual group module is used for obtaining the image depth feature output FDF from the shallow image feature map F0; the up-sampling module is used for up-sampling the image depth feature output FDF to obtain an up-sampled feature map, denoted F↑; the reconstruction module is configured to reconstruct the up-sampled feature map F↑ to obtain the reconstructed high-resolution image, denoted xSR.
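The four-module pipeline described above can be sketched in numpy to show the data flow F0 → FDF → F↑ → xSR. Every operator here is a placeholder assumption (a fixed mean-filter convolution for feature extraction, a plain residual add standing in for the residual group, nearest-neighbour up-sampling), not the patented modules themselves:

```python
import numpy as np

def conv3x3(x, w):
    """'Same' 3x3 convolution on a single-channel image with zero padding."""
    p = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (p[i:i + 3, j:j + 3] * w).sum()
    return out

def upsample_nearest(x, scale):
    """Nearest-neighbour up-sampling by an integer scale factor."""
    return np.repeat(np.repeat(x, scale, axis=0), scale, axis=1)

def reconstruct(x_lr, scale=2):
    w = np.full((3, 3), 1 / 9.0)          # placeholder "learned" weights
    f0 = conv3x3(x_lr, w)                 # shallow feature map F0
    fdf = f0 + conv3x3(f0, w)             # stand-in for the residual group: FDF
    f_up = upsample_nearest(fdf, scale)   # up-sampled feature map F↑
    return conv3x3(f_up, w)               # reconstructed high-resolution image xSR

x_lr = np.arange(16, dtype=float).reshape(4, 4)
x_sr = reconstruct(x_lr)
print(x_sr.shape)   # (8, 8): spatial size doubled by the scale factor
```

The skeleton only fixes the interfaces between the four modules; in the patented model each placeholder would be replaced by the corresponding learned sub-network.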
With reference to the first aspect, further, the image super-resolution reconstruction network model further comprises an adaptive target generation module, configured to generate an adaptive target and continue training the trained image super-resolution reconstruction network model f to obtain the final image super-resolution reconstruction network model F; the image to be reconstructed is input into the final image super-resolution reconstruction network model F to obtain the final high-resolution reconstructed image, denoted F(xLR). With reference to the first aspect, further, the non-local contrast enhancement residual group module comprises two non-local contrast attention modules and a second-order attention sharing source residual group module. One of the non-local contrast attention modules is used for obtaining a non-local attention feature map FNL from the shallow image feature map F0, as shown in formulas (1), (2) and (3):
Q = θ(F0), K = δ(F0), V = g(F0) (1)
D = Σj exp(Qi · Kj) (2)
FNL,i = (1/D) Σj (Φ(Qi) · Φ(Kj)) Vj (3)
where, with the magnification factor as a scaling factor, Q, K and V respectively denote three different mappings of the shallow image feature map F0; θ, δ and g are the corresponding feature transformation functions; Qi and Kj are the pixel features of the feature map at position i under the mapping Q and position j under the mapping K, respectively; Φ is an unbiased approximation function; and D is the normalization term of the softmax function. The second-order attention sharing source residual group module is used for extracting the image depth feature FG from the non-local attention feature map FNL, as shown in formula (4):
FG = WSSC ∗ Fg (4)
The second-order attention sharing source residual group module is formed by connecting a plurality of second-order attention sharing source residual modules in series, WSSC denotes the weights of the convolution layer, and Fg is the output of the g-th second-order attention sharing source residual module.
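Formulas (1)–(3) describe softmax attention over pixel positions, with Φ(Qi)·Φ(Kj) serving as an unbiased approximation of exp(Qi·Kj). The numpy sketch below computes the exact softmax form on a toy flattened feature map; the random linear maps standing in for θ, δ and g, and the 1/√c logit scaling for numerical stability, are assumptions for the illustration and are not specified in the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
n, c = 16, 8                          # n pixel positions, c channels

F0 = rng.standard_normal((n, c))                    # flattened shallow feature map F0
theta = rng.standard_normal((c, c)) / np.sqrt(c)    # assumed feature transform for Q
delta = rng.standard_normal((c, c)) / np.sqrt(c)    # assumed feature transform for K
g_map = rng.standard_normal((c, c)) / np.sqrt(c)    # assumed feature transform for V

Q, K, V = F0 @ theta, F0 @ delta, F0 @ g_map        # formula (1)
logits = (Q @ K.T) / np.sqrt(c)       # Qi . Kj for all i, j (scaled, assumed)
weights = np.exp(logits)
D = weights.sum(axis=1, keepdims=True)              # formula (2): softmax normalizer per i
F_nl = (weights / D) @ V                            # formula (3), exact softmax attention
print(F_nl.shape)                     # one attended c-dim feature per position
```

Replacing `np.exp(logits)` with a kernel feature map Φ applied separately to Q and K (so that Φ(Qi)·Φ(Kj) ≈ exp(Qi·Kj)) would turn this quadratic computation into the linear-complexity approximation the claim alludes to.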