CN-121998856-A - Single image rain removal method, system, medium and device based on self-supervised learning
Abstract
A rainy image from the rainy image domain is input into a generator to determine a generated clean image; the generator is trained with a first discriminator, using the adversarial loss of the first discriminator together with a contrastive loss of contrastive learning that maximizes mutual information. A predefined inverse function is applied to the generated clean image to determine the rain streak noise; a synthesized rainy image is then determined from a clean image of the target image domain and the rain streak noise, so that rainy/clean image pairs are synthesized online. The generator is further trained with a second discriminator, using the adversarial loss of the second discriminator, and finally trained on the rainy/clean image pairs with a preset overall objective function, achieving image rain removal. The application synthesizes rainy/clean image pairs online without supervision, improving the practicality of the trained rain removal model and its generalization ability across datasets.
Inventors
- SUN NINGYU
- YANG XIAOKANG
- Yan Diechao
- DUAN HUIYU
- FU KANG
Assignees
- Shanghai Jiao Tong University (上海交通大学)
Dates
- Publication Date
- 20260508
- Application Date
- 20260119
Claims (10)
- 1. A single image rain removal method based on self-supervised learning, characterized by comprising the following steps: inputting a rainy image of the rainy image domain into a generator and determining a generated clean image; training the generator with a first discriminator, based on a first adversarial loss and a contrastive loss of contrastive learning that maximizes mutual information, and determining a first-trained generator; applying a predefined inverse function to the generated clean image to determine rain streak noise; determining a synthesized rainy image from a clean image of the target image domain and the rain streak noise, and synthesizing a rainy/clean image pair online; training the first-trained generator with a second discriminator, based on a second adversarial loss, and determining a second-trained generator; performing self-supervised contrastive training on the second-trained generator with the rainy/clean image pair and a preset overall objective function, and determining a trained rain removal model; and inputting a rainy image to be processed into the trained rain removal model and determining a rain-removed image.
- 2. The self-supervised learning based single image rain removal method of claim 1, further comprising: defining a forward function: O = f(B, N); wherein O denotes a rainy image of the rainy image domain, B denotes the clean image after rain removal of that rainy image, N denotes the rain streak noise, and f denotes the predefined forward function; and defining an inverse function: N = f⁻¹(O, B); wherein O denotes a rainy image of the rainy image domain, B denotes the clean image after rain removal of that rainy image, N denotes the rain streak noise, and f⁻¹ denotes the predefined inverse function.
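Taken together with claim 5 (which sums the target-domain clean image with the rain streak noise), the forward and inverse functions are consistent with an additive rain model. A minimal sketch under that assumption follows; all function and variable names are illustrative, not from the patent:

```python
import numpy as np

# Minimal sketch of the predefined forward/inverse functions, assuming the
# additive rain model implied by claim 5 (rainy = clean + rain streaks).
# The names f_forward / f_inverse are illustrative.

def f_forward(clean, noise):
    """Forward function f: compose a rainy image from a clean image and rain streaks."""
    return clean + noise

def f_inverse(rainy, clean):
    """Inverse function f^-1: recover the rain streak noise from a rainy/clean pair."""
    return rainy - clean

clean = np.array([[0.2, 0.4], [0.6, 0.8]])   # toy clean image
noise = np.array([[0.3, 0.0], [0.1, 0.0]])   # toy rain streak layer
rainy = f_forward(clean, noise)
recovered = f_inverse(rainy, clean)
assert np.allclose(recovered, noise)  # the round trip is exact for the additive model
```

Under this additive assumption the inverse function is simply a subtraction, which is what makes the later noise-decoupling step in claim 4 cheap to apply online.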
- 3. The self-supervised learning based single image rain removal method of claim 1, wherein the first discriminator is configured to discriminate between a generated clean image and a real clean image; and wherein inputting a rainy image of the rainy image domain into a generator, determining a generated clean image, training the generator with the first discriminator based on the first adversarial loss and the contrastive loss of contrastive learning that maximizes mutual information, and determining the first-trained generator comprises: inputting a rainy image of the rainy image domain into the generator and determining a generated clean image; discriminating the generated clean image with the first discriminator, and if the first discriminator judges it to be a generated clean image, optimizing parameters of the generator according to the first adversarial loss; and optimizing parameters of the generator with the contrastive loss by maximizing mutual information between the rainy image of the rainy image domain and the generated clean image, thereby determining the first-trained generator.
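The contrastive loss that maximizes mutual information between the rainy input and the generated clean output can be sketched with a generic InfoNCE-style objective, in the spirit of patch-level contrastive learning; the patent's exact loss may differ, and all names below are illustrative:

```python
import numpy as np

# InfoNCE-style contrastive loss: pulling a query feature toward its positive
# while pushing it away from negatives maximizes a lower bound on mutual
# information. This is a generic sketch, not the patent's exact formulation.

def info_nce(query, positive, negatives, tau=0.07):
    """query, positive: (d,) feature vectors; negatives: (n, d). Returns a scalar loss."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(query, positive)] + [cos(query, n) for n in negatives]) / tau
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                     # the positive sits at index 0

rng = np.random.default_rng(0)
q = rng.normal(size=8)                           # e.g. a feature of a rainy patch
negatives = rng.normal(size=(4, 8))              # features of unrelated patches
loss_matched = info_nce(q, q, negatives)         # positive aligned with query: small loss
loss_random = info_nce(q, rng.normal(size=8), negatives)
```

Minimizing such a loss over corresponding rainy/clean patch features encourages the generator to preserve the content shared by its input and output while removing the rain streaks.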
- 4. The self-supervised learning based single image rain removal method of claim 1, wherein applying a predefined inverse function to the generated clean image to determine rain streak noise comprises: N = f⁻¹(O, B̂); wherein N denotes the rain streak noise, O denotes a rainy image of the rainy image domain, B̂ denotes the generated clean image, and f⁻¹ denotes the predefined inverse function.
- 5. The self-supervised learning based single image rain removal method of claim 2, wherein determining a synthesized rainy image from the clean image of the target image domain and the rain streak noise and synthesizing a rainy/clean image pair online comprises: randomly selecting a clean image of the target image domain; summing the clean image of the target image domain with the rain streak noise to determine the synthesized rainy image; and determining the rainy/clean image pair from the synthesized rainy image and the clean image of the target image domain used to synthesize it.
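The online synthesis step above can be sketched as follows, again assuming the additive model of claim 5; the clean-image pool, image sizes and names are illustrative placeholders:

```python
import numpy as np

# Online rainy/clean pair synthesis: randomly pick a clean image from the
# target domain and add the rain streak noise decoupled from a real rainy
# image. The resulting pair is pixel-aligned supervision created on the fly.

def synthesize_pair(target_clean_pool, rain_noise, rng):
    clean = target_clean_pool[rng.integers(len(target_clean_pool))]  # random clean image
    synthetic_rainy = clean + rain_noise         # sum clean image and rain streaks
    return synthetic_rainy, clean                # the online rainy/clean pair

rng = np.random.default_rng(42)
pool = [np.full((2, 2), v) for v in (0.2, 0.5, 0.8)]  # toy target-domain clean images
noise = np.array([[0.1, 0.0], [0.0, 0.3]])            # toy decoupled rain streaks
rainy, clean = synthesize_pair(pool, noise, rng)
assert np.allclose(rainy - clean, noise)  # the pair differs exactly by the rain streaks
```

Because the pair is synthesized from the model's own decoupled noise rather than from collected ground truth, no paired supervision is ever required.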
- 6. The self-supervised learning based single image rain removal method of claim 1, wherein the second discriminator is configured to distinguish between the synthesized rainy image and a real rainy image; and wherein training the first-trained generator with a second discriminator based on the adversarial loss of the second discriminator and determining a second-trained generator comprises: discriminating the synthesized rainy image with the second discriminator, and if the second discriminator judges it to be a synthesized rainy image, optimizing the parameters of the first-trained generator according to the adversarial loss of the second discriminator, thereby determining the second-trained generator.
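The adversarial signal from the second discriminator can be sketched with a standard non-saturating generator loss; this is one common choice, not necessarily the patent's exact formulation, and the names are illustrative:

```python
import math

# Non-saturating GAN loss for the generator: d_score is the second
# discriminator's estimated probability that the synthesized rainy image is a
# real rainy image. The loss is large when the discriminator spots the fake,
# pushing the generator toward more realistic synthesized rainy images.

def generator_adv_loss(d_score):
    """d_score in (0, 1): D2's probability that the synthesized image is real."""
    return -math.log(d_score)

hard_to_fool = generator_adv_loss(0.1)  # D2 is confident the image is fake: high loss
easy_to_fool = generator_adv_loss(0.9)  # D2 is nearly fooled: low loss
assert hard_to_fool > easy_to_fool
```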
- 7. The self-supervised learning based single image rain removal method of claim 1, wherein performing self-supervised contrastive training on the second-trained generator with the rainy/clean image pair and a preset overall objective function and determining a trained rain removal model comprises: inputting the synthesized rainy image of the rainy/clean image pair into the second-trained generator and determining a generated clean image corresponding to the synthesized rainy image; determining an overall target loss with the preset overall objective function from the generated clean image corresponding to the synthesized rainy image and the clean image of the target image domain in the rainy/clean image pair; and optimizing parameters of the second-trained generator according to the overall target loss, thereby determining the trained rain removal model; wherein the preset overall objective function is: L_total(G, D1, D2) = λ1·L_adv1(G, D1) + λ2·L_adv2(G, D2) + λ3·L_patchNCE + λ4·L_identity + λ5·L_selfreg; wherein G denotes the generator, D1 denotes the first discriminator, D2 denotes the second discriminator, L_total denotes the overall target loss, L_adv1 denotes the adversarial loss between the first discriminator and the generator, L_adv2 denotes the adversarial loss between the second discriminator and the generator, L_patchNCE denotes the patch-level contrastive loss, L_identity denotes the identity preservation loss, L_selfreg denotes the self-regularization loss, λ1 denotes the weight coefficient of the adversarial loss between the first discriminator and the generator, λ2 denotes the weight coefficient of the adversarial loss between the second discriminator and the generator, λ3 denotes the weight coefficient of the patch-level contrastive loss, λ4 denotes the weight coefficient of the identity preservation loss, and λ5 denotes the weight coefficient of the self-regularization loss.
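The preset overall objective reduces to a weighted sum of the five named losses. A sketch follows; the weight values and loss inputs below are placeholders, since the claim fixes neither the weights nor the individual loss definitions:

```python
# Overall objective: weighted sum of the two adversarial losses, the
# patch-level contrastive loss, the identity preservation loss, and the
# self-regularization loss. All numeric values are illustrative placeholders.

def overall_objective(l_adv1, l_adv2, l_patch_nce, l_identity, l_self_reg,
                      lam1=1.0, lam2=1.0, lam3=1.0, lam4=10.0, lam5=1.0):
    return (lam1 * l_adv1 + lam2 * l_adv2 + lam3 * l_patch_nce
            + lam4 * l_identity + lam5 * l_self_reg)

# Placeholder loss values for one training step:
total = overall_objective(0.5, 0.4, 1.2, 0.05, 0.1)
```

In practice the weight coefficients λ1-λ5 are hyperparameters that balance realism (adversarial terms) against content fidelity (contrastive, identity, and self-regularization terms).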
- 8. A single image rain removal system based on self-supervised learning, comprising: a generator first-training module, configured to input a rainy image of the rainy image domain into a generator, determine a generated clean image, train the generator with a first discriminator based on a first adversarial loss and a contrastive loss of contrastive learning that maximizes mutual information, and determine a first-trained generator; a rainy image decoupling module, configured to apply a predefined inverse function to the generated clean image and determine rain streak noise; a rainy/clean image pair generating module, configured to determine a synthesized rainy image from the clean image of the target image domain and the rain streak noise and synthesize the rainy/clean image pair online; a generator second-training module, configured to train the first-trained generator with a second discriminator based on a second adversarial loss and determine a second-trained generator; a rain removal model self-supervised contrastive training module, configured to perform self-supervised contrastive training on the second-trained generator with the rainy/clean image pair and a preset overall objective function, and determine a trained rain removal model; and a rain removal processing module, configured to input a rainy image to be processed into the trained rain removal model and determine a rain-removed image.
- 9. A non-transitory computer-readable storage medium having a computer program stored thereon, characterized in that the program, when executed by a processor, implements the steps of the method according to any one of claims 1-7.
- 10. An electronic device, comprising: a memory having a computer program stored thereon; and a processor for executing the computer program in the memory to implement the steps of the method of any one of claims 1-7.
Description
Single image rain removal method, system, medium and device based on self-supervised learning

Technical Field

The application relates to the technical field of image processing, and in particular to a method, system, medium and device for removing rain from a single image based on self-supervised learning.

Background

Many unpredictable image degradation factors (e.g., noise, illumination variations, and bad weather conditions such as rain, fog, and snow) can adversely affect downstream tasks (e.g., object detection and image segmentation). To improve the overall perceived quality of these degraded images and further enhance the performance of downstream visual tasks, it is necessary to remove these degradation artifacts automatically while preserving the original image information as much as possible. Single image rain removal (Single Image Rain Removal, SIRR), also known as single image deraining, benefits such downstream tasks. Conventional image rain removal methods mainly use linear transformation models to remove rain streaks, but such methods lack robustness to different types of rain (e.g., different shapes, directions, etc.). Some early work treated rain streaks as high-frequency noise and solved the problem with low-rank constraints or sparse coding. However, such prior-based conventional methods have two main limitations: slow test speed, and a tendency either to leave rain streaks in the results or to over-smooth image details. Deep learning has been widely applied to the SIRR task and has achieved state-of-the-art performance. However, these methods often lack a rich variety of rainy/clean image pairs for training and are therefore limited in versatility and practicality. In recent years, a number of deep-learning-based single image rain removal methods have been proposed in succession.
Fu et al. first introduced deep learning into this task, achieving rain removal by training a three-layer convolutional neural network on the high-frequency component of an image. Yang et al. proposed a sequential process model that successively detects, estimates and removes rain streaks. Zhang et al. combined a generative adversarial network (GAN) mechanism with a perceptual loss to enhance single image rain removal performance. A density-aware multi-stream dense network that jointly estimates rain density and removes rain has also been presented. Jiang et al. then proposed a Multi-Scale Progressive Fusion Network (MSPFN) for mining cross-scale rain streak correlations. However, these supervised single image rain removal methods rely heavily on paired rainy/clean training data, which are difficult to acquire in real scenes. Although rainy images can be synthesized by realistic rendering techniques, the combinations of rain types and image content remain limited, which hurts the generalization ability of the model in real, complex scenes and leads to poor cross-dataset generalization. In recent years, unsupervised image restoration via domain migration has attracted attention. In the unpaired translation setting, cycle consistency has become the standard way to enforce image correspondence, by learning the inverse mapping from the target domain back to the input domain and checking whether the input can be reconstructed. Cycle consistency is used not only between two image domains (e.g., CycleGAN and DualGAN), but also between an image domain and a latent space (e.g., UNIT and MUNIT). However, these methods have difficulty capturing the complex transformation from degraded images to clean images.
In addition to these generic image-to-image translation models, some studies have explored unsupervised learning approaches for specific image restoration tasks. Yuan et al. proposed a nested CycleGAN model for unsupervised image super-resolution. Du et al. introduced disentangled representation learning into the image denoising task. However, directly applying cycle-consistency-based image domain migration methods to the image rain removal task gives unsatisfactory results: these methods implicitly assume a one-to-one correspondence between rainy images and clean images, whereas in reality one clean scene may correspond to many different rain conditions. Recently, unsupervised unpaired image-to-image translation methods have become an important research direction for image deraining, since they do not depend on any paired rainy/clean images; but directly applying them to the single image deraining task can cause loss of detail or inconsistent backgrounds, limiting their practical applicability, in particular because, lacking paired supervision, they cannot accurately distinguish subt