CN-115661642-B - Image processing method, model training method, device and medium

CN115661642B

Abstract

Embodiments of the disclosure disclose an image processing method, a model training method, a device and a medium. The method includes: acquiring an image to be processed; inputting the image to be processed into a first target image processing model to obtain a first target feature image output by the first target image processing model; subtracting the first target feature image from the image to be processed to obtain a target pre-processed image; inputting the target pre-processed image into a second target image processing model to obtain a second target feature image output by the second target image processing model; and adding the first target feature image and the second target feature image to obtain a target image. A target image extracted by this technical scheme can include both the primary features and the secondary features of the image to be processed, so that the image quality of the target image meets requirements, the success rate of image recognition based on the processed image is improved, and the user experience is improved.

Inventors

  • GUO FENG

Assignees

  • 元气森林(北京)食品科技集团有限公司 (Genki Forest (Beijing) Food Technology Group Co., Ltd.)

Dates

Publication Date
2026-05-05
Application Date
2022-10-20

Claims (9)

  1. An image processing method, the method comprising: acquiring an image to be processed; inputting the image to be processed into a pre-trained first target image processing model to obtain a first target feature image output by the first target image processing model; subtracting the first target feature image from the image to be processed to obtain a target pre-processed image; inputting the target pre-processed image into a pre-trained second target image processing model to obtain a second target feature image output by the second target image processing model; training the first target image processing model and the second target image processing model by: acquiring a training output image, and performing overexposure processing or underexposure processing on the training output image to obtain a training input image; training a first image processing model and a second image processing model by taking the training input image as the input of the first image processing model and taking the training output image as the sum of a first feature image and a second feature image, wherein the output of the first image processing model is the first feature image, the input of the second image processing model is a pre-processed image obtained by subtracting the first feature image from the training input image, and the output of the second image processing model is the second feature image; wherein training the first image processing model and the second image processing model with the training input image as the input of the first image processing model and the training output image as the sum of the first feature image and the second feature image includes: taking the training input image as the input of the first image processing model and the training output image as the sum of the first feature image and the second feature image, and acquiring an error matrix M_b according to M_b = |M_o - M_i|, wherein M_o is the training output matrix corresponding to the training output image and M_i is the training input matrix corresponding to the training input image; obtaining a training loss according to loss = Σ K_xyz / (255·N), wherein K_xyz is the element with coordinates (x, y, z) in the error matrix M_b and N = max(x)·max(y)·max(z); training the first image processing model and the second image processing model according to the training loss; determining the first image processing model as the first target image processing model and the second image processing model as the second target image processing model in response to convergence of both the first image processing model and the second image processing model; and adding the first target feature image and the second target feature image to obtain a target image.
  2. The image processing method according to claim 1, wherein acquiring the image to be processed includes: acquiring at least one acquired image acquired by an image acquisition device in a showcase; and determining the image to be processed from the at least one acquired image in response to the result of payment processing according to the at least one acquired image being a payment failure.
  3. The image processing method according to claim 2, wherein determining the image to be processed from the at least one acquired image in response to the result of payment processing according to the at least one acquired image being a payment failure comprises: determining an overexposed image and/or an underexposed image in the at least one acquired image as the image to be processed in response to the result of payment processing according to the at least one acquired image being a payment failure.
  4. The image processing method according to claim 1, wherein the first target image processing model and the second target image processing model are each a U-Net model.
  5. The image processing method according to claim 1, wherein acquiring the training output image includes: acquiring at least one acquired image acquired by an image acquisition device in a showcase; and determining the training output image from the at least one acquired image in response to the result of payment processing according to the at least one acquired image being a payment success.
  6. The image processing method according to claim 5, wherein determining the training output image from the at least one acquired image in response to the result of payment processing according to the at least one acquired image being a payment success comprises: determining a clear image in the at least one acquired image as the training output image in response to the result of payment processing according to the at least one acquired image being a payment success.
  7. An image processing apparatus, comprising: an image acquisition module configured to acquire an image to be processed; a first feature image acquisition module configured to input the image to be processed into a pre-trained first target image processing model to acquire a first target feature image output by the first target image processing model; a target pre-processed image acquisition module configured to subtract the first target feature image from the image to be processed to acquire a target pre-processed image; a second feature image acquisition module configured to input the target pre-processed image into a pre-trained second target image processing model to acquire a second target feature image output by the second target image processing model; and a target image acquisition module configured to add the first target feature image and the second target feature image to obtain a target image. The image processing apparatus further includes: a training image acquisition module configured to acquire a training output image and perform overexposure processing or underexposure processing on the training output image to acquire a training input image; a model training module configured to train a first image processing model and a second image processing model with the training input image as the input of the first image processing model and the training output image as the sum of a first feature image and a second feature image, wherein the output of the first image processing model is the first feature image, the input of the second image processing model is a pre-processed image obtained by subtracting the first feature image from the training input image, and the output of the second image processing model is the second feature image; and a model determination module configured to determine the first image processing model as the first target image processing model and the second image processing model as the second target image processing model in response to convergence of both the first image processing model and the second image processing model. The model training module is specifically configured to, when training the first image processing model and the second image processing model with the training input image as the input of the first image processing model and the training output image as the sum of the first feature image and the second feature image: take the training input image as the input of the first image processing model and the training output image as the sum of the first feature image and the second feature image, and obtain an error matrix M_b according to M_b = |M_o - M_i|, wherein M_o is the training output matrix corresponding to the training output image and M_i is the training input matrix corresponding to the training input image; obtain a training loss according to loss = Σ K_xyz / (255·N), wherein K_xyz is the element with coordinates (x, y, z) in the error matrix M_b and N = max(x)·max(y)·max(z); and train the first image processing model and the second image processing model according to the training loss.
  8. An electronic device comprising a memory and at least one processor, wherein the memory is configured to store one or more computer instructions, and the one or more computer instructions are executed by the at least one processor to implement the method steps of any one of claims 1 to 6.
  9. A computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the method steps of any one of claims 1 to 6.

Description

Image processing method, model training method, device and medium

Technical Field

The disclosure relates to the technical field of image processing, in particular to an image processing method, a model training method, a device and a medium.

Background

In recent years, when a merchant or an enterprise stores articles, the articles may be placed in a showcase in order to make it easy for users to learn information about them, so that the articles are stored and displayed at the same time. When a user needs an article in the showcase, the user can open the showcase and take out the corresponding article. In such a scenario, articles removed from the showcase may be identified by the showcase and the identification result uploaded, so that other devices or systems, such as servers or clouds, can determine, based on the identification result uploaded by the showcase, which user removed articles from the showcase or which articles were removed, and make corresponding statistics.

Disclosure of Invention

Embodiments of the disclosure provide an image processing method, a model training method, a device and a medium, which are used to solve the problem of poor showcase image processing performance in the related art. In a first aspect, an embodiment of the present disclosure provides an image processing method.
Specifically, the image processing method includes: acquiring an image to be processed; inputting the image to be processed into a pre-trained first target image processing model to obtain a first target feature image output by the first target image processing model; subtracting the first target feature image from the image to be processed to obtain a target pre-processed image; inputting the target pre-processed image into a pre-trained second target image processing model to obtain a second target feature image output by the second target image processing model; and adding the first target feature image and the second target feature image to obtain a target image. In one implementation of the present disclosure, acquiring the image to be processed includes: acquiring at least one acquired image acquired by an image acquisition device in a showcase; and determining the image to be processed from the at least one acquired image in response to the result of payment processing according to the at least one acquired image being a payment failure. In one implementation of the present disclosure, determining the image to be processed from the at least one acquired image in response to the payment failure includes: determining an overexposed image and/or an underexposed image in the at least one acquired image as the image to be processed. In one implementation of the present disclosure, the first target image processing model and the second target image processing model are both U-Net models. In a second aspect, an embodiment of the present disclosure provides a model training method.
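The two-stage residual pipeline described above can be sketched as follows. This is a minimal NumPy illustration; `model_a` and `model_b` are hypothetical stand-ins for the pre-trained first and second target image processing models (e.g. two U-Net forward passes), and the toy lambdas are not part of the patent.

```python
import numpy as np

def residual_two_stage_process(image, model_a, model_b):
    """Two-stage residual pipeline from the first aspect of the disclosure.

    model_a / model_b are hypothetical stand-ins for the pre-trained
    first and second target image processing models; each maps an
    H x W x C float array to an array of the same shape.
    """
    feature_1 = model_a(image)          # first target feature image
    pre_processed = image - feature_1   # target pre-processed image
    feature_2 = model_b(pre_processed)  # second target feature image
    return feature_1 + feature_2        # target image

# Toy stand-ins: model_a keeps half the signal, model_b passes the
# residual through unchanged, so the two features sum back to the input.
img = np.random.rand(4, 4, 3)
out = residual_two_stage_process(img, lambda x: 0.5 * x, lambda x: x)
assert np.allclose(out, img)
```

With real models, `feature_1` would carry the primary features and `feature_2` the secondary features recovered from the residual, so their sum is the reconstructed target image.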
Specifically, the model training method comprises the following steps: acquiring a training output image, and performing overexposure processing or underexposure processing on the training output image to acquire a training input image; training a first image processing model and a second image processing model by taking the training input image as the input of the first image processing model and taking the training output image as the sum of a first feature image and a second feature image, wherein the output of the first image processing model is the first feature image, the input of the second image processing model is a pre-processed image obtained by subtracting the first feature image from the training input image, and the output of the second image processing model is the second feature image; and, in response to convergence of both the first image processing model and the second image processing model, determining the first image processing model as the first target image processing model and the second image processing model as the second target image processing model. In one implementation of the present disclosure, training the first image processing model and the second image processing model with the training input image as the input of the first image processing model and the training output image as the sum of the first feature image and the second feature image includes: taking the training input image as the input of the first image processing model and the training output image as the sum of the first feature image and the second feature image, and acquiring an error matrix M_b according to M_b = |M_o - M_i|, wherein M_o is the training output matrix corresponding to the tr
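The error-matrix loss defined in the claims (M_b = |M_o - M_i|, loss = Σ K_xyz / (255·N)) can be sketched in NumPy as follows. This is a minimal reading of the formula, assuming that N = max(x)·max(y)·max(z) is simply the element count of the matrix and that pixel values are 8-bit; the function and variable names are illustrative, not from the patent.

```python
import numpy as np

def training_loss(m_o, m_i):
    """Loss from the claims: loss = sum(K_xyz) / (255 * N).

    m_o, m_i: H x W x C arrays (the training output matrix and training
    input matrix as named in the claims). Under the assumption that
    N = max(x) * max(y) * max(z) counts the elements of the matrix, the
    loss is the mean absolute error scaled into [0, 1] for 8-bit pixels.
    """
    # Error matrix M_b = |M_o - M_i|, computed in float to avoid
    # uint8 wrap-around on subtraction.
    m_b = np.abs(m_o.astype(np.float64) - m_i.astype(np.float64))
    n = m_b.size  # N = max(x) * max(y) * max(z)
    return m_b.sum() / (255.0 * n)

# Maximal disagreement between white and black images gives loss 1.0.
out = np.full((2, 2, 3), 255, dtype=np.uint8)
inp = np.zeros((2, 2, 3), dtype=np.uint8)
print(training_loss(out, inp))  # prints 1.0
```

Note that the claims state the loss over the training output and input matrices; in a practical training loop one would presumably compare the model's reconstruction against the training output image in the same way.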