
CN-121982122-A - Model processing method, portrait generating method and related products

CN 121982122 A

Abstract

Embodiments of this application disclose a model processing method, a portrait generation method, and related products. The model processing method comprises: obtaining a sample data set of a sample object, the sample data set comprising a sample image, a sample reference image, and a sample portrait; inputting the sample data set into a model for image processing to obtain a predicted portrait of the sample object, wherein the image processing comprises performing feature extraction on the sample image and the sample reference image to obtain face features and hairstyle features of the sample object, generating a reference portrait of the sample object according to the face features and the hairstyle features, determining sample portrait features of the sample object according to the reference portrait, the face features, and the hairstyle features, and generating the predicted portrait of the sample object according to the sample portrait features; and adjusting the model parameters based on the sample image, the predicted portrait, the sample portrait, and the reference portrait. With this application, a well-performing model with portrait generation capability can be trained, and the trained model can generate clear and natural portraits.

Inventors

  • CHEN SHENG

Assignees

  • 马上消费金融股份有限公司 (Mashang Consumer Finance Co., Ltd.)

Dates

Publication Date
2026-05-05
Application Date
2024-10-30

Claims (12)

  1. A model processing method, comprising: obtaining a sample data set of a sample object, wherein the sample data set comprises a sample image, a sample reference image and a sample portrait, the sample reference image comprises a background area image, a face area image and a hairstyle area image, and the sample image is obtained by removing the hairstyle area image from the sample reference image; inputting the sample data set into a first model for image processing to obtain a predicted portrait of the sample object, wherein the image processing comprises: performing feature extraction on the sample image and the sample reference image to obtain a first face feature and a first hairstyle feature of the sample object; generating a reference portrait of the sample object according to the first face feature and the first hairstyle feature; determining a sample portrait feature of the sample object according to the reference portrait, the first face feature and the first hairstyle feature; and generating the predicted portrait according to the sample portrait feature; and adjusting model parameters of the first model according to the sample image, the predicted portrait, the sample portrait and the reference portrait to obtain a second model.
  2. The method of claim 1, wherein generating the reference portrait of the sample object according to the first face feature and the first hairstyle feature comprises: performing dimension reduction on the first face feature and the first hairstyle feature to obtain a second face feature and a second hairstyle feature of the sample object; and generating the reference portrait of the sample object according to the second face feature and the second hairstyle feature.
  3. The method of claim 2, wherein determining the sample portrait feature of the sample object according to the reference portrait, the first face feature and the first hairstyle feature comprises: performing feature extraction on the reference portrait to obtain a first portrait feature corresponding to the reference portrait; performing feature fusion on the second face feature, the second hairstyle feature and the first face feature to obtain a second portrait feature of the sample object; and performing feature fusion on the first portrait feature and the second portrait feature to obtain the sample portrait feature.
  4. The method of claim 2, wherein adjusting the model parameters of the first model according to the sample image, the predicted portrait, the sample portrait and the reference portrait comprises: determining a first loss value, a second loss value and a third loss value of the first model according to the predicted portrait and the sample portrait, wherein the first loss value represents a first degree of difference between the predicted portrait and the sample portrait, the second loss value represents the degree of realism of image information in the predicted portrait, and the third loss value represents a second degree of difference between face features in the predicted portrait and in the sample portrait; determining a fourth loss value of the first model according to the sample image and the reference portrait, wherein the fourth loss value represents a third degree of difference between face features in the sample image and in the reference portrait; and determining a model loss value of the first model according to the first, second, third and fourth loss values, and adjusting the model parameters based on the model loss value.
  5. The method of claim 1, wherein the first model comprises a first feature extraction module and a second feature extraction module, and performing feature extraction on the sample image and the sample reference image to obtain the first face feature and the first hairstyle feature of the sample object comprises: performing feature extraction on the sample image through the first feature extraction module to obtain the first face feature of the sample object; performing feature extraction on the sample reference image through the second feature extraction module to obtain a reference image feature of the sample object; and processing the first face feature and the reference image feature through the first feature extraction module to obtain the first hairstyle feature.
  6. A portrait generation method, comprising: obtaining an image data set of a first object, wherein the image data set comprises a first image and a first reference image, the first reference image comprises a background area image, a face area image and a hairstyle area image, and the first image is obtained by removing the hairstyle area image from the first reference image; and inputting the image data set into a second model for image processing to obtain a portrait of the first object, wherein the image processing comprises: performing feature extraction on the first image and the first reference image to obtain a face feature and a hairstyle feature of the first object; determining a portrait feature of the first object according to the face feature and the hairstyle feature; and generating the portrait of the first object according to the portrait feature.
  7. The method of claim 6, wherein the second model comprises a first feature extraction module and a second feature extraction module, and performing feature extraction on the first image and the first reference image to obtain the face feature and the hairstyle feature of the first object, and determining the portrait feature of the first object according to the face feature and the hairstyle feature, comprises: performing feature extraction on the first image through the first feature extraction module to obtain the face feature of the first object; performing feature extraction on the first reference image through the second feature extraction module to obtain the hairstyle feature of the first object; and fusing the face feature and the hairstyle feature to obtain the portrait feature of the first object.
  8. A model processing apparatus, comprising: an acquisition module, configured to acquire a sample data set of a sample object, wherein the sample data set comprises a sample image, a sample reference image and a sample portrait, the sample reference image comprises a background area image, a face area image and a hairstyle area image, and the sample image is obtained by removing the hairstyle area image from the sample reference image; an image processing module, configured to input the sample data set into a first model to obtain a predicted portrait of the sample object, wherein the image processing comprises: performing feature extraction on the sample image and the sample reference image to obtain a first face feature and a first hairstyle feature of the sample object, generating a reference portrait of the sample object according to the first face feature and the first hairstyle feature, determining a sample portrait feature of the sample object according to the reference portrait, the first face feature and the first hairstyle feature, and generating the predicted portrait according to the sample portrait feature; and a model training module, configured to adjust model parameters of the first model according to the sample image, the predicted portrait, the sample portrait and the reference portrait to obtain a second model.
  9. A portrait generation device, comprising: a second acquisition module, configured to acquire an image data set of a first object, wherein the image data set comprises a first image and a first reference image, the first reference image comprises a background area image, a face area image and a hairstyle area image, and the first image is obtained by removing the hairstyle area image from the first reference image; and a second processing module, configured to input the image data set into a second model for image processing to obtain a portrait of the first object, wherein the image processing comprises: performing feature extraction on the first image and the first reference image to obtain a face feature and a hairstyle feature of the first object, determining a portrait feature of the first object according to the face feature and the hairstyle feature, and generating the portrait of the first object according to the portrait feature.
  10. An electronic device, comprising a processor and a memory electrically connected to the processor, the memory storing a computer program, wherein the processor is configured to call and execute the computer program from the memory to implement the model processing method of any one of claims 1-5, or to implement the portrait generation method of any one of claims 6-7.
  11. A computer-readable storage medium storing a computer program executable by a processor to implement the model processing method of any one of claims 1-5 or the portrait generation method of any one of claims 6-7.
  12. A computer program product comprising a computer program to be executed by a processor to implement the model processing method of any one of claims 1-5 or the portrait generation method of any one of claims 6-7.
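To make claim 4's four-part loss concrete, here is an illustrative numpy sketch, not the patent's actual formulation: the first loss as a pixel-level L1 difference, the second as a placeholder adversarial-style realism term (the discriminator producing `disc_score` is out of scope), and the third and fourth as cosine distances between face features. The loss weights and all function names are assumptions.

```python
import numpy as np

def l1_loss(pred, target):
    """First loss: degree of difference between predicted and sample portrait."""
    return np.mean(np.abs(pred - target))

def realism_loss(disc_score):
    """Second loss: degree of realism of the predicted portrait, assuming a
    discriminator score in (0, 1] where 1 means 'looks real'."""
    return -np.log(disc_score + 1e-8)

def face_feature_loss(feat_a, feat_b):
    """Third/fourth loss: cosine distance between two face-feature vectors."""
    cos = feat_a @ feat_b / (np.linalg.norm(feat_a) * np.linalg.norm(feat_b) + 1e-8)
    return 1.0 - cos

def model_loss(pred, sample_portrait, disc_score,
               pred_face, sample_face, image_face, ref_face,
               weights=(1.0, 0.1, 0.5, 0.5)):
    """Combine the four loss values into the model loss value of claim 4.
    The weights are arbitrary illustrative choices."""
    w1, w2, w3, w4 = weights
    return (w1 * l1_loss(pred, sample_portrait)          # predicted vs. sample portrait
            + w2 * realism_loss(disc_score)              # realism of predicted portrait
            + w3 * face_feature_loss(pred_face, sample_face)  # faces: predicted vs. sample
            + w4 * face_feature_loss(image_face, ref_face))   # faces: sample image vs. reference portrait
```

In the patent's scheme, the scalar returned by `model_loss` would drive the parameter update of the first model; the update rule itself (e.g., gradient descent) is not specified in the claims.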

Description

Model processing method, portrait generating method and related products

Technical Field

This application relates to the technical fields of artificial intelligence and image processing, and in particular to a model processing method, a portrait generation method and related products.

Background

In image processing, portrait segmentation has long been a popular problem. Distinguishing the person from the background at the pixel level is a classic portrait segmentation task with wide application. In general, portrait segmentation tasks fall into two categories: segmentation of whole-body and half-body images, referred to as general portrait segmentation, and segmentation of half-body images, referred to as portrait segmentation. Portrait segmentation technology is widely deployed on the internet, mobile phones and edge devices, so it must achieve very high inference speed while maintaining segmentation accuracy. Balancing accuracy and speed at complex, rapidly changing portrait edges remains an extremely challenging task in portrait segmentation.

Disclosure of Invention

Embodiments of this application aim to provide a model processing method, a portrait generation method and related products, to solve the prior-art problem of unnatural, unclear portraits caused by hard segmentation edges and low portrait definition.
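Throughout the claims, the sample image is described as the sample reference image with its hairstyle area image removed. As a toy illustration of that data-preparation step (zeroing out the masked region is one plausible reading of "removing"; the patent does not specify the mechanism, and all values here are invented):

```python
import numpy as np

# Toy 4x4 "sample reference image" and a boolean hairstyle-area mask.
reference = np.arange(16, dtype=float).reshape(4, 4)
hair_mask = np.zeros((4, 4), dtype=bool)
hair_mask[0, :] = True  # pretend the top row is the hairstyle area

# Sample image = reference image with the hairstyle area removed (zeroed).
sample = reference.copy()
sample[hair_mask] = 0.0
```

The background and face areas are left untouched, so the model must reconstruct the hairstyle from the reference image's features, which is the training signal the method exploits.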
In order to solve the above technical problems, embodiments of this application are realized as follows. In one aspect, an embodiment of this application provides a model processing method, comprising: obtaining a sample data set of a sample object, wherein the sample data set comprises a sample image, a sample reference image and a sample portrait, the sample reference image comprises a background area image, a face area image and a hairstyle area image, and the sample image is obtained by removing the hairstyle area image from the sample reference image; inputting the sample data set into a first model for image processing to obtain a predicted portrait of the sample object, wherein the image processing comprises performing feature extraction on the sample image and the sample reference image to obtain a first face feature and a first hairstyle feature of the sample object, generating a reference portrait of the sample object according to the first face feature and the first hairstyle feature, determining a sample portrait feature of the sample object according to the reference portrait, the first face feature and the first hairstyle feature, and generating the predicted portrait according to the sample portrait feature; and adjusting model parameters of the first model according to the sample image, the predicted portrait, the sample portrait and the reference portrait to obtain a second model.
In another aspect, an embodiment of this application provides a portrait generation method, comprising: obtaining an image data set of a first object, wherein the image data set comprises a first image and a first reference image, the first reference image comprises a background area image, a face area image and a hairstyle area image, and the first image is obtained by removing the hairstyle area image from the first reference image; and inputting the image data set into a second model for image processing to obtain a portrait of the first object, wherein the image processing comprises performing feature extraction on the first image and the first reference image to obtain a face feature and a hairstyle feature of the first object, determining a portrait feature of the first object according to the face feature and the hairstyle feature, and generating the portrait of the first object according to the portrait feature. In still another aspect, an embodiment of this application provides a model processing apparatus, comprising: an acquisition module, configured to acquire a sample data set of a sample object, wherein the sample data set comprises a sample image, a sample reference image and a sample portrait, the sample reference image comprises a background area image, a face area image and a hairstyle area image, and the sample image is obtained by removing the hairstyle area image from the sample reference image; an image processing module, configured to input the sample data set into a first model to obtain a predicted portrait of the sample object, wherein the image processing comprises performing feature extraction on the sample image and the sample reference image to obtain a first face feature and a first hairstyle feature of the sample object, generating a reference portrait of the sample object according to the first face feature and the first hairstyle feature