
CN-116777854-B - Face image-based makeup evaluation method

CN 116777854 B

Abstract

The invention relates to a face image-based makeup evaluation method comprising the steps of: obtaining a face image; dividing the face image into regions to obtain a plurality of region images; processing the skin part of each region image to obtain color factors, uniformity factors, and flaw factors; inputting the color, uniformity, and flaw factors of each region image into a decision model to obtain a makeup score for each region image; and assigning a weight to each region and obtaining the final makeup score by weighted summation of the region-image makeup scores. The invention can objectively and accurately analyze the makeup condition of different subjects.

Inventors

  • SUN HUA
  • JIANG YANWEN
  • JIN GANG
  • QIU YUCHEN

Assignees

  • 上海复硕正态质量技术服务有限公司

Dates

Publication Date
2026-05-08
Application Date
2023-06-12

Claims (5)

  1. A face image-based makeup evaluation method, characterized by comprising the following steps:
     acquiring a face image and dividing it into regions to obtain a plurality of region images;
     processing the skin part of each region image to obtain a color-class factor, a uniformity factor, and a flaw-class factor, wherein the color-class factor comprises a hue factor, a saturation factor, a brightness factor, and an ITA factor, the uniformity factor comprises a contrast value and a gray-value standard deviation, and the flaw-class factor comprises a flaw area; the processing specifically comprises:
     converting the region image into the HSV color space, taking the quartiles of each channel's values, and using the interquartile range Q3 − Q1 as the hue factor, saturation factor, and brightness factor;
     converting the region image into the Lab color space and obtaining the ITA factor by computing ITA° = arctan((L − 50) / b) × 180/π, where L and b are parameters of the Lab color space;
     converting the region image into a grayscale image, applying the Laplacian operator to it, and taking the variance of the resulting image as the contrast value;
     converting the region image into a grayscale image, taking the pixels on its diagonal, and taking the standard deviation of their values as the gray-value standard deviation;
     converting the region image into a grayscale image, applying Gaussian blur to it, subtracting the grayscale image from the blurred image and adding 127 to obtain a flaw distribution image, and taking the pixel area of the flaw distribution image as the flaw area;
     inputting the color-class factor, uniformity factor, and flaw-class factor of each region image into a decision model to obtain the makeup score of each region image; and
     assigning a weight to each region and obtaining the final makeup score by weighted summation of the region-image makeup scores.
  2. The face image-based makeup evaluation method according to claim 1, wherein the region division divides the face image into a forehead region image, a cheek region image, and a sub-eye region image.
  3. The face image-based makeup evaluation method according to claim 1, wherein, when the makeup appearance after makeup removal is evaluated, the color-class factor comprises a hue standard deviation, the uniformity factor comprises a texture area and a gray-value standard deviation, and the flaw-class factor comprises a reddish-skin area, a pore area, an acne-mark area, and a color-spot area.
  4. The face image-based makeup evaluation method according to claim 3, wherein the processing of the skin part of each region image to obtain the color-class factor, uniformity factor, and flaw-class factor specifically comprises:
     converting the region image into the HSV color space and taking the standard deviation of the H-channel values as the hue standard deviation;
     converting the region image into a grayscale image, applying Gaussian blur, subtracting the grayscale image from the blurred image and adding 127 to obtain a flaw distribution image, and processing the flaw distribution image again by threshold segmentation to obtain a texture image, the pixel area of which is taken as the texture area;
     converting the region image into a grayscale image, taking the pixels on its diagonal, and taking the standard deviation of their values as the gray-value standard deviation;
     correcting the color of the region image with a two-dimensional color lookup table, applying a nonlinear transformation to the color-corrected image, extracting the red part of the image by thresholding, and taking the pixel area of the red part as the reddish-skin area;
     converting the region image into a grayscale image, identifying black closed regions in it with OpenCV's blob detector, and summing the pixels of the black closed regions to obtain the pore area;
     converting the region image into the Lab color space, applying high-contrast processing to the a-channel image, performing binary segmentation by thresholding to obtain an image in which acne marks are highlighted, and computing the acne-mark area with the Canny operator;
     splitting the region image into its R, G, and B channels, computing |R − B| + |R − G| as the result image, locating the color spots with the Canny operator, and summing their pixels to obtain the color-spot area.
  5. The face image-based makeup evaluation method according to claim 1, wherein the decision model is a decision tree model.
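The scoring and weighting step in claim 1 can be sketched in Python as follows; the region names follow claim 2, while the specific weights and score values are illustrative assumptions, since the patent discloses neither:

```python
def final_makeup_score(region_scores, region_weights):
    """Weighted sum of per-region makeup scores (claim 1, final step)."""
    return sum(region_scores[r] * region_weights[r] for r in region_scores)

# Illustrative values only; the patent specifies neither weights nor score ranges.
weights = {"forehead": 0.3, "cheek": 0.5, "sub_eye": 0.2}
scores = {"forehead": 80.0, "cheek": 90.0, "sub_eye": 70.0}
print(final_makeup_score(scores, weights))  # 83.0
```

Per claim 5, the per-region scores themselves would come from a decision tree trained on the color, uniformity, and flaw factors.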

Description

Face image-based makeup evaluation method

Technical Field

The invention relates to the technical field of image processing, and in particular to a face image-based makeup evaluation method.

Background

With improving living conditions and changing attitudes, people pay increasing attention to their external image, and grooming and make-up are among the means of improving it. At present, most evaluations of makeup color and makeup-removal effect rely on on-site assessment by laboratory evaluators. Although such evaluations follow unified standards, different evaluators still bring a certain subjectivity to the process, so a fully objective evaluation method is needed to analyze the makeup condition of different subjects.

Disclosure of Invention

The invention aims to solve the technical problem of providing a face image-based makeup evaluation method that can objectively and accurately analyze the makeup condition of different subjects. The technical scheme adopted by the invention to solve this problem provides a face image-based makeup evaluation method comprising the following steps: acquiring a face image and dividing it into regions to obtain a plurality of region images; processing the skin part of each region image to obtain color-class factors, uniformity factors, and flaw-class factors; inputting the color-class, uniformity, and flaw-class factors of each region image into a decision model to obtain the makeup score of each region image; and assigning a weight to each region and obtaining the final makeup score by weighted summation of the region-image makeup scores. In the region division, the face image is divided into a forehead region image, a cheek region image, and a sub-eye region image.
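As a rough sketch of the region-division step, the split below cuts an already-detected face crop into the three regions named above using fixed proportions; the proportions and the use of plain rectangular crops are assumptions, since the text does not specify how the regions are delimited:

```python
import numpy as np

def divide_face_regions(face):
    # Hypothetical fixed-proportion split of a cropped face image into the
    # forehead, sub-eye, and cheek region images named in the disclosure;
    # the actual division rule is not specified in the patent text.
    h, w = face.shape[:2]
    forehead = face[: h // 4, :]                          # top quarter
    sub_eye = face[h // 4 : h // 2, w // 4 : 3 * w // 4]  # central band below the eyes
    cheek = face[h // 2 : 3 * h // 4, :]                  # middle-lower band
    return {"forehead": forehead, "sub_eye": sub_eye, "cheek": cheek}

face = np.zeros((200, 160, 3), dtype=np.uint8)  # stand-in for a detected face crop
regions = divide_face_regions(face)
print({name: img.shape for name, img in regions.items()})
```

A real implementation would more likely place these rectangles relative to detected facial landmarks rather than fixed fractions of the crop.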
When the with-makeup appearance is evaluated, the color-class factors comprise a hue factor, a saturation factor, a brightness factor, and an ITA factor; the uniformity factor comprises a contrast value and a gray-value standard deviation; and the flaw-class factor comprises a flaw area. The skin part of each region image is processed to obtain these factors as follows: converting the region image into the HSV color space, taking the quartiles of each channel's values, and using the interquartile range Q3 − Q1 as the hue, saturation, and brightness factors; converting the region image into the Lab color space and obtaining the ITA factor by computing ITA° = arctan((L − 50) / b) × 180/π, where L and b are parameters of the Lab color space; converting the region image into a grayscale image, applying the Laplacian operator, and taking the variance of the resulting image as the contrast value; converting the region image into a grayscale image, taking the pixels on its diagonal, and taking the standard deviation of their values as the gray-value standard deviation; converting the region image into a grayscale image, applying Gaussian blur, subtracting the grayscale image from the blurred image and adding 127 to obtain a flaw distribution image, and taking the pixel area of the flaw distribution image as the flaw area.

When the makeup appearance after makeup removal is evaluated, the color-class factor comprises a hue standard deviation; the uniformity factors comprise a texture area and a gray-value standard deviation; and the flaw-class factors comprise a reddish-skin area, a pore area, an acne-mark area, and a color-spot area.
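A dependency-free NumPy sketch of the with-makeup factor computations described above; the 4-neighbour Laplacian stand-in, the reading of "pixel area" as a count of pixels deviating from the neutral value 127, and the deviation threshold are all assumptions not stated in the patent:

```python
import numpy as np

def iqr_factor(channel):
    # Q3 - Q1 of one color channel (the hue / saturation / brightness factor).
    q1, q3 = np.percentile(channel, [25, 75])
    return q3 - q1

def ita_degrees(L_mean, b_mean):
    # Individual Typology Angle: ITA = arctan((L - 50) / b) * 180 / pi.
    return np.degrees(np.arctan((L_mean - 50.0) / b_mean))

def contrast_value(gray):
    # Variance of a 4-neighbour Laplacian of the grayscale image
    # (a stand-in for cv2.Laplacian to keep this sketch dependency-free;
    # np.roll gives wrap-around borders).
    g = gray.astype(np.float64)
    lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0)
           + np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4.0 * g)
    return lap.var()

def gray_value_std(gray):
    # Standard deviation of the pixels on the main diagonal.
    return np.diagonal(gray).std()

def flaw_area(gray, blurred, deviation=10):
    # Flaw distribution image: (blurred - gray) + 127. Its "pixel area" is
    # read here as the count of pixels deviating from 127 by more than a
    # threshold -- an assumption, since the text does not define the measure.
    flaw = blurred.astype(np.int32) - gray.astype(np.int32) + 127
    return int(np.count_nonzero(np.abs(flaw - 127) > deviation))

print(ita_degrees(71.0, 21.0))  # arctan(21/21) = 45 degrees
print(iqr_factor(np.arange(101)))  # Q3=75, Q1=25 -> 50.0
```

In practice `cv2.Laplacian` and `cv2.GaussianBlur` would replace the NumPy stand-ins, and the channels would come from `cv2.cvtColor` conversions to HSV, Lab, and grayscale.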
The skin part of each region image is processed to obtain these factors as follows: converting the region image into the HSV color space and taking the standard deviation of the H-channel values as the hue standard deviation; converting the region image into a grayscale image, applying Gaussian blur, subtracting the grayscale image from the blurred image and adding 127 to obtain a flaw distribution image, and processing the flaw distribution image again by threshold segmentation to obtain a texture image, the pixel area of which is taken as the texture area; converting the region image into a grayscale image, taking the pixels on its diagonal, and taking the standard deviation of their values as the gray-value standard deviation; correcting the color of the region image with a two-dimensional color lookup table, applying a nonlinear transformation to the color-corrected image, extracting the red part of the image