CN-116644225-B - Method, apparatus, device, and storage medium for recommending a makeup scheme
Abstract
The embodiments of this application disclose a method, apparatus, device, and storage medium for recommending a makeup scheme; the related embodiments can be applied to scenes such as cloud technology, artificial intelligence, and intelligent traffic to improve the accuracy of makeup scheme recommendation. In response to a first instruction triggered on a current video playing interface, an original face image of a target object is acquired; a target character face image matching the original face image is acquired, the target character face image being derived from a character of a first video played on the current video playing interface; the original face image is matched against a plurality of character makeup schemes corresponding to the target character face image to obtain a makeup matching degree; and a target character makeup scheme is determined from the plurality of character makeup schemes according to the makeup matching degree and pushed.
Inventors
- You Ziyin
- Mai Xiuzhi
Assignees
- Tencent Technology (Shenzhen) Co., Ltd.
Dates
- Publication Date
- 20260512
- Application Date
- 20220215
Claims (20)
- 1. A method for recommending a makeup scheme, characterized by comprising the following steps: in response to a first instruction triggered on a current video playing interface, acquiring an original face image of a target object; acquiring a target character face image that matches the original face image, wherein the target character face image is derived from a character of a first video played on the current video playing interface; matching the original face image against a plurality of character makeup schemes corresponding to the target character face image to obtain a makeup matching degree, wherein the makeup matching degree is obtained from a makeup prediction model; and determining a target character makeup scheme from the plurality of character makeup schemes according to the makeup matching degree, and pushing the target character makeup scheme; wherein the training process of the makeup prediction model comprises: acquiring a base-object face image, a character makeup sample image, and character basic attribute features corresponding to the character in the character makeup sample image, wherein the character makeup sample image is any frame of face image of a film or television character extracted from a film or television work, and the character makeup sample image corresponds to a makeup label; extracting base-object face contour features from the base-object face image, and extracting character makeup features and character face contour features from the character makeup sample image; inputting the base-object face contour features, the character basic attribute features, the character makeup features, and the character face contour features into the makeup prediction model, and outputting a makeup prediction probability through the makeup prediction model; and updating model parameters of the makeup prediction model based on the makeup prediction probability and the makeup label.
- 2. The method of claim 1, wherein matching the original face image against the plurality of character makeup schemes corresponding to the target character face image to obtain the makeup matching degree comprises: inputting the original face image into the makeup prediction model, outputting a makeup prediction probability corresponding to each character makeup scheme through the makeup prediction model, and determining the makeup matching degree between the original face image and each character makeup scheme based on the makeup prediction probability.
- 3. The method of claim 1, wherein the target character makeup scheme comprises makeup tutorial information and makeup product information; the makeup tutorial information comprises a plurality of pieces of makeup step information, each piece of makeup step information comprising at least one of usage instructions for a makeup product and a makeup effect diagram; and the makeup product information comprises base-makeup product information and color-makeup product information, the makeup products appearing in the makeup tutorial information.
- 4. The method of claim 1, wherein acquiring the original face image of the target object comprises: performing face scanning on the target object to obtain the original face image; or recognizing a face photo uploaded by the target object to obtain the original face image.
- 5. The method of claim 1, wherein acquiring the target character face image that matches the original face image comprises: extracting facial feature information from the original face image, and extracting skin state information from the original face image; calculating, according to the facial feature information and the skin state information, a face matching degree between the original face image and each character face image of the first video played on the current video playing interface; and determining the target character face image from the character face images of the first video according to the face matching degree.
- 6. The method of claim 5, wherein extracting the facial feature information from the original face image comprises: performing facial feature extraction on the original face image to obtain an original facial feature map; extracting facial feature points from the original face image, and connecting the extracted facial feature points to obtain a facial contour feature map; superimposing the original facial feature map and the facial contour feature map to obtain a target facial feature map; and performing facial feature decomposition on the target facial feature map to obtain the facial feature information.
- 7. The method of claim 5, wherein the skin state information comprises oiliness, roughness, hemoglobin concentration, and melanin concentration of the facial skin, and extracting the skin state information from the original face image comprises the following steps: performing intrinsic image decomposition on the original face image to obtain a highlight intrinsic layer, a diffuse reflection intrinsic layer, and a skin color intrinsic layer; determining the oiliness of the facial skin according to the proportion between the highlight intrinsic layer and the diffuse reflection intrinsic layer; determining the roughness according to gradient values of the diffuse reflection intrinsic layer; and performing pigment concentration recognition on the skin color intrinsic layer to obtain the hemoglobin concentration and the melanin concentration.
- 8. The method of claim 7, wherein after extracting the skin state information from the original face image, the method further comprises: reading a database, and acquiring a skin report generation template from the database; acquiring a first skin care product corresponding to the oiliness of the facial skin according to a preset correspondence between oiliness and skin care products, acquiring a second skin care product corresponding to the roughness according to a preset correspondence between roughness and skin care products, and acquiring a third skin care product corresponding to the pigment concentrations according to a preset correspondence between hemoglobin concentration, melanin concentration, and skin care products; taking the intersection of the first skin care product, the second skin care product, and the third skin care product as a target skin care product set; generating a skin state report corresponding to the target object according to the skin state information and the report generation template; and pushing the skin state report and the target skin care product set to the target object.
- 9. The method of claim 5, wherein determining the target character makeup scheme from the plurality of character makeup schemes according to the makeup matching degree, and pushing the target character makeup scheme, comprises: performing weighted summation on the face matching degree between the target character face image and the original face image and the makeup matching degree corresponding to each character makeup scheme, to obtain a makeup score corresponding to each character makeup scheme; and screening the plurality of character makeup schemes according to the makeup scores to obtain and push the target character makeup scheme.
- 10. The method of claim 1, wherein after determining the target character makeup scheme from the plurality of character makeup schemes according to the makeup matching degree and pushing the target character makeup scheme, the method further comprises: scanning the face of the target object to obtain a current face scan image; extracting a currently applied makeup region from the current face scan image based on a makeup-free face image of the target object; comparing the currently applied makeup region with a standard makeup region of the face image in the target character makeup scheme to obtain a comparison result; if the comparison result is consistent, sending the target object a prompt to proceed with the next makeup stage; and if the comparison result is inconsistent, sending the target object a prompt that the currently applied makeup region is inconsistent with the target character makeup scheme.
- 11. An apparatus for recommending a makeup scheme, comprising: an acquisition unit, configured to acquire an original face image of a target object in response to a first instruction triggered on a current video playing interface; the acquisition unit being further configured to acquire a target character face image that matches the original face image, wherein the target character face image is derived from a character of a first video played on the current video playing interface; a processing unit, configured to match the original face image against a plurality of character makeup schemes corresponding to the target character face image to obtain a makeup matching degree, wherein the makeup matching degree is obtained from a makeup prediction model; and a determining unit, configured to determine a target character makeup scheme from the plurality of character makeup schemes according to the makeup matching degree, and push the target character makeup scheme; wherein the acquisition unit is further configured to acquire a base-object face image, a character makeup sample image, and character basic attribute features corresponding to the character in the character makeup sample image, the character makeup sample image being any frame of face image of a film or television character extracted from a film or television work, and corresponding to a makeup label; the processing unit is further configured to extract base-object face contour features from the base-object face image, and extract character makeup features and character face contour features from the character makeup sample image; the processing unit is further configured to input the base-object face contour features, the character basic attribute features, the character makeup features, and the character face contour features into the makeup prediction model, and output a makeup prediction probability through the makeup prediction model; and the processing unit is further configured to update model parameters of the makeup prediction model based on the makeup prediction probability and the makeup label.
- 12. The apparatus of claim 11, wherein the processing unit is specifically configured to: input the original face image into the makeup prediction model, output a makeup prediction probability corresponding to each character makeup scheme through the makeup prediction model, and determine the makeup matching degree between the original face image and each character makeup scheme based on the makeup prediction probability.
- 13. The apparatus of claim 11, wherein the target character makeup scheme comprises makeup tutorial information and makeup product information, the makeup tutorial information comprising a plurality of pieces of makeup step information, each piece of makeup step information comprising at least one of usage instructions for a makeup product and a makeup effect diagram; and the makeup product information comprises base-makeup product information and color-makeup product information, the makeup products appearing in the makeup tutorial information.
- 14. The apparatus of claim 11, wherein the acquisition unit is specifically configured to: perform face scanning on the target object to obtain the original face image; or recognize a face photo uploaded by the target object to obtain the original face image.
- 15. The apparatus of claim 11, wherein the acquisition unit is specifically configured to: extract facial feature information from the original face image, and extract skin state information from the original face image; calculate, according to the facial feature information and the skin state information, a face matching degree between the original face image and each character face image of the first video played on the current video playing interface; and determine the target character face image from the character face images of the first video according to the face matching degree.
- 16. The apparatus of claim 15, wherein the acquisition unit is specifically configured to: perform facial feature extraction on the original face image to obtain an original facial feature map; extract facial feature points from the original face image, and connect the extracted facial feature points to obtain a facial contour feature map; superimpose the original facial feature map and the facial contour feature map to obtain a target facial feature map; and perform facial feature decomposition on the target facial feature map to obtain the facial feature information.
- 17. The apparatus of claim 15, wherein the skin state information comprises oiliness, roughness, hemoglobin concentration, and melanin concentration of the facial skin, and the acquisition unit is specifically configured to: perform intrinsic image decomposition on the original face image to obtain a highlight intrinsic layer, a diffuse reflection intrinsic layer, and a skin color intrinsic layer; determine the oiliness of the facial skin according to the proportion between the highlight intrinsic layer and the diffuse reflection intrinsic layer; determine the roughness according to gradient values of the diffuse reflection intrinsic layer; and perform pigment concentration recognition on the skin color intrinsic layer to obtain the hemoglobin concentration and the melanin concentration.
- 18. The apparatus of claim 17, wherein the acquisition unit is further configured to read a database and acquire a skin report generation template from the database; the acquisition unit is further configured to acquire a first skin care product corresponding to the oiliness of the facial skin according to a preset correspondence between oiliness and skin care products, acquire a second skin care product corresponding to the roughness according to a preset correspondence between roughness and skin care products, and acquire a third skin care product corresponding to the pigment concentrations according to a preset correspondence between hemoglobin concentration, melanin concentration, and skin care products; the processing unit is further configured to take the intersection of the first skin care product, the second skin care product, and the third skin care product as a target skin care product set; the processing unit is further configured to generate a skin state report corresponding to the target object according to the skin state information and the report generation template; and the processing unit is further configured to push the skin state report and the target skin care product set to the target object.
- 19. The apparatus of claim 15, wherein the determining unit is specifically configured to: perform weighted summation on the face matching degree between the target character face image and the original face image and the makeup matching degree corresponding to each character makeup scheme, to obtain a makeup score corresponding to each character makeup scheme; and screen the plurality of character makeup schemes according to the makeup scores to obtain and push the target character makeup scheme.
- 20. The apparatus of claim 11, wherein the acquisition unit is further configured to scan the face of the target object to acquire a current face scan image; the acquisition unit is further configured to extract a currently applied makeup region from the current face scan image based on a makeup-free face image of the target object; the processing unit is further configured to compare the currently applied makeup region with a standard makeup region of the face image in the target character makeup scheme to obtain a comparison result; the processing unit is further configured to send the target object a prompt to proceed with the next makeup stage if the comparison result is consistent; and the processing unit is further configured to send the target object a prompt that the currently applied makeup region is inconsistent with the target character makeup scheme if the comparison result is inconsistent.
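The skin-state analysis of claims 7 and 8 can be sketched as follows. The intrinsic image decomposition itself is assumed to be done upstream (the claims do not name a specific algorithm), and the way each metric is reduced to a single number here is an illustrative assumption, not the patent's method:

```python
import numpy as np

def skin_metrics(highlight, diffuse, hb_map, mel_map):
    """Derive the four skin-state quantities of claim 7 from pre-computed
    intrinsic layers (2-D float arrays in [0, 1])."""
    # Oiliness: proportion of highlight energy relative to total reflection energy.
    oiliness = float(highlight.sum() / (highlight.sum() + diffuse.sum() + 1e-8))
    # Roughness: mean gradient magnitude of the diffuse reflection layer.
    gy, gx = np.gradient(diffuse)
    roughness = float(np.hypot(gx, gy).mean())
    # Pigment concentrations: mean of the per-pixel concentration maps.
    return {"oiliness": oiliness, "roughness": roughness,
            "hemoglobin": float(hb_map.mean()), "melanin": float(mel_map.mean())}

def target_products(by_oiliness, by_roughness, by_pigment):
    """Claim 8: the target skin-care set is the intersection of the three
    per-metric candidate product sets."""
    return set(by_oiliness) & set(by_roughness) & set(by_pigment)
```

Note the design choice in claim 8: intersecting the three candidate sets (rather than taking their union) guarantees every pushed product is suitable on all three criteria at once.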
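The step-by-step makeup check of claims 10 and 20 compares the currently applied makeup region with the standard region of the target scheme. A minimal sketch using binary region masks; the IoU criterion and its threshold are illustrative assumptions, since the claims only require "a comparison result":

```python
import numpy as np

def makeup_regions_consistent(current_mask, standard_mask, iou_threshold=0.8):
    """Compare the currently applied makeup region (claim 10) with the standard
    makeup region of the target scheme. Masks are boolean arrays; the IoU
    threshold is an assumed consistency criterion."""
    inter = np.logical_and(current_mask, standard_mask).sum()
    union = np.logical_or(current_mask, standard_mask).sum()
    iou = inter / union if union else 1.0
    return iou >= iou_threshold

def next_prompt(consistent):
    # Claim 10: consistent -> prompt the next makeup stage; otherwise flag mismatch.
    return ("proceed to the next makeup stage" if consistent
            else "current makeup region does not match the target scheme")
```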
Description
Method, Apparatus, Device, and Storage Medium for Recommending a Makeup Scheme

Technical Field

The embodiments of this application relate to the technical field of artificial intelligence, and in particular to a method, apparatus, device, and storage medium for recommending a makeup scheme.

Background

As personal image becomes ever more important and its economic value rises, demand for makeup and skin care keeps growing. Makeup products come in countless varieties and makeup tutorials are endless, so consumers find it harder and harder to choose. Shared documents and video tutorials covering the makeup process can now be found everywhere on network platforms, allowing people to complete the whole makeup process by following the shared steps. However, because the cosmetics on the market are so varied, the products a user already owns often differ from those featured in a tutorial, and even when the colors and usage are the same, the expected effect may not be achieved. Many beginners also lack professional makeup knowledge and technique; faced with popular skin care advice, a dazzling array of beauty products, and overly homogeneous makeup tutorials, they do not know which options suit them, and urgently need makeup schemes adapted to their own faces to meet personalized makeup demands.
Disclosure of Invention

The embodiments of this application provide a method, apparatus, device, and storage medium for recommending a makeup scheme, which numerically express the degree of fit between a target object and each character makeup scheme through the makeup matching degree between an original face image and a plurality of character makeup schemes corresponding to a target character face image, so that a target character makeup scheme with film and television characteristics adapted to the target object can be obtained more accurately.

In one aspect, an embodiment of this application provides a method for recommending a makeup scheme, comprising the following steps: in response to a first instruction triggered on a current video playing interface, acquiring an original face image of a target object; acquiring a target character face image that matches the original face image, wherein the target character face image is derived from a character of a first video played on the current video playing interface; matching the original face image against a plurality of character makeup schemes corresponding to the target character face image to obtain a makeup matching degree; and determining a target character makeup scheme from the plurality of character makeup schemes according to the makeup matching degree, and pushing the target character makeup scheme.
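The recommendation flow above, combined with the weighted scoring of claim 9, can be sketched in a few lines. The weights and scheme names are illustrative assumptions; the source does not fix particular weight values:

```python
def recommend_scheme(face_match, scheme_matches, w_face=0.4, w_makeup=0.6):
    """Combine the face matching degree between the target object and the target
    character with each scheme's makeup matching degree by weighted summation
    (claim 9), then return the highest-scoring scheme for pushing.
    face_match: float in [0, 1]; scheme_matches: {scheme_name: matching degree}."""
    scores = {name: w_face * face_match + w_makeup * m
              for name, m in scheme_matches.items()}
    best = max(scores, key=scores.get)   # screening step: keep the top scheme
    return best, scores[best]
```

Because the face matching degree is the same term for every scheme, the ranking is driven by the makeup matching degree; the face term only shifts the absolute score used for any later thresholding.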
In another aspect, an embodiment of this application provides an apparatus for recommending a makeup scheme, comprising: an acquisition unit, configured to acquire an original face image of a target object in response to a first instruction triggered on a current video playing interface; the acquisition unit being further configured to acquire a target character face image that matches the original face image, wherein the target character face image is derived from a character of a first video played on the current video playing interface; a processing unit, configured to match the original face image against a plurality of character makeup schemes corresponding to the target character face image to obtain a makeup matching degree; and a determining unit, configured to determine a target character makeup scheme from the plurality of character makeup schemes according to the makeup matching degree and push the target character makeup scheme.

In one possible design, in one implementation of another aspect of the embodiments of this application, the acquisition unit is further configured to acquire a base-object face image, a character makeup sample image, and character basic attribute features corresponding to the character in the character makeup sample image, wherein the character makeup sample image is any frame of face image of a film or television character extracted from a film or television work, and corresponds to a makeup label; the processing unit is further configured to extract base-object face contour features from the base-object face image, and extract character makeup features and character face contour features from the character makeup sample image; the processing unit is further configured to input the base-object face contour features, the character basic attribute features, the character makeup features, and the character face contour features into the makeup prediction model, and output a makeup prediction probability through the makeup prediction model; and the processing unit is further configured to update model parameters of the makeup prediction model based on the makeup prediction probability and the makeup label.
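The training loop described above (four feature groups in, a makeup prediction probability out, parameters updated against the makeup label) can be sketched with a minimal logistic stand-in. The source does not specify the model architecture, so the logistic form, learning rate, and feature shapes here are all assumptions:

```python
import numpy as np

def train_step(model_w, features, label, lr=0.01):
    """One illustrative parameter update of the makeup prediction model.
    features: the four feature vectors (base-object face contour, character
    basic attributes, character makeup, character face contour), concatenated
    into one input; label: the makeup label in {0.0, 1.0}."""
    x = np.concatenate(features)                 # fuse the four feature groups
    prob = 1.0 / (1.0 + np.exp(-model_w @ x))    # makeup prediction probability
    grad = (prob - label) * x                    # cross-entropy gradient w.r.t. w
    return model_w - lr * grad, prob
```

Each step nudges the predicted probability toward the makeup label, which is the essence of the claimed update "through the makeup prediction probability and the makeup label", whatever the real model's internals.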