US-12626441-B2 - Method and apparatus for virtual image processing based on a face of a detection object, electronic device, and computer-readable storage medium
Abstract
An image processing method and apparatus, an electronic device, and a computer-readable storage medium are provided. The image processing method includes: in response to having detected a detection object, acquiring current feature information of the detection object, the current feature information indicating a current state of a target feature of the detection object; acquiring limit deformation information of the target feature, wherein the limit deformation information is obtained by calculating a target virtual sub-image when the target feature is in at least one limit state; determining movement information of feature points in an initial virtual image based on the limit deformation information and the current feature information, wherein the initial virtual image is obtained by superimposing a plurality of virtual sub-images; and driving, according to the movement information, the feature points in the initial virtual image to move, so as to generate a current virtual image corresponding to the current state.
Inventors
- Weihong Zeng
- Xu Wang
- Jing Liu
- Shen SANG
- Haishan Liu
Assignees
- LEMON INC.
Dates
- Publication Date
- 20260512
- Application Date
- 20221021
- Priority Date
- 20211025
Claims (18)
- 1 . An image processing method, comprising: in response to having detected a detection object, acquiring current feature information of the detection object, wherein the current feature information is used for indicating a current state of a target feature of the detection object; acquiring limit deformation information of the target feature, wherein the limit deformation information is obtained by calculating a target virtual sub-image when the target feature is in at least one limit state; determining movement information of feature points in an initial virtual image on the basis of the limit deformation information and the current feature information, wherein the initial virtual image is obtained by superimposing a plurality of virtual sub-images, and the plurality of the virtual sub-images comprise a target virtual sub-image corresponding to at least part of the at least one limit state; and driving the feature points in the initial virtual image to move according to the movement information, so as to generate a current virtual image corresponding to the current state, wherein the method further comprises: acquiring depth information of each of the plurality of the virtual sub-images, and obtaining the initial virtual image on the basis of the depth information of each of the plurality of the virtual sub-images and the plurality of the virtual sub-images, wherein the depth information of each of the plurality of the virtual sub-images comprises a depth value of each of the virtual sub-images and a depth value of feature points in each of the virtual sub-images, each of the virtual sub-images corresponds to one of a plurality of features to be virtualized of the detection object, and the plurality of the features to be virtualized comprise the target feature, wherein in a direction perpendicular to a face of the detection object, the depth value of each of the virtual sub-images is proportional to a first distance, and the first distance is a distance between a feature to be virtualized corresponding to the virtual sub-image and eyes of the detection object; and the depth value of the feature points in each of the virtual sub-images is proportional to the first distance.
- 2 . The method according to claim 1 , wherein the acquiring the limit deformation information of the target feature comprises: determining a first limit position and a second limit position of the target feature according to the target virtual sub-image in the at least one limit state; sampling the first limit position and the second limit position so as to obtain a sampling result of a plurality of sampling points; and calculating the sampling result so as to obtain the limit deformation information of the target feature.
- 3 . The method according to claim 2 , wherein the target virtual sub-image in the at least one limit state comprises a first image layer in a first limit state and a second image layer in a second limit state; and the determining the first limit position and the second limit position of the target feature according to the target virtual sub-image in the at least one limit state, comprises: masking alpha channels of the first image layer and the second image layer respectively so as to obtain two mask sub-images, and merging the two mask sub-images into one mask image; and determining the first limit position and the second limit position according to the mask image.
- 4 . The method according to claim 3 , wherein the sampling result comprises position coordinates of each of the plurality of the sampling points respectively at the first limit position and at the second limit position, the calculating the sampling result so as to obtain the limit deformation information of the target feature, comprises: calculating a height difference between the first limit position and the second limit position of each sampling point according to the position coordinates of each sampling point respectively at the first limit position and at the second limit position; obtaining a limit deformation value curve by performing a curve fitting on the plurality of the sampling points according to the height difference; and substituting each of target vertexes in the target feature into the limit deformation value curve so as to obtain a limit deformation value of each of the target vertexes in the target feature, wherein each of the target vertexes corresponds to at least part of the feature points in an initial virtual sub-image.
- 5 . The method according to claim 4 , wherein the curve fitting comprises a polynomial fitting, and the limit deformation value curve comprises a polynomial curve.
- 6 . The method according to claim 3 , wherein the sampling result comprises position coordinates of each sampling point of the plurality of the sampling points respectively at the first limit position and at the second limit position, the calculating the sampling result so as to obtain the limit deformation information of the target feature comprises: calculating a height difference between the first limit position and the second limit position of each sampling point according to the position coordinates of each sampling point respectively at the first limit position and at the second limit position; and using the height difference between the first limit position and the second limit position of each sampling point as the limit deformation information.
- 7 . The method according to claim 1 , wherein the determining the movement information of feature points in the initial virtual image on the basis of the limit deformation information and the current feature information, comprises: determining a current state value of the target feature relative to a reference state according to the current feature information; and determining the movement information of the feature points in the initial virtual image according to the current state value and the limit deformation information.
- 8 . The method according to claim 7 , wherein the determining the current state value of the target feature relative to the reference state according to the current feature information, comprises: acquiring a mapping relationship between feature information and a state value; and determining the current state value of the target feature relative to the reference state according to the mapping relationship and the current feature information.
- 9 . The method according to claim 8 , wherein the acquiring the mapping relationship between the feature information and the state value comprises: acquiring a plurality of samples, wherein each of the samples comprises a corresponding relationship between sample feature information of the target feature and a sample state value; and constructing a mapping function on the basis of the corresponding relationship, wherein the mapping function represents the mapping relationship between the feature information and the state value.
- 10 . The method according to claim 9 , wherein the sample feature information comprises first feature information and second feature information, and the sample state value comprises a first value corresponding to the first feature information and a second value corresponding to the second feature information, the constructing the mapping function on the basis of the corresponding relationship comprises: constructing a system of linear equations; and substituting the first feature information and the first value, and the second feature information and the second value respectively into the system of the linear equations, and solving the system of the linear equations to obtain the mapping function.
- 11 . The method according to claim 7 , wherein the movement information comprises a movement distance, the determining the movement information of the feature points in the initial virtual image according to the current state value and the limit deformation information, comprises: calculating the current state value and the limit deformation information so as to determine the movement distance of the feature points in the initial virtual image.
- 12 . The method according to claim 11 , wherein the calculating the current state value and the limit deformation information so as to determine the movement distance of the feature points in the initial virtual image comprises: multiplying the current state value and the limit deformation information so as to determine the movement distance of the feature points in the initial virtual image.
- 13 . The method according to claim 1 , wherein the movement information comprises a movement distance, the driving the feature points in the initial virtual image to move according to the movement information comprises: driving the feature points in the target virtual sub-image of the initial virtual image to move by the movement distance from a position where an initial state is located to a position where one of at least one limit state different from the initial state is located.
- 14 . The method according to claim 1 , wherein the movement information comprises a target position, the driving the feature points in the initial virtual image to move according to the movement information comprises: driving the feature points in the initial virtual image to move to the target position.
- 15 . The method according to claim 1 , wherein the current feature information comprises comparison information between the target feature and a reference feature of the detection object, wherein the comparison information does not change when a distance of the detection object relative to an image acquisition device used for detecting the detection object changes.
- 16 . The method according to claim 1 , wherein the target feature comprises at least one of eyelashes and a mouth, when the target feature is the eyelashes, a limit state of the target feature is a state of the target feature upon opening the eyes of the detection object, and when the target feature is the mouth, the limit state of the target feature is a state of the mouth upon opening the mouth to a maximum extent.
- 17 . An electronic device, comprising: a processor, and a memory, comprising one or more computer program instructions; wherein the one or more computer program instructions are stored in the memory, and implement an image processing method upon being executed by the processor, wherein the image processing method comprises: in response to having detected a detection object, acquiring current feature information of the detection object, wherein the current feature information is used for indicating a current state of a target feature of the detection object; acquiring limit deformation information of the target feature, wherein the limit deformation information is obtained by calculating a target virtual sub-image when the target feature is in at least one limit state; determining movement information of feature points in an initial virtual image on the basis of the limit deformation information and the current feature information, wherein the initial virtual image is obtained by superimposing a plurality of virtual sub-images, and the plurality of the virtual sub-images comprise a target virtual sub-image corresponding to at least part of the at least one limit state; and driving the feature points in the initial virtual image to move according to the movement information, so as to generate a current virtual image corresponding to the current state, wherein the method further comprises: acquiring depth information of each of the plurality of the virtual sub-images, and obtaining the initial virtual image on the basis of the depth information of each of the plurality of the virtual sub-images and the plurality of the virtual sub-images, wherein the depth information of each of the plurality of the virtual sub-images comprises a depth value of each of the virtual sub-images and a depth value of feature points in each of the virtual sub-images, each of the virtual sub-images corresponds to one of a plurality of features to be virtualized of the detection object, and the plurality of the features to be virtualized comprise the target feature, wherein in a direction perpendicular to a face of the detection object, the depth value of each of the virtual sub-images is proportional to a first distance, and the first distance is a distance between a feature to be virtualized corresponding to the virtual sub-image and eyes of the detection object; and the depth value of the feature points in each of the virtual sub-images is proportional to the first distance.
- 18 . A non-transitory computer-readable storage medium, storing computer-readable instructions, wherein the computer-readable instructions implement an image processing method upon being executed by a processor, wherein the image processing method comprises: in response to having detected a detection object, acquiring current feature information of the detection object, wherein the current feature information is used for indicating a current state of a target feature of the detection object; acquiring limit deformation information of the target feature, wherein the limit deformation information is obtained by calculating a target virtual sub-image when the target feature is in at least one limit state; determining movement information of feature points in an initial virtual image on the basis of the limit deformation information and the current feature information, wherein the initial virtual image is obtained by superimposing a plurality of virtual sub-images, and the plurality of the virtual sub-images comprise a target virtual sub-image corresponding to at least part of the at least one limit state; and driving the feature points in the initial virtual image to move according to the movement information, so as to generate a current virtual image corresponding to the current state, wherein the method further comprises: acquiring depth information of each of the plurality of the virtual sub-images, and obtaining the initial virtual image on the basis of the depth information of each of the plurality of the virtual sub-images and the plurality of the virtual sub-images, wherein the depth information of each of the plurality of the virtual sub-images comprises a depth value of each of the virtual sub-images and a depth value of feature points in each of the virtual sub-images, each of the virtual sub-images corresponds to one of a plurality of features to be virtualized of the detection object, and the plurality of the features to be virtualized comprise the target feature, wherein in a direction perpendicular to a face of the detection object, the depth value of each of the virtual sub-images is proportional to a first distance, and the first distance is a distance between a feature to be virtualized corresponding to the virtual sub-image and eyes of the detection object; and the depth value of the feature points in each of the virtual sub-images is proportional to the first distance.
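The driving computation recited in claims 7 through 12 (determining a current state value of the target feature from a mapping relationship built from samples, then multiplying it by the limit deformation information to obtain movement distances) can be illustrated with a short sketch. This is a minimal, hypothetical rendering of the claimed steps, not the patented implementation; the function names, the use of exactly two samples, and all numeric values are assumptions for illustration.

```python
import numpy as np

def build_mapping(sample_info_1, value_1, sample_info_2, value_2):
    """Construct the mapping function of claims 9-10 by solving the 2x2
    linear system a*info + b = value for the coefficients (a, b)."""
    A = np.array([[sample_info_1, 1.0], [sample_info_2, 1.0]])
    y = np.array([value_1, value_2])
    a, b = np.linalg.solve(A, y)
    return lambda info: a * info + b

def movement_distances(current_info, mapping, limit_deformation):
    """Claims 11-12: multiply the current state value by the per-vertex
    limit deformation values to get the movement distance of each point."""
    state_value = mapping(current_info)
    return state_value * np.asarray(limit_deformation, dtype=float)

# Illustrative samples: feature info 0.2 maps to the reference state (0.0),
# feature info 0.8 maps to the limit state (1.0).
mapping = build_mapping(0.2, 0.0, 0.8, 1.0)

# Halfway feature info -> state value 0.5 -> half of each limit deformation.
distances = movement_distances(0.5, mapping, limit_deformation=[2.0, 4.0, 6.0])
```

Here a feature info of 0.5 yields a state value of 0.5, so each feature point moves half of its limit deformation value, consistent with claim 13's movement from the initial-state position toward a limit-state position.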
Description
CROSS-REFERENCE TO RELATED APPLICATION The present application is a national stage application filed under 35 U.S.C. 371 based on International Patent Application No. PCT/SG2022/050750, filed Oct. 21, 2022, which claims priority to Chinese Patent Application No. 202111241017.2 filed on Oct. 25, 2021, the disclosures of which are incorporated herein by reference in their entireties. TECHNICAL FIELD The embodiments of the present disclosure relate to an image processing method and apparatus, an electronic device and a computer-readable storage medium. BACKGROUND With the rapid development of the Internet, virtual images are widely used in emerging fields such as live broadcasts, short videos, and games. The application of virtual images not only makes human-computer interaction more interesting but also brings convenience to users. For example, on live broadcast platforms, anchors can use virtual images to broadcast live without showing their faces. SUMMARY At least one of the embodiments of the present disclosure provides an image processing method, which includes: in response to having detected a detection object, acquiring current feature information of the detection object, the current feature information being used for indicating a current state of a target feature of the detection object; acquiring limit deformation information of the target feature, the limit deformation information being obtained by calculating a target virtual sub-image when the target feature is in at least one limit state; determining movement information of feature points in an initial virtual image on the basis of the limit deformation information and the current feature information, the initial virtual image being obtained by superimposing a plurality of virtual sub-images, and the plurality of the virtual sub-images including a target virtual sub-image corresponding to at least part of the at least one limit state; and driving the feature points in the initial virtual
image to move according to the movement information, so as to generate a current virtual image corresponding to the current state. For example, the image processing method provided by one of the embodiments of the present disclosure further includes: acquiring depth information of each of the plurality of the virtual sub-images, and obtaining the initial virtual image on the basis of the depth information of each of the plurality of the virtual sub-images and the plurality of the virtual sub-images. For example, in the image processing method provided by one of the embodiments of the present disclosure, acquiring the limit deformation information of the target feature includes: determining a first limit position and a second limit position of the target feature according to the target virtual sub-image in the at least one limit state; sampling the first limit position and the second limit position so as to obtain a sampling result of a plurality of sampling points; and calculating the sampling result so as to obtain the limit deformation information of the target feature. For example, in the image processing method provided by one of the embodiments of the present disclosure, the target virtual sub-image in the at least one limit state includes a first image layer in a first limit state and a second image layer in a second limit state; and determining the first limit position and the second limit position of the target feature according to the target virtual sub-image in the at least one limit state includes: masking alpha channels of the first image layer and the second image layer respectively so as to obtain two mask sub-images, and merging the two mask sub-images into one mask image; and determining the first limit position and the second limit position according to the mask image.
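The limit-position and curve-fitting steps described above (and recited in claims 2 through 5) can be sketched as follows. This is a hypothetical illustration under assumed conventions: the alpha channels of the two limit-state image layers are stored as 2-D arrays, a limit position is read off per column from the mask, and the polynomial degree is an arbitrary choice; none of these specifics are stated in the patent.

```python
import numpy as np

def limit_boundary(alpha, threshold=0.5):
    """Mask the alpha channel and, for each column, return the topmost row
    where coverage exceeds the threshold (an assumed notion of 'position')."""
    mask = alpha > threshold
    # argmax returns the index of the first True entry in each column.
    return mask.argmax(axis=0)

def limit_deformation_curve(alpha_first, alpha_second, sample_cols, degree=2):
    """Sample both limit positions, compute per-sample height differences,
    and fit a polynomial limit deformation value curve (claims 4-5)."""
    first = limit_boundary(alpha_first)[sample_cols]    # first limit position
    second = limit_boundary(alpha_second)[sample_cols]  # second limit position
    height_diff = second - first                        # assumed sign convention
    coeffs = np.polyfit(sample_cols, height_diff, degree)
    # The returned polynomial is evaluated at each target vertex position
    # to obtain that vertex's limit deformation value.
    return np.poly1d(coeffs)

# Toy example: two 8x8 alpha layers whose boundaries differ by 2 rows.
alpha_first = np.zeros((8, 8)); alpha_first[2:, :] = 1.0
alpha_second = np.zeros((8, 8)); alpha_second[4:, :] = 1.0
curve = limit_deformation_curve(alpha_first, alpha_second, np.arange(8))
```

In this toy case the fitted curve is constant (every sampling point has a height difference of 2), but with real layers the polynomial captures how the deformation varies along the feature, e.g. larger near the center of a mouth than at its corners.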
For example, in the image processing method provided by one of the embodiments of the present disclosure, the sampling result includes position coordinates of each of the plurality of the sampling points respectively at the first limit position and at the second limit position, and calculating the sampling result so as to obtain the limit deformation information of the target feature includes: calculating a height difference between the first limit position and the second limit position of each sampling point according to the position coordinates of each sampling point respectively at the first limit position and at the second limit position; obtaining a limit deformation value curve by performing a curve fitting on the plurality of the sampling points according to the height difference; and substituting each of target vertexes in the target feature into the limit deformation value curve so as to obtain a limit deformation value of each of the target vertexes in the target feature, each of the target vertexes corresponding to at least part of the feature points in the initial virtual sub-image. For example, in the image processing method