EP-4242840-B1 - VIDEO PROCESSING METHOD AND APPARATUS, AND ELECTRONIC DEVICE AND COMPUTER-READABLE STORAGE MEDIUM
Inventors
- YANG, Shuyun
Dates
- Publication Date
- 2026-05-06
- Application Date
- 2021-11-25
Claims (9)
- A video processing method, comprising: displaying (S10) an initial image, wherein the initial image comprises a first style image, and the first style image is an image that is obtained based on a captured image obtained by capturing a target object; inputting the captured image into a style transfer model to obtain a second style image and outputting the second style image from the style transfer model, wherein the style transfer model is obtained by training a machine learning model through sample images, the sample images comprise an original image and a transfer image, the original image is an image obtained by shooting a sample object, and the transfer image is an image obtained by performing style creation on the sample object; in response to a first triggering event for the target object being detected, displaying (S20) an image switching animation, wherein the image switching animation is used to demonstrate a dynamic process of switching from the initial image to a target image, the target image comprises the second style image, the first style image and the second style image are images of different styles, the first style image is the captured image based on which the first style image is obtained, and the second style image is an image that is obtained by performing style transfer on the captured image based on which the second style image is obtained; and in response to completion of displaying the image switching animation, displaying (S30) the target image, wherein a switching image in the image switching animation comprises a first image area, a second image area, and a third image area, the first image area is located between the second image area and the third image area, and the first image area covers an entire image area of the image switching animation in a time-sharing way through position movement during the dynamic process and undergoes shape change during the position movement, the first image area is used to display a switching 
material, the second image area is used to display a portion of the initial image, the portion of the initial image is at a position where the second image area is located, the third image area is used to display a portion of the target image, the portion of the target image is at a position where the third image area is located, and the portion of the initial image on the second image area and the portion of the target image on the third image area are obtained by capturing the target object at a same time.
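For illustration only (the claim language above is authoritative), the per-frame composition of the three image areas can be sketched as follows. The function and parameter names, the horizontal sweep direction, and the fixed strip width are all assumptions for this sketch and do not appear in the patent:

```python
import numpy as np

def compose_switch_frame(initial_img, target_img, material, boundary, strip_w):
    """Compose one frame of a hypothetical image switching animation.

    All three inputs are (H, W, 3) uint8 arrays obtained from the target
    object at the same moment. The "first image area" is a vertical strip
    of switching material at x in [boundary, boundary + strip_w) that
    sweeps across the frame; the already-swept side (the "third image
    area") shows the target image, while the not-yet-swept side (the
    "second image area") still shows the initial image.
    """
    h, w, _ = initial_img.shape
    frame = np.empty_like(initial_img)
    left = max(0, min(w, boundary))
    right = max(0, min(w, boundary + strip_w))
    frame[:, :left] = target_img[:, :left]          # third image area (target)
    frame[:, right:] = initial_img[:, right:]       # second image area (initial)
    frame[:, left:right] = material[:, left:right]  # first image area (material)
    return frame
```

Moving `boundary` from `-strip_w` to the frame width over successive frames makes the strip cover the entire image area over time, matching the "time-sharing" coverage described in the claim.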
- The method according to claim 1, wherein the first triggering event comprises at least one of: presenting a preset limb action by a target object in the captured image based on which the first style image is obtained being detected, presenting a preset facial action by the target object in the captured image based on which the first style image is obtained being detected, or receiving a preset voice.
- The method according to claim 2, wherein a displacement speed of the position movement of the first image area and a deformation speed of the shape change of the first image area during the dynamic process are determined based on the first triggering event.
- The method according to claim 3, wherein, in a case where the first triggering event comprises presenting a preset limb action by a target object in the captured image based on which the first style image is obtained being detected, the displacement speed of the position movement and the deformation speed of the shape change of the first image area during the dynamic process are determined based on an action range of the preset limb action; in a case where the first triggering event comprises presenting a preset facial action by the target object in the captured image based on which the first style image is obtained being detected, the displacement speed of the position movement and the deformation speed of the shape change of the first image area during the dynamic process are determined based on a deformation amplitude of the preset facial action; and in a case where the first triggering event comprises receiving the preset voice, the displacement speed of the position movement and the deformation speed of the shape change of the first image area during the dynamic process are determined based on at least one of a speed of the preset voice, a volume of the preset voice, or content of the preset voice.
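Purely as an illustrative sketch of the speed determination in the claim above — none of the dictionary keys, scaling constants, or units below come from the patent — the mapping from a detected trigger event to the two speeds might look like:

```python
def animation_speeds(trigger):
    """Map a detected first triggering event to a pair
    (displacement_speed, deformation_speed) for the first image area.

    `trigger` is an assumed dict; all keys, the units (px/s, %/s), and
    the 0.5..1.5 scaling range are illustrative, not from the patent.
    """
    kind = trigger["kind"]
    if kind == "limb_action":
        strength = trigger["action_range"]            # normalized 0..1
    elif kind == "facial_action":
        strength = trigger["deformation_amplitude"]   # normalized 0..1
    elif kind == "voice":
        # any combination of speed, volume, and content could be used;
        # averaging two of them is just one example
        strength = 0.5 * (trigger["speech_speed"] + trigger["volume"])
    else:
        raise ValueError(f"unknown trigger kind: {kind!r}")
    strength = min(max(strength, 0.0), 1.0)
    displacement = 100.0 * (0.5 + strength)   # px/s
    deformation = 30.0 * (0.5 + strength)     # %/s
    return displacement, deformation
```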
- The method according to any one of claims 1-4, further comprising: in response to a second triggering event occurring during a display process of the image switching animation, controlling (S40) the dynamic process to stop and displaying an image of the image switching animation corresponding to a moment when the dynamic process stops.
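The stop behaviour described above amounts to freezing the animation on whichever frame is being shown when the second triggering event occurs. A minimal state sketch (the class, method names, and frame-indexed timing model are assumptions) could be:

```python
class SwitchAnimation:
    """Minimal sketch of the stop behaviour: on a second triggering
    event, the dynamic process halts and the frame shown at that
    moment stays on screen (S40)."""

    def __init__(self, frames):
        self.frames = list(frames)
        self.index = 0
        self.stopped = False

    def advance(self):
        """Advance one frame per tick unless the animation was stopped."""
        if not self.stopped and self.index < len(self.frames) - 1:
            self.index += 1

    def on_second_trigger(self):
        """Second triggering event: freeze on the current frame."""
        self.stopped = True

    def current_frame(self):
        return self.frames[self.index]
```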
- The method according to any one of claims 1-5, wherein the initial image further comprises a first preset image, the first preset image surrounds the first style image; and the target image further comprises a second preset image, and the second preset image surrounds the second style image.
- The method according to any one of claims 1-6, wherein the image switching animation is displayed by performing image rendering on a first canvas layer, a second canvas layer, and a third canvas layer, the second canvas layer is closer to a display side than the first canvas layer, and the third canvas layer is closer to the display side than the second canvas layer; the switching material is rendered on the third canvas layer, the portion of the initial image at the position where the second image area is located is rendered on the second canvas layer, and the portion of the target image at the position where the third image area is located is rendered on the first canvas layer; and areas in the first canvas layer, the second canvas layer, and the third canvas layer that are not rendered and displayed are transparent.
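As a sketch of the three-canvas-layer rendering above — representing layers as RGBA arrays and using standard "over" compositing is an assumption; the patent does not specify a compositing model — the transparent un-rendered areas simply let the layers behind show through:

```python
import numpy as np

def composite_layers(first, second, third):
    """Alpha-composite three (H, W, 4) RGBA float layers in [0, 1],
    back to front: first (target-image portion), then second
    (initial-image portion), then third (switching material).
    Un-rendered areas have alpha 0, so deeper layers show through.
    """
    out = np.zeros_like(first)
    for layer in (first, second, third):   # back-to-front "over" operator
        a = layer[..., 3:4]
        out[..., :3] = layer[..., :3] * a + out[..., :3] * (1.0 - a)
        out[..., 3:4] = a + out[..., 3:4] * (1.0 - a)
    return out
```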
- A video processing apparatus (700), comprising: a display unit (710), configured to display an initial image, wherein the initial image comprises a first style image, and the first style image is an image that is obtained based on a captured image obtained by capturing a target object; and a switching unit (720), configured to input the captured image into a style transfer model to obtain a second style image and output the second style image from the style transfer model, wherein the style transfer model is obtained by training a machine learning model through sample images, the sample images comprise an original image and a transfer image, the original image is an image obtained by shooting a sample object, and the transfer image is an image obtained by performing style creation on the sample object; wherein the switching unit is further configured to display an image switching animation in response to a first triggering event for the target object being detected, wherein the image switching animation is used to demonstrate a dynamic process of switching from the initial image to a target image, the target image comprises the second style image, the first style image and the second style image are images of different styles, the first style image is the captured image based on which the first style image is obtained, and the second style image is an image that is obtained by performing style transfer on the captured image based on which the second style image is obtained, wherein the display unit is further configured to display the target image in response to completion of displaying the image switching animation, a switching image in the image switching animation comprises a first image area, a second image area, and a third image area, the first image area is located between the second image area and the third image area, and the first image area covers an entire image area of the image switching animation in a time-sharing way through position movement 
during the dynamic process and undergoes shape change during the position movement, the first image area is used to display a switching material, the second image area is used to display a portion of the initial image, the portion of the initial image is at a position where the second image area is located, the third image area is used to display a portion of the target image, and the portion of the target image is at a position where the third image area is located, and the portion of the initial image on the second image area and the portion of the target image on the third image area are obtained by capturing the target object at a same time.
- A computer-readable storage medium, wherein the computer-readable storage medium is configured to store non-volatile computer-readable instructions, and in a case where the non-volatile computer-readable instructions are executed by a computer, the video processing method according to any one of claims 1-7 is implemented.
Description
TECHNICAL FIELD

The embodiments of the present disclosure relate to a video processing method and apparatus, and a computer-readable storage medium.

BACKGROUND

With the rapid development of science, technology, and the economy, video applications have gradually entered people's lives and even become a part of them. For example, users can shoot videos anytime and anywhere, and share the shot videos on social network sites to share their lives, engage in social interaction, and make their lives more fun through videos. US20100045616A1 discloses a method for showing a page flip effect when a user views an electronic document on an electronic device. CN112764845A discloses a video processing method, a video processing device, an electronic device, and a computer-readable storage medium.

SUMMARY

The summary section is provided to briefly introduce the concepts, which will be described in detail in the detailed description section later. The summary section is not intended to identify key features or necessary features of the claimed technical solution, nor is it intended to limit the scope of the claimed technical solution. The invention is set out in the appended set of claims.

BRIEF DESCRIPTION OF THE DRAWINGS

In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings of the embodiments will be briefly described in the following; it is obvious that the described drawings relate only to some embodiments of the present disclosure and thus are not limitative of the present disclosure.

FIG. 1A is a flowchart of a video processing method provided by at least one embodiment of the present disclosure;
FIG. 1B is a schematic diagram of displaying an initial image through step S10 in FIG. 1A;
FIG. 1C is a schematic diagram of a switching image of the image switching animation provided by at least one embodiment of the present disclosure;
FIG. 1D is a schematic diagram of a target image provided by at least one embodiment of the present disclosure;
FIG. 2 is a flowchart of another video processing method provided by at least one embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an image of the image switching animation corresponding to the moment when the dynamic process stops, provided by at least one embodiment of the present disclosure;
FIG. 4 is a schematic diagram of rendering layers of the image switching animation provided by at least one embodiment of the present disclosure;
FIG. 5 is a schematic diagram of the principle of the first image area undergoing position movement and shape change during the position movement, provided by at least one embodiment of the present disclosure;
FIG. 6A is a flowchart of another video processing method provided by at least one embodiment of the present disclosure;
FIG. 6B is a schematic diagram of a link for achieving the video processing method provided by at least one embodiment of the present disclosure;
FIG. 6C is a flowchart of initialization provided by at least one embodiment of the present disclosure;
FIG. 7 is a schematic block diagram of a video processing apparatus provided by at least one embodiment of the present disclosure;
FIG. 8A is a schematic block diagram of an electronic device provided by at least one embodiment of the present disclosure;
FIG. 8B is a schematic block diagram of another electronic device provided by at least one embodiment of the present disclosure; and
FIG. 9 is a schematic diagram of a computer-readable storage medium provided by at least one embodiment of the present disclosure.

DETAILED DESCRIPTION

In order to make the objects, technical solutions, and advantages of the embodiments of the present disclosure apparent, the technical solutions of the embodiments will be described clearly and fully in connection with the drawings related to the embodiments of the present disclosure.
Apparently, the described embodiments are just a part, but not all, of the embodiments of the present disclosure. Based on the described embodiments, those skilled in the art can obtain other embodiment(s) without any inventive work, which should be within the scope of the present disclosure.

It should be understood that the steps described in the method embodiments of the present disclosure may be executed in different orders and/or in parallel. In addition, the method embodiments may include additional steps and/or omit some of the steps shown. The scope of the present disclosure is not limited in this regard.

The term "comprising" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". The relevant definitions of other terms will be given in the following descriptions.

It should be noted th