EP-4742165-A1 - GENERATION METHOD AND GENERATION APPARATUS FOR ANIMATION, AND ELECTRONIC DEVICE
Abstract
This application provides a generation method and generation apparatus for an animation, and an electronic device, and relates to the field of image processing technologies. The generation method includes: an electronic device obtains a first image under an operation of a user and performs content understanding on the first image, then determines a first camera movement direction based on display content of the first image, and subsequently generates an animation based on the first camera movement direction and the first image. The first camera movement direction is a camera movement direction corresponding to the animation; that is, the animation displays the first image to the user based on the first camera movement direction. In this application, the camera movement direction can be determined based on the display content of the image, and camera movements can be flexibly utilized based on the image content, so that the animation's sense of imagery and immersion is enhanced and a high-quality animation with harmonious camera movements can be generated, thereby improving the display effect of the animation.
Inventors
- JIANG, Wenming
- CAO, Yuan
- ZHANG, Chao
- JU, Ran
- CHEN, Yugang
- WU, Guoxing
- YANG, Ouya
Assignees
- Huawei Technologies Co., Ltd.
Dates
- Publication Date
- 2026-05-13
- Application Date
- 2024-07-08
Claims (20)
- A generation method for an animation, comprising: obtaining a first image; determining a first camera movement direction based on display content of the first image; and generating the animation based on the first camera movement direction and the first image, wherein the first camera movement direction is a camera movement direction corresponding to the animation.
- The generation method according to claim 1, wherein determining the first camera movement direction based on the display content of the first image comprises: recognizing the display content of the first image to obtain a first pointing direction of a target object; and determining the first camera movement direction based on the first pointing direction.
- The generation method according to claim 2, wherein before recognizing the display content of the first image to obtain the first pointing direction of the target object, the method further comprises: recognizing the display content of the first image to obtain a plurality of objects; and determining the target object based on the plurality of objects.
- The generation method according to claim 3, wherein determining the target object based on the plurality of objects comprises: determining a photographed subject in the plurality of objects as the target object; or determining the target object from the plurality of objects based on preset priority information; or determining the target object from the plurality of objects based on proportions of the image occupied by the plurality of objects; or randomly selecting an object from the plurality of objects as the target object.
- The generation method according to any one of claims 2 to 4, wherein determining the first camera movement direction based on the first pointing direction comprises: determining a direction along the first pointing direction as the first camera movement direction; or determining a direction opposite to the first pointing direction as the first camera movement direction.
- The generation method according to any one of claims 2 to 5, wherein the first pointing direction comprises a face orientation, a gaze direction, a movement direction, or an extension direction.
- The generation method according to any one of claims 2 to 6, wherein the first pointing direction comprises at least one of front-to-back, back-to-front, left-to-right, right-to-left, top-to-bottom, and bottom-to-top.
- The generation method according to any one of claims 1 to 7, further comprising: obtaining a depth map of the first image; and generating the animation based on the first camera movement direction and the first image comprises: generating the animation based on the first camera movement direction, the depth map, and the first image.
- The generation method according to any one of claims 1 to 8, further comprising: displaying the animation.
- The generation method according to any one of claims 1 to 9, wherein the animation is displayed as a dynamic wallpaper.
- The generation method according to any one of claims 1 to 10, wherein the animation is a video or a GIF image.
- A generation apparatus for an animation, comprising: an obtaining unit, configured to obtain a first image; a determining unit, configured to determine a first camera movement direction based on display content of the first image; and a generation unit, configured to generate the animation based on the first camera movement direction and the first image, wherein the first camera movement direction is a camera movement direction corresponding to the animation.
- The generation apparatus according to claim 12, wherein the determining unit is specifically configured to: recognize the display content of the first image to obtain a first pointing direction of a target object; and determine the first camera movement direction based on the first pointing direction.
- The generation apparatus according to claim 13, wherein before recognizing the display content of the first image to obtain the first pointing direction of the target object, the determining unit is specifically configured to: recognize the display content of the first image to obtain a plurality of objects; and determine the target object based on the plurality of objects.
- The generation apparatus according to claim 14, wherein the determining unit is specifically configured to: determine a photographed subject in the plurality of objects as the target object; or determine the target object from the plurality of objects based on preset priority information; or determine the target object from the plurality of objects based on proportions of the image occupied by the plurality of objects; or randomly select an object from the plurality of objects as the target object.
- The generation apparatus according to any one of claims 13 to 15, wherein the determining unit is specifically configured to: determine a direction along the first pointing direction as the first camera movement direction; or determine a direction opposite to the first pointing direction as the first camera movement direction.
- The generation apparatus according to any one of claims 13 to 16, wherein the first pointing direction comprises a face orientation, a gaze direction, a movement direction, or an extension direction.
- The generation apparatus according to any one of claims 13 to 17, wherein the first pointing direction comprises at least one of front-to-back, back-to-front, left-to-right, right-to-left, top-to-bottom, and bottom-to-top.
- The generation apparatus according to any one of claims 12 to 18, wherein the obtaining unit is further configured to: obtain a depth map of the first image; and the generation unit is specifically configured to: generate the animation based on the first camera movement direction, the depth map, and the first image.
- The generation apparatus according to any one of claims 12 to 19, wherein the generation apparatus further comprises: a display unit, configured to display the animation.
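The target-object selection and direction-mapping logic described in claims 2 to 5 can be sketched as follows. This is an illustrative sketch only; the priority table, field names, and all function names are assumptions made for the example and are not part of the claims.

```python
# Preset priority rule (claim 4, second branch): lower number = higher priority.
# The table itself is an illustrative assumption.
PRIORITY = {"person": 0, "animal": 1, "vehicle": 2, "scenery": 3}

def select_target(objects):
    """Pick the target object from the recognized objects.

    Applies the preset-priority rule first; when priorities tie, falls back
    to the largest proportion of the image occupied (claim 4, third branch).
    """
    return min(objects, key=lambda o: (PRIORITY.get(o["label"], 99),
                                       -o["area_ratio"]))

def camera_direction(pointing, follow=True):
    """Map a first pointing direction to a first camera movement direction
    (claim 5): either along the pointing direction or opposite to it."""
    opposite = {"left-to-right": "right-to-left",
                "right-to-left": "left-to-right",
                "top-to-bottom": "bottom-to-top",
                "bottom-to-top": "top-to-bottom",
                "front-to-back": "back-to-front",
                "back-to-front": "front-to-back"}
    return pointing if follow else opposite[pointing]

# Example: a small person in a large landscape still wins on priority.
objects = [
    {"label": "scenery", "area_ratio": 0.7, "pointing": "left-to-right"},
    {"label": "person",  "area_ratio": 0.2, "pointing": "right-to-left"},
]
target = select_target(objects)
move = camera_direction(target["pointing"])
```

The random-selection branch of claim 4 is omitted here for determinism; it would simply replace `select_target` with a random choice among the recognized objects.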
Description
This application claims priority to Chinese Patent Application No. 202311435092.1, filed with the China National Intellectual Property Administration on October 30, 2023 and entitled "GENERATION METHOD AND GENERATION APPARATUS FOR ANIMATION, AND ELECTRONIC DEVICE", which is incorporated herein by reference in its entirety.

TECHNICAL FIELD
This application relates to the field of image processing technologies, and in particular, to a generation method and generation apparatus for an animation, and an electronic device.

BACKGROUND
With the continuous development of terminal technologies, an increasing number of users use images to record their lives. For some particularly satisfying images, a user may also want to make an animation for playback and display. To meet this demand, technologies for generating an animation from images have emerged. The user may combine a plurality of images from a local gallery, stitch the images together into an animation, and then share the animation on a social platform. The animation may also be used as a dynamic wallpaper for a desktop or a screensaver, thereby expanding the sources of dynamic wallpapers on a terminal.

When a terminal device generates an animation from an image, camera movements may be applied to the image for display, to improve the image display effect in the animation. For example, the image in the animation may be displayed from left to right, from top to bottom, or with a zooming effect. At present, when the terminal device generates an animation with camera movement display effects, the terminal device may process the image in the animation based on a preset camera movement direction. For example, if the preset camera movement direction is from left to right, the image is displayed in the generated animation in accordance with that camera movement direction. However, a plurality of images are usually added to an animation.
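The kind of camera movement described above, such as a left-to-right pan over a still image, can be produced by sliding a fixed-size crop window across the image and treating each crop as one animation frame. The following is a minimal sketch under assumed dimensions; the frame count, window size, and function name are illustrative, not from the application.

```python
def pan_frames(image_w, image_h, win_w, n_frames):
    """Return (x, y, w, h) crop boxes for a left-to-right camera pan.

    The window height equals the image height, so the pan is purely
    horizontal. Requires n_frames >= 2 so the pan has a start and an end.
    """
    span = image_w - win_w           # total horizontal travel in pixels
    step = span / (n_frames - 1)     # per-frame displacement
    return [(round(i * step), 0, win_w, image_h) for i in range(n_frames)]

# A 5-frame pan over a 1920x1080 image with a 1280-pixel-wide window:
boxes = pan_frames(image_w=1920, image_h=1080, win_w=1280, n_frames=5)
# The first crop starts at the left edge; the last crop ends flush with
# the right edge (x + win_w == image_w).
```

A top-to-bottom pan or a zoom follows the same pattern, varying the y offset or the window size per frame instead of the x offset.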
In a conventional technology, the plurality of images can be displayed only in a fixed camera movement direction, which is not flexible enough in the use of camera movements and results in a poor display effect of the animation.

SUMMARY
Embodiments of this application provide a generation method and generation apparatus for an animation, and an electronic device, so that a camera movement direction can be determined based on the display content of an image, and a high-quality animation with harmonious camera movements can be generated, thereby improving the display effect of the animation.

According to a first aspect, a generation method for an animation is provided, including: obtaining a first image; determining a first camera movement direction based on display content of the first image; and generating the animation based on the first camera movement direction and the first image, where the first camera movement direction is a camera movement direction corresponding to the animation.

According to the generation method for an animation provided in this embodiment of this application, a camera movement direction (denoted as a first camera movement direction) may first be determined based on the display content of an image (denoted as a first image), and then the animation is generated based on the first camera movement direction and the first image. That is, the first image is displayed based on the first camera movement direction, thereby generating the animation. Because the camera movement direction is determined based on the display content of the image, the camera movement direction can be associated and matched with the display content of the image, and camera movements can be flexibly utilized based on the image content, so that the animation's sense of imagery and immersion is enhanced and a high-quality animation with harmonious camera movements can be generated, thereby improving the display effect of the animation.
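One claimed refinement generates the animation from the camera movement direction together with a depth map of the first image. A common way to use a depth map for this purpose is a parallax effect, in which nearer pixels are displaced further along the camera movement direction than farther ones. The shift model below is an illustrative assumption, not the method disclosed in the application.

```python
def parallax_shift(depth, max_shift, t):
    """Horizontal displacement (in pixels) for one pixel.

    depth:     normalized depth, 0.0 = nearest to the camera, 1.0 = farthest
    max_shift: displacement of the nearest pixel over the whole animation
    t:         animation time, normalized to [0, 1]

    Nearer pixels (small depth) move more, producing parallax along the
    camera movement direction.
    """
    return t * max_shift * (1.0 - depth)

# At the midpoint of the animation, a foreground pixel moves twice as far
# as one halfway into the scene:
near = parallax_shift(depth=0.0, max_shift=40, t=0.5)
mid = parallax_shift(depth=0.5, max_shift=40, t=0.5)
```

In a full pipeline, the per-pixel shifts would be applied as a warp of the first image for each frame, with disoccluded regions filled by inpainting; those steps are outside the scope of this sketch.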
Because association and matching are implemented between the camera movement direction and the display content of the image, and the use of camera movements is more harmonious and smoother, the generated animation has higher quality and a stronger sense of imagery. This improves the display effect of the animation, thereby providing the user with a coherent, immersive experience and improving the user's visual experience.

Optionally, the first image may be obtained by an electronic device through photographing, or may come from the internet (for example, a social platform). Optionally, the first image may alternatively be obtained from another animation. The other animation may be, for example, a GIF image or a video. For example, the first image may be obtained by taking a screenshot of a video or by extracting a frame from a video. A specific source of the first image is not limited in this application.

Optionally, a specific manner of "determining the first camera movement direction based on the image content" is not limited in this embodiment of this application. For example, a photographed subject in the first image may be recog