CN-120455755-B - Video generation method and electronic equipment
Abstract
The application relates to the technical field of terminals, and provides a video generation method and electronic equipment. The video generation method comprises the steps of obtaining video description information, wherein the video description information is used for describing a scene of a target video and/or actions of digital images in the target video; obtaining target scene materials and target animation materials in a preset material set based on the video description information, wherein the target scene materials comprise scene image materials, the scene image materials comprise foreground image materials and/or background image materials, and the target animation materials comprise image animations in which the digital images move based on motion data; and combining the target scene materials and the target animation materials to obtain the target video. The technical scheme of the application can solve the problems that a screen locking interface based on a digital image cannot realize a dynamic effect and that a change of a single element in the screen locking interface cannot be realized.
Inventors
- GAO XIN
Assignees
- Honor Device Co., Ltd. (荣耀终端股份有限公司)
Dates
- Publication Date: 2026-05-12
- Application Date: 2024-09-06
Claims (20)
- 1. A video generation method, applied to an electronic device, the method comprising: displaying a screen locking video, and displaying a digital image and a first scene in the screen locking video, wherein the digital image is displayed as a dynamic effect, the first scene comprises at least one first element, at least one target element in the first element is changed into a second element when the first scene is changed, and the first element and the second element are different; wherein the digital image is generated based on a target animation material, the digital image is displayed as the dynamic effect based on a change of the target animation material, the first scene is generated based on a target scene material and is changed based on a change of the target scene material, the target animation material comprises an image animation in which the digital image moves based on motion data, the target scene material comprises a scene image material, and the scene image material comprises a foreground image material and/or a background image material; the displaying of the screen locking video specifically comprises the steps of: acquiring video description information; acquiring the target animation material and the target scene material in a preset material set based on the video description information; combining the target scene material and the target animation material to obtain a target video; and setting the target video as the screen locking video after the target video is obtained, wherein the video description information is used for describing a scene of the screen locking video and/or an action of the digital image in the screen locking video, the video description information is determined based on current scene information, and the current scene information comprises at least one of a current user state, current user information, and user input information.
- 2. The method of claim 1, wherein before the acquiring of the target scene material and the target animation material in the preset material set, the method further comprises: constructing the material set, wherein the materials in the material set comprise at least one of the image animation and the scene image material.
- 3. The method of claim 2, wherein the constructing of the material set comprises: acquiring at least one material and a material description tag corresponding to the material, wherein the material description tag is used for identifying a scene or an action corresponding to the material; and constructing the material set based on the correspondence between the material and the material description tag.
- 4. The method of claim 3, wherein the acquiring of the at least one material and the material description tag corresponding to the material comprises: generating at least one scene image material according to a preset scene tag, and taking the scene tag as the material description tag corresponding to the scene image material, wherein the scene tag is used for identifying a scene corresponding to the scene image material; and generating at least one image animation according to a preset animation tag, and taking the animation tag as the material description tag corresponding to the image animation, wherein the animation tag is used for identifying an action in the image animation, or a scene in which the action occurs.
- 5. The method of claim 4, wherein generating at least one of the scene image materials according to a preset scene tag comprises: inputting at least one scene description text corresponding to the scene tag into a semantic model to obtain scene description features corresponding to the scene description text, wherein the scene description text is used for describing the scene identified by the scene tag; and inputting the scene description features into an image generation model to obtain the scene image material corresponding to the scene description features.
- 6. The method of claim 5, wherein before the inputting of the at least one scene description text corresponding to the scene tag into the semantic model, the method further comprises: setting at least one scene tag for each scene category; and determining at least one scene description text for each of the scene tags.
- 7. The method of claim 4, wherein after generating at least one of the scene image materials according to a preset scene tag, the method further comprises: eliminating the scene image material in a case where the scene image material does not match a preset screening rule, wherein the screening rule comprises that the scene image material matches the scene tag, and/or that the drawing naturalness of the scene image material reaches a preset condition.
- 8. The method of claim 4, wherein generating at least one image animation according to a preset animation tag comprises: designing an action sequence according to at least one animation description text corresponding to the animation tag, wherein the animation description text is used for describing the action, or the scene in which the action occurs, identified by the animation tag; capturing the motion data generated while a user moves according to the action sequence; and retargeting the motion data to the digital image, and adjusting the retargeted motion data to obtain the image animation.
- 9. The method of claim 8, wherein before the designing of the action sequence according to the at least one animation description text corresponding to the animation tag, the method further comprises: setting at least one animation tag corresponding to an action category, wherein the animation tag corresponding to the action category is used for identifying the action in the image animation; and/or setting at least one animation tag corresponding to an action occurrence scene category, wherein the animation tag corresponding to the action occurrence scene category is used for identifying the action occurrence scene corresponding to the image animation; and determining at least one animation description text for each animation tag.
- 10. The method of claim 9, wherein the determining of the at least one animation description text corresponding to the animation tag comprises: determining a role corresponding to the animation tag, wherein the role is a person role or an animal role; and in a case where there are a plurality of roles, determining interaction actions among the plurality of roles, and determining the animation description text corresponding to the animation tag according to the interaction actions.
- 11. The method of claim 3, wherein after the constructing of the material set, the method further comprises: adding material information of each material to the material set, wherein the material information comprises at least one of a material description tag, a material description text, an index, a default material identification bit, extension information, and a material version number corresponding to the material; the material description text corresponding to the scene image material is a scene description text, and the material description text corresponding to the image animation is an animation description text.
- 12. The method of claim 3, wherein after constructing the material set based on the correspondence between the material and the material description tag, the method further comprises: constructing at least one material subset based on the scene image materials and the image animations in the material set; and constructing a mapping relation table based on the correspondence between each material subset and the material description tags.
- 13. The method of claim 12, wherein the constructing of the mapping relation table based on the correspondence between each of the material subsets and the material description tags comprises: for each material subset, determining the correspondence between the material subset and the material description tag according to the correspondence between each material in the material subset and the material description tag; and constructing the mapping relation table according to the correspondence between the material subset and the material description tag.
- 14. The method according to claim 1, wherein the obtaining the target scene material and the target animation material in the preset material set includes: determining a material description tag corresponding to the video description information; and obtaining the target scene material and the target animation material corresponding to the material description tag from the material set based on a pre-constructed mapping relation table.
- 15. The method according to claim 14, wherein the obtaining of the target scene material and the target animation material corresponding to the material description tag from the material set based on the mapping relation table includes: determining a material subset corresponding to the material description tag as a target material subset based on the mapping relation table; and determining the target scene material and the target animation material in the target material subset.
- 16. The method of claim 15, wherein the determining of the target scene material and the target animation material in the target material subset comprises: acquiring a first target index and a second target index, wherein the first target index is used for distinguishing different scene image materials corresponding to the same scene tag, and the second target index is used for distinguishing different image animations corresponding to the same animation tag; and, in the material subset, determining the scene image material corresponding to the first target index as the target scene material, and determining the image animation corresponding to the second target index as the target animation material.
- 17. The method of claim 1, wherein the material set comprises a scene image material set and an animation material set, and wherein the obtaining the target scene material and the target animation material in the preset material set comprises: and acquiring the target scene material in the scene image material set, and acquiring the target animation material in the animation material set.
- 18. The method of claim 1, wherein the combining of the target scene material and the target animation material to obtain the target video comprises: sequentially superposing the target scene material and the target animation material according to a preset sequence to obtain the target video; and rendering each frame of image in the target video according to illumination change information and/or camera movement track information to obtain the rendered target video.
- 19. The method of claim 1, wherein before the acquiring of the target scene material and the target animation material in the preset material set, the method further comprises: acquiring the digital image in response to a video generation instruction; and editing the digital image in response to an editing instruction.
- 20. An electronic device, comprising a memory and a processor, the memory being coupled to the processor, the memory being configured to store computer program code, the computer program code comprising computer instructions that, when executed by the processor, cause the electronic device to perform the video generation method of any one of claims 1-19.
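The tag-indexed material set and mapping relation table of claims 12-16 can be sketched as follows. This is a minimal illustration only: the class names, fields, and sample material names (`MaterialSet`, `beach_bg.png`, and so on) are assumptions for the sketch and do not appear in the patent.

```python
# Minimal sketch of the material set, mapping relation table, and
# index-based lookup described in claims 12-16. All names here are
# illustrative assumptions; the patent does not specify an implementation.

from dataclasses import dataclass, field

@dataclass
class Material:
    name: str   # e.g. a path to a scene image or an image-animation clip
    tag: str    # material description tag (the scene or action it identifies)
    index: int  # distinguishes materials that share the same tag (claim 16)

@dataclass
class MaterialSet:
    materials: list = field(default_factory=list)
    mapping: dict = field(default_factory=dict)  # tag -> material subset (claims 12-13)

    def add(self, material: Material) -> None:
        self.materials.append(material)
        # The mapping relation table groups materials into subsets per tag.
        self.mapping.setdefault(material.tag, []).append(material)

    def lookup(self, tag: str, index: int) -> Material:
        # Claim 15: resolve the tag to a target material subset via the
        # mapping relation table; claim 16: pick one material by its index.
        subset = self.mapping[tag]
        return next(m for m in subset if m.index == index)

# Usage: build a small set and fetch a target scene material by tag + index.
ms = MaterialSet()
ms.add(Material("beach_bg.png", tag="beach", index=0))
ms.add(Material("beach_bg_sunset.png", tag="beach", index=1))
ms.add(Material("swim_anim.clip", tag="swim", index=0))

target = ms.lookup("beach", 1)
print(target.name)  # beach_bg_sunset.png
```

The index is what lets several interchangeable materials share one description tag, which is how claim 16 distinguishes multiple scene images (or animations) generated for the same scene.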
Description
Video generation method and electronic equipment

Technical Field

The present application relates to the field of terminal technologies, and in particular, to a video generation method and an electronic device.

Background

With the development of technology, electronic devices have become more widespread, and the security of electronic devices has grown in importance. Screen locking is an important function that improves the security of an electronic device and can be used to block unauthorized access. Specifically, when the electronic device is in a screen locking state, a screen locking interface is displayed on the screen of the electronic device, wherein the screen locking interface comprises a screen locking picture and basic information such as the date and time. The user can perform authority verification by inputting a password, a pattern, a fingerprint, facial recognition, or the like. Only if the authority verification passes can the user access the content or functions of the electronic device; otherwise, the screen keeps displaying the screen locking interface. The screen locking interface can be a natural-landscape screen locking interface, an abstract-art screen locking interface, a digital-image screen locking interface, or the like. The digital-image screen locking interface comprises a digital image in a cartoon or animation style; such digital images are usually colorful and lovely, and can provide a pleasant visual experience. However, a screen locking interface based on a digital image generally displays basic information such as the date and time on a still picture containing the digital image. The static picture makes the content of the screen locking interface fixed, and a dynamic effect cannot be realized, so the display effect is poor.
In addition, when editing elements in the screen locking interface based on the cartoon digital image, only the whole static picture can be replaced, and a change of a single element in the screen locking interface cannot be realized, which is not conducive to the personalized setting of the screen locking interface.

Disclosure of Invention

The application provides a video generation method and electronic equipment, which are used for solving the problems that a screen locking interface based on a digital image cannot realize a dynamic effect and that a single element in the screen locking interface cannot be changed. To achieve the above object, in a first aspect, an embodiment of the present application provides a video generation method, comprising: obtaining video description information, wherein the video description information is used for describing a scene of a target video and/or actions of digital images in the target video; obtaining target scene materials and target animation materials in a preset material set based on the video description information, wherein the target scene materials comprise scene image materials, the scene image materials comprise foreground image materials and/or background image materials, and the target animation materials comprise image animations in which the digital images move based on motion data; and combining the target scene materials and the target animation materials to obtain the target video.
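The combining step of the first aspect, in which scene materials and animation frames are superposed in a preset sequence (as also recited in claim 18), can be sketched as follows. Frames and layers are modeled as plain strings purely for illustration; a real implementation would composite image buffers, and all names here are assumptions for the sketch.

```python
# Hedged sketch of the combining step: target scene materials
# (background/foreground layers) and target animation frames are
# superposed per frame in a preset order to yield the target video.

def combine(background, animation_frames, foreground):
    """Superpose the layers for each frame in the preset order:
    background -> digital-image animation frame -> foreground."""
    video = []
    for frame in animation_frames:
        # Each output frame stacks the three decoupled materials; changing
        # a single element (e.g. the background) leaves the others intact.
        video.append((background, frame, foreground))
    return video

# Usage: three animation frames composited with one scene material pair.
frames = combine("beach_bg", ["pose_0", "pose_1", "pose_2"], "palm_fg")
print(len(frames))  # 3
```

Keeping each material as an independent layer is what allows the single-element replacement described below: swapping the background material regenerates the video without touching the animation or foreground.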
According to the video generation method of the application, video description information describing the screen locking video is first obtained; then, based on the video description information, foreground image materials, background image materials, and image animations comprising the digital image are selected from a preset material set comprising a plurality of materials, and these materials are combined to generate the target video. Because the video generated by the application adopts an image animation in which the digital image moves, a dynamic effect based on the digital image can be realized. In addition, the application realizes decoupled assembly of the materials in the video, the target video being obtained by combining a plurality of materials. Therefore, when a certain element in the screen locking interface needs to be changed, only the material corresponding to that element needs to be changed. Through the user-defined change of any material, the personalized setting of the screen locking interface is facilitated, and the playability and interest are enhanced. In one implementation, before the target scene material and the target animation material are acquired in the preset material set, the method further comprises the step of constructing the material set, wherein the materials in the material set comprise at least one of an image animation and a scene image material. With this implementation, a plurality of different materials can be generated in advance and a material set constructed. When the video is obtained by combining the materials, the pre-generated materials