WO-2026091968-A1 - INTERACTION PROCESSING METHODS AND APPARATUSES, ELECTRONIC DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT

WO 2026091968 A1

Abstract

Provided in the present application are interaction processing methods and apparatuses, an electronic device, a computer-readable storage medium, and a computer program product. A method comprises: displaying first information in an information flow interface, the first information comprising a first interaction control; in response to a first trigger operation for the first interaction control, displaying a first interaction interface, the first interaction interface being used for editing a special-effect animation; in response to a first editing operation in the first interaction interface, displaying a first special-effect animation formed by the editing; and in response to a second trigger operation for the first special-effect animation, playing back the first special-effect animation.

Inventors

  • LI, Shuyuan
  • FENG, Qiyao
  • SHI, Peng
  • ZHENG, Ziyue
  • LIU, Jia
  • YAO, Jizhuo
  • SUN, Sifan
  • LIU, Yongwen
  • LUO, Guangzheng
  • XIE, Zihao
  • HOU, Yitao
  • CAI, Qisi

Assignees

  • Tencent Technology (Shenzhen) Company Limited (腾讯科技(深圳)有限公司)

Dates

Publication Date
2026-05-07
Application Date
2025-09-19
Priority Date
2024-10-30

Claims (20)

  1. An interaction processing method, performed by an electronic device, the method comprising: displaying first information in an information flow interface, wherein the first information includes a first interactive control; in response to a first trigger operation on the first interactive control, displaying a first interactive interface, wherein the first interactive interface is used to edit special effects animations; in response to a first editing operation in the first interactive interface, displaying a first special effects animation generated by the editing; and in response to a second trigger operation on the first special effects animation, playing the first special effects animation.
  2. The method according to claim 1, further comprising, before displaying the first interactive interface: displaying a second special effects animation in the first information, wherein the second special effects animation is different from the first special effects animation and is any of the following types: a special effects animation pre-configured for the first information, or a special effects animation edited by a second object, the second object being an object different from a first object browsing the information flow interface.
  3. The method according to claim 2, wherein displaying the second special effects animation in the first information comprises: when the first information is information published by the second object, displaying in the first information the special effects animation edited by the second object; and when the first information is not information published by the second object, displaying in the first information the pre-configured special effects animation.
  4. The method according to claim 2, wherein displaying the second special effects animation in the first information comprises displaying the second special effects animation in any of the following ways: displaying at least one second special effects animation, wherein when there are multiple second special effects animations, at least some of them are played in an overlay manner and the key elements of each remain visible; displaying multiple second special effects animations one by one, wherein the content of each is different and the animations are ordered according to at least one of the following factors: order of publication, user interest, relevance to the first information, and frequency of interaction; or displaying one second special effects animation together with a second interactive control, and switching to display another second special effects animation in response to a trigger operation on the second interactive control.
  5. The method according to claim 4, wherein the second special effects animations played in an overlay manner are presented in any of the following ways: the second special effects animation in the upper layer has transparency, so that the second special effects animation in the lower layer shows through the upper layer; another second special effects animation is embedded within a frame of a second special effects animation, the area of the embedded animation being smaller than that of the host animation; or multiple second special effects animations are dynamically switched or merged according to preset rules.
  6. The method according to any one of claims 1 to 5, wherein the type of the first editing operation includes a selection operation, and wherein, in response to the first editing operation in the first interactive interface, displaying the first special effects animation generated by the editing comprises: displaying multiple first animation templates in the first interactive interface; and in response to a selection operation on any one of the first animation templates, displaying a first special effects animation generated based on a target animation template, wherein the target animation template is the selected first animation template.
  7. The method according to claim 6, wherein the first information is promotional information and each first animation template includes recommended material related to the promotional information, and wherein displaying the first special effects animation generated based on the target animation template comprises: displaying a first special effects animation that contains the recommended material and is generated based on the target animation template.
  8. The method according to claim 6 or 7, further comprising, before displaying the first special effects animation generated by the editing, determining the first animation templates by at least one of the following methods: using a second animation template selected by a second account as a first animation template, wherein the second account is an account that has a social relationship with a first account browsing the information flow interface; using the animation template selected the most times as a first animation template; determining a first similarity between information features of the first information and semantic features of candidate animation templates, and using the candidate animation template with the highest first similarity as a first animation template; or determining a second similarity between first account features of the first account and semantic features of candidate animation templates, and using the candidate animation template with the highest second similarity as a first animation template.
  9. The method according to any one of claims 1 to 8, further comprising, before displaying the first special effects animation generated by the editing: in response to an input operation on the first interactive interface, displaying a first prompt word that has been entered; searching for a reference image based on the first prompt word, and invoking a pre-trained text-to-image model based on the reference image and the first prompt word to generate multiple first frame images; and arranging the multiple first frame images in generation order and generating the first special effects animation based on the arranged first frame images.
  10. The method according to any one of claims 1 to 8, further comprising, before displaying the first special effects animation generated by the editing: in response to a selection operation on information in the information flow interface, extracting key elements from the selected information; invoking a pre-trained text generation model to perform text generation processing based on the key elements, thereby obtaining a second prompt word; invoking a pre-trained text-to-image model based on the second prompt word to generate multiple second frame images; and arranging the multiple second frame images in generation order and generating the first special effects animation based on the arranged second frame images.
  11. The method according to claim 10, wherein the key elements include at least one of keywords and key images, and the key elements are determined by at least one of the following methods: invoking a pre-trained deep learning model to extract features from the selected information to obtain information features; invoking the deep learning model to perform image segmentation processing on the information features to obtain the key images in the selected information; and invoking the deep learning model to perform semantic understanding processing on the information features to obtain the keywords in the selected information.
  12. The method according to any one of claims 1 to 5, wherein the first editing operation further includes an input operation, and wherein, in response to the first editing operation in the first interactive interface, displaying the first special effects animation generated by the editing comprises: displaying an input control in the first interactive interface; and in response to an input operation on the input control, playing input content corresponding to the input operation in the first special effects animation, wherein the type of the input content includes text, images, audio, and video.
  13. The method according to any one of claims 1 to 12, further comprising, before displaying the first special effects animation generated by the editing: displaying preview information of the first special effects animation, wherein the preview information is used to present the first special effects animation and includes at least one of the following: a scene in the first special effects animation and keyframes of the first special effects animation.
  14. The method according to any one of claims 1 to 13, wherein playing the first special effects animation comprises playing it in any of the following ways: playing the first special effects animation in a floating window above the first information; playing the first special effects animation in a pre-configured area of the first information; or playing the first special effects animation in an interface other than the information flow interface.
  15. The method according to any one of claims 1 to 14, further comprising, after playing the first special effects animation: displaying a sharing control in the first information; and in response to a trigger operation on the sharing control, sending the first information carrying the first special effects animation to a terminal device of a second object, wherein the second object is an object different from a first object browsing the information flow interface.
  16. The method according to claim 15, wherein sending the first information carrying the first special effects animation to the terminal device of the second object comprises sending it in any of the following ways: publishing second information in an information flow interface of the second object, wherein the second information is forwarding information generated based on the first information and the first special effects animation; displaying in the first information the object name of the second object and a pointing symbol associated with the object name, wherein the pointing symbol represents sending prompt information of the sharing message to the terminal device of the second object, the prompt information being displayed in a prompt information list of the information flow interface corresponding to the second object; or displaying the shared information in a chat interface corresponding to the second object.
  17. The method according to claim 15 or 16, wherein, when there are multiple second objects and multiple first special effects animations, the method further comprises, before sending the first information carrying the first special effects animation to the terminal devices of the second objects, determining the target first special effects animation carried by each piece of first information by at least one of the following methods: in response to a selection operation on any one of the first special effects animations, using the selected first special effects animation as the target first special effects animation, wherein the target first special effects animation carried by each piece of first information is the same; when the number of first special effects animations is greater than the number of second objects, determining the target first special effects animation corresponding to each piece of first information according to a mapping relationship between an editing order and an object selection order, wherein the editing order is the order in which the first special effects animations were edited and the object selection order is the order in which the second objects were selected; and when the number of first special effects animations is less than the number of second objects, extracting from the multiple first special effects animations the target first special effects animation carried by each piece of first information.
  18. The method according to claim 15 or 16, wherein, when there are multiple second objects and multiple first special effects animations, the method further comprises, before sending the first information carrying the first special effects animation to the terminal devices of the second objects: invoking a deep learning model to extract features from each second object to obtain second object features, wherein the deep learning model is trained using sample object data and sample special effects animation data; invoking the deep learning model to extract features from each first special effects animation to obtain animation features; determining a third similarity between each second object feature and each animation feature; and for each piece of first information to be sent to a second object, using the first special effects animation corresponding to the maximum third similarity of that second object's features as the target first special effects animation carried by the first information to be sent.
  19. The method according to any one of claims 12 to 18, wherein the first information is used to enable the terminal device of the second object to perform the following processing: displaying the first information, wherein the first information includes the first interactive control; in response to a trigger operation on the first interactive control, displaying a third special effects animation; and in response to a trigger operation on the sharing control in the first information, sending the first information carrying the third special effects animation to a terminal device of a third object, wherein the third object is an object different from the first object and the second object.
  20. The method according to any one of claims 12 to 18, wherein the first information is used to enable the terminal device of the second object to perform the following processing: displaying the first information, wherein the first information includes the first interactive control; in response to a trigger operation on the first interactive control, displaying a second interactive interface; in response to a second editing operation in the second interactive interface, displaying a fourth special effects animation generated by the editing; and in response to a trigger operation on the sharing control in the first information, sending the first information carrying the fourth special effects animation to a terminal device of a third object, wherein the third object is an object different from the first object and the second object.
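The template-selection logic of claim 8 (and the animation-to-recipient matching of claim 18) reduces to an argmax over a similarity score between feature vectors. The claims do not specify the similarity metric or any function names; the sketch below assumes cosine similarity and uses entirely hypothetical names (`cosine_similarity`, `select_template`) to illustrate the "highest first similarity" selection only:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors.

    Assumed metric: the patent only says 'similarity' without naming one.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def select_template(info_features, candidates):
    """Pick the candidate animation template whose semantic features are
    most similar to the information features (claim 8's 'first similarity').

    candidates: list of {"name": str, "features": list[float]} dicts
    (a hypothetical representation; the claims leave the data layout open).
    """
    best = max(candidates,
               key=lambda c: cosine_similarity(info_features, c["features"]))
    return best["name"]
```

Claim 18's per-recipient matching follows the same pattern, with the argmax taken over animation features for each second object's feature vector.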
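Claims 9 and 10 both describe the same assembly step: generate multiple frame images from a prompt word via a pre-trained text-to-image model, then arrange them in generation order to form the special effects animation. A minimal sketch of that ordering step, with a stub standing in for the model call (`generate_frame`, `build_animation`, and `Animation` are all hypothetical names; a real system would invoke an actual text-to-image model):

```python
from dataclasses import dataclass, field

def generate_frame(prompt: str, index: int) -> str:
    """Stub for the pre-trained text-to-image model call in claims 9/10.

    Returns a placeholder frame identifier instead of an actual image.
    """
    return f"frame({prompt!r}, {index})"

@dataclass
class Animation:
    """Ordered frame sequence forming the first special effects animation."""
    frames: list = field(default_factory=list)

def build_animation(prompt: str, num_frames: int) -> Animation:
    """Generate frames from a prompt word and arrange them in generation
    order, mirroring the assembly step shared by claims 9 and 10."""
    frames = [generate_frame(prompt, i) for i in range(num_frames)]
    return Animation(frames=frames)
```

The only behavior the claims pin down here is the ordering: frames are kept in the sequence the model produced them.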

Description

Interaction Processing Methods and Apparatuses, Electronic Device, Computer-Readable Storage Medium, and Computer Program Product

Cross-Reference to Related Applications

This application is based on and claims priority to Chinese Patent Application No. 2024115407461, filed on October 30, 2024, the entire contents of which are incorporated herein by reference.

Technical Field

This application relates to the field of computer technology, and in particular to an interaction processing method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product.

Background

In related technologies, interaction methods in information flow interfaces include, but are not limited to, liking, commenting, and forwarding. These methods are relatively limited, and information exchange in information flow interfaces is mostly confined to static content such as text and images. Some complex interactive functions require the installation of specific applications, so their triggering methods are limited. Furthermore, for users unfamiliar with new or advanced technologies, complex triggering methods may create an entry barrier and reduce interaction efficiency. At the same time, the fixed triggering logic of interactive controls in related technologies makes it difficult to support complex interaction processes. Introducing complex dynamic content without a unified resource management mechanism can lead to uneven resource load and resource contention during playback, causing interface stuttering, large frame-rate fluctuations, prolonged loading times, and increased device power consumption. There is currently no good way to enrich the forms of interaction in the information flow interface and improve interaction efficiency.
Summary of the Invention

This application provides an interaction processing method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product, which can enrich the forms of interaction in the information flow interface and improve interaction efficiency. The technical solutions of the embodiments of this application are implemented as follows.

This application provides an interaction processing method, executed by an electronic device, the method comprising: displaying first information in an information flow interface, wherein the first information includes a first interactive control; in response to a first trigger operation on the first interactive control, displaying a first interactive interface, wherein the first interactive interface is used to edit special effects animations; in response to a first editing operation in the first interactive interface, displaying a first special effects animation generated by the editing; and in response to a second trigger operation on the first special effects animation, playing the first special effects animation.

This application also provides an interaction processing method, executed by an electronic device, the method comprising: displaying first information in an information flow interface, wherein the first information includes a first interactive control and a free area; and in response to a first trigger operation on the first interactive control, displaying a first special effects animation carrying recommended material in the free area.
This application provides an interaction processing apparatus, including: a display module configured to display first information in an information flow interface, wherein the first information includes a first interactive control; the display module being further configured to display a first interactive interface in response to a first trigger operation on the first interactive control, wherein the first interactive interface is used for editing special effects animations; the display module being further configured to display a first special effects animation generated by editing in response to a first editing operation in the first interactive interface; and the display module being further configured to play the first special effects animation in response to a second trigger operation on the first special effects animation.

This application also provides an interaction processing apparatus, the apparatus comprising: a display module configured to display first information in an information flow interface, wherein the first information includes a first interactive control and a free area; the display module being further configured to display, in the free area, a first special effects animation carrying recommended material in response to a first trigger operation on the first interactive control.

This application provides an electronic device, the electronic device comprising: a memory for storing executable instructions or computer programs; and a processor which, when executing the computer-executable instructions or computer programs stored in the memory, implements the interaction processing method pr