US-12625903-B1 - Method for presenting interactive information, electronic device and storage medium

US 12625903 B1

Abstract

A method for presenting interactive information, an electronic device, and a non-transitory computer-readable storage medium are provided. The method includes: receiving query information input by a user; identifying, based on the query information and personalized information corresponding to the user, a query intention corresponding to the query information; determining a scene breakdown corresponding to the query information, and recalling, based on the query intention and the scene breakdown, a target object corresponding to the query information and associated objects associated with the target object; combining the target object and the associated objects according to a combination manner of the target object and the associated objects, to obtain a recall object information set corresponding to the query information; and generating multiple multimedia contents corresponding to the recall object information set using a pre-trained generative model, and presenting the multiple multimedia contents in an information display interface.
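The abstract above describes a five-step pipeline: identify the query intention, build a scene breakdown, recall a target object with associated objects, combine them into a recall object information set, and render multimedia contents. The following is a purely illustrative sketch of that flow; every function, rule, and data structure below is a hypothetical stand-in and not the patent's implementation.

```python
# Purely illustrative sketch of the claimed pipeline; all names and
# toy rules here are assumptions, not the patent's method.

def identify_intent(query, profile):
    # The patent uses the query plus personalized information; a
    # keyword check stands in for that step here.
    return "shopping" if "buy" in query else "browse"

def build_scene_breakdown(query, intent):
    # Identify entity words (toy rule: capitalized tokens) and build a
    # usage scenario from the intent and the entities.
    entities = [w for w in query.split() if w.istitle()]
    return {"intent": intent, "entities": entities}

def recall_objects(scene, candidates):
    # Recall a target object plus associated objects from candidates.
    target, associated = candidates[0], candidates[1:]
    return target, associated

def combine_objects(target, associated):
    # Combine the target with each associated object to form the
    # recall object information set.
    return [{"target": target, "associated": a} for a in associated]

def run_pipeline(query, profile, candidates):
    intent = identify_intent(query, profile)
    scene = build_scene_breakdown(query, intent)
    target, associated = recall_objects(scene, candidates)
    recall_set = combine_objects(target, associated)
    # A pre-trained generative model would render each entry as
    # multimedia content; a text placeholder stands in here.
    return [f"[{intent}] {e['target']} + {e['associated']}" for e in recall_set]

print(run_pipeline("buy Nike sneakers", {"preference": "casual"},
                   ["sneakers", "socks", "insoles"]))
```

Note that the output is a set of combinations (target plus each associated object) rather than a ranked list of isolated results, which is the distinction the disclosure draws against keyword-only search.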

Inventors

  • Ning Hu
  • Zhenyu Lu

Assignees

  • SERENDIPITY ONE INC

Dates

Publication Date
2026-05-12
Application Date
2024-12-03

Claims (16)

  1. A method for presenting interactive information, comprising: receiving query information input by a user; identifying, based on the query information and personalized information corresponding to the user, a query intention corresponding to the query information; determining a scene breakdown corresponding to the query information, and recalling, based on the query intention and the scene breakdown, a target object corresponding to the query information and associated objects associated with the target object; combining the target object and the associated objects according to a combination manner of the target object and the associated objects, to obtain a recall object information set corresponding to the query information; generating multiple multimedia contents corresponding to the recall object information set using a pre-trained generative model, and presenting the multiple multimedia contents in an information display interface; wherein the determining a scene breakdown corresponding to the query information comprises: identifying an entity word contained in the query information; and constructing the scene breakdown corresponding to the query information based on the query intention and the entity word, wherein the scene breakdown is used to construct a usage scenario; and wherein the recalling, based on the query intention and the scene breakdown, a target object corresponding to the query information and associated objects associated with the target object comprises: obtaining candidate objects corresponding to the query intention; and recalling the target object corresponding to the query information as well as the associated objects associated with the target object based on the personalized information, the scene breakdown, and object information of the candidate objects.
  2. The method according to claim 1, wherein the identifying, based on the query information and personalized information, a query intention corresponding to the query information comprises: obtaining a query feature corresponding to the query information; extracting personalized features associated with user intentions from the personalized information; integrating the personalized features into the query feature to obtain an enhanced query feature; and identifying the query intention corresponding to the query information based on the enhanced query feature.
  3. The method according to claim 2, wherein the identifying the query intention corresponding to the query information based on the enhanced query feature comprises: performing intent recognition processing on the enhanced query feature using a pre-trained intent recognition model, to obtain an intent recognition result corresponding to the enhanced query feature; and determining, based on real-time user data contained in the personalized information, the query intention corresponding to the query information from the intent recognition result.
  4. The method according to claim 3, wherein the query information comprises query text and a query image; and the obtaining a query feature corresponding to the query information comprises: extracting the query text and the query image from the query information; and processing the query text and the query image using a preset processing model to obtain the query feature corresponding to the query information.
  5. The method according to claim 1, wherein the combining the target object and the associated objects according to a combination manner of the target object and the associated objects, to obtain a recall object information set corresponding to the query information, comprises: obtaining a first object feature corresponding to the target object and second object features corresponding to the associated objects, wherein the first object feature comprises attributes and information related to the target object, and the second object features comprise attributes and information related to the associated objects; and combining, based on the personalized information and the combination manner of the target object and the associated objects, the first object feature and the second object features to obtain the recall object information set corresponding to the query information.
  6. The method according to claim 5, wherein the combining, based on the personalized information and the combination manner of the target object and the associated objects, the first object feature and the second object features to obtain the recall object information set corresponding to the query information, comprises: combining the first object feature and the second object features based on the combination manner of the target object and the associated objects, to obtain a feature combination set of the first object feature and the second object features; determining a feature combination preference corresponding to the personalized information from the feature combination set; determining a display weight for each feature combination in the feature combination set based on the feature combination preference and the query intention; and generating the recall object information set corresponding to the query information based on the display weights and the feature combination set.
  7. The method according to claim 1, further comprising: in response to a user's element addition operation, determining a visual element corresponding to the element addition operation; and generating a personalized element library corresponding to the user based on the visual element.
  8. The method according to claim 1, further comprising: obtaining multimedia materials corresponding to an element splicing operation; generating personalized materials; and splicing, based on the element splicing operation, the personalized materials and the multimedia materials to obtain a personalized spliced multimedia element.
  9. The method according to claim 1, further comprising: in response to an embedding operation for the target object, presenting an embedded object corresponding to the embedding operation; and embedding visual elements corresponding to the target object into the embedded object.
  10. The method according to claim 9, wherein after embedding the visual elements corresponding to the target object into the embedded object, the method further comprises: dynamically presenting the embedded object that contains the visual elements corresponding to the target object.
  11. The method according to claim 1, further comprising: presenting comparison information between each multimedia content in the recall object information set.
  12. An electronic device, comprising: at least one processor; and a memory, coupled to the at least one processor and storing computer executable instructions thereon, which, when executed by the at least one processor, cause the at least one processor to: receive query information input by a user; identify, based on the query information and personalized information corresponding to the user, a query intention corresponding to the query information; determine a scene breakdown corresponding to the query information, and recall, based on the query intention and the scene breakdown, a target object corresponding to the query information and associated objects associated with the target object; wherein to determine the scene breakdown corresponding to the query information, the at least one processor is caused to identify an entity word contained in the query information, and construct the scene breakdown corresponding to the query information based on the query intention and the entity word, wherein the scene breakdown is used to construct a usage scenario; wherein to recall the target object and the associated objects, the at least one processor is caused to obtain candidate objects corresponding to the query intention and recall the target object corresponding to the query information as well as the associated objects associated with the target object, based on the personalized information, the scene breakdown, and object information of the candidate objects; combine the target object and the associated objects according to a combination manner of the target object and the associated objects, to obtain a recall object information set corresponding to the query information; and generate multiple multimedia contents corresponding to the recall object information set using a pre-trained generative model, and present the multiple multimedia contents in an information display interface.
  13. The electronic device according to claim 12, wherein the at least one processor caused to identify, based on the query information and personalized information, a query intention corresponding to the query information is caused to: obtain a query feature corresponding to the query information; extract personalized features associated with user intentions from the personalized information; integrate the personalized features into the query feature to obtain an enhanced query feature; and identify the query intention corresponding to the query information based on the enhanced query feature.
  14. The electronic device according to claim 12, wherein the at least one processor is further caused to: in response to a user's element addition operation, determine a visual element corresponding to the element addition operation; and generate a personalized element library corresponding to the user based on the visual element.
  15. The electronic device according to claim 12, wherein the at least one processor is further caused to: in response to an embedding operation for the target object, present an embedded object corresponding to the embedding operation; and embed visual elements corresponding to the target object into the embedded object.
  16. A non-transitory computer-readable storage medium storing computer programs which, when executed by a processor, cause the processor to: receive query information input by a user; identify, based on the query information and personalized information corresponding to the user, a query intention corresponding to the query information; determine a scene breakdown corresponding to the query information, and recall, based on the query intention and the scene breakdown, a target object corresponding to the query information and associated objects associated with the target object; wherein to determine the scene breakdown corresponding to the query information, the processor is caused to identify an entity word contained in the query information, and construct the scene breakdown corresponding to the query information based on the query intention and the entity word, wherein the scene breakdown is used to construct a usage scenario; wherein to recall the target object and the associated objects, the processor is caused to obtain candidate objects corresponding to the query intention and recall the target object corresponding to the query information as well as the associated objects associated with the target object, based on the personalized information, the scene breakdown, and object information of the candidate objects; combine the target object and the associated objects according to a combination manner of the target object and the associated objects, to obtain a recall object information set corresponding to the query information; and generate multiple multimedia contents corresponding to the recall object information set using a pre-trained generative model, and present the multiple multimedia contents in an information display interface.
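Claims 2-4 describe building a query feature, integrating personalized features into it to obtain an "enhanced query feature", and classifying the intent with a pre-trained model. The toy sketch below illustrates that flow; the keyword vocabulary, feature layout, and classifier are all assumptions standing in for the trained models the claims recite.

```python
# Toy illustration of claims 2-4: query feature + personalized
# features -> enhanced query feature -> intent classification.
# VOCAB and all scoring rules are hypothetical.

VOCAB = ["gift", "outfit", "travel"]

def query_feature(text):
    # Crude bag-of-keywords feature; a real system would use a trained
    # encoder over query text (and, per claim 4, a query image).
    return [1.0 if k in text else 0.0 for k in VOCAB]

def personalized_feature(profile):
    # Features extracted from personalized information (claim 2).
    interests = profile.get("interests", [])
    return [0.5 if k in interests else 0.0 for k in VOCAB]

def enhance(qf, pf):
    # Integrate the personalized features into the query feature.
    return [q + p for q, p in zip(qf, pf)]

def classify_intent(feature):
    # Stand-in for the pre-trained intent recognition model (claim 3).
    best = max(range(len(VOCAB)), key=lambda i: feature[i])
    return VOCAB[best]

profile = {"interests": ["travel"]}
enhanced = enhance(query_feature("weekend trip ideas"), personalized_feature(profile))
print(classify_intent(enhanced))
```

Here the raw query matches no vocabulary keyword, so the query feature alone is uninformative; the personalized signal is what resolves the intent, which is the point of the enhancement step.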

Description

TECHNICAL FIELD

The present disclosure relates to the field of Internet technologies, and in particular, to a method for presenting interactive information, an electronic device and a storage medium.

BACKGROUND

With the development of network technologies and the popularization of intelligent terminal devices, people are increasingly accustomed to using various search platforms to search for information, for example, commodity information, educational resources, service resources, fashion matching, gift recommendations, etc.

Existing search engines typically return a large number of web links and pieces of information, which forces users to spend a lot of time reading and filtering relevant content. In addition, existing search engines often cannot fully understand the user's context and exact needs, resulting in search results that are independent of one another and lack mutual relevance.

Additionally, existing search platforms rely on keywords contained in the user-input query information, aiming to find the most relevant single result and to display related search results ranked by relevance. However, the contextual and vague needs of many users are difficult to express accurately through simple keywords, and what users actually need is a comprehensive solution, not just a few related individual search results. Therefore, existing search platforms are unable to meet the demand for comprehensive solutions to complex problems.

SUMMARY

The present disclosure provides a method for presenting interactive information, an electronic device and a storage medium, which can meet users' needs for comprehensive solutions to complex problems and improve the comprehensiveness of interactive information presentation.
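Claim 6 above details the combination step: enumerate feature combinations of the target object with the associated objects, assign each a display weight derived from the user's feature combination preference and the query intention, and build the recall object information set from the weighted combinations. A hypothetical sketch of that weighting, with illustrative scoring constants that are not taken from the patent:

```python
# Hypothetical sketch of claim 6's weighting step. The 0.6/0.4 scoring
# constants and the preference/intent checks are illustrative only.
from itertools import product

def display_weight(target, assoc, preference, intent):
    weight = 0.0
    if assoc in preference:       # pairing the user has favored before
        weight += 0.6
    if intent in target["tags"]:  # target matches the query intention
        weight += 0.4
    return weight

def build_recall_set(targets, associated, preference, intent):
    # Feature combination set: every (target, associated) pairing,
    # ranked by its display weight.
    combos = product(targets, associated)
    return sorted(
        ((display_weight(t, a, preference, intent), t["name"], a)
         for t, a in combos),
        reverse=True,
    )

ranked = build_recall_set(
    targets=[{"name": "jacket", "tags": ["outdoor"]}],
    associated=["scarf", "boots"],
    preference={"scarf"},
    intent="outdoor",
)
print(ranked)
```

The highest-weighted combinations would then be handed to the generative model for rendering, so the display weight effectively decides which pairings the user sees first.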
According to a first aspect, the present disclosure provides a method for presenting interactive information, which includes: receiving query information input by a user; identifying, based on the query information and personalized information corresponding to the user, a query intention corresponding to the query information; determining a scene breakdown corresponding to the query information, and recalling, based on the query intention and the scene breakdown, a target object corresponding to the query information and associated objects associated with the target object; combining the target object and the associated objects according to a combination manner of the target object and the associated objects, to obtain a recall object information set corresponding to the query information; and generating multiple multimedia contents corresponding to the recall object information set using a pre-trained generative model, and presenting the multiple multimedia contents in an information display interface.

According to a second aspect, the present disclosure provides an electronic device, which includes at least one processor and a memory. The memory is coupled to the at least one processor and stores computer executable instructions thereon, which, when executed by the at least one processor, cause the at least one processor to perform the aforementioned method for presenting interactive information.

According to a third aspect, the present disclosure provides a non-transitory computer-readable storage medium storing computer programs which, when executed by a processor, cause the processor to perform the aforementioned method for presenting interactive information.

BRIEF DESCRIPTION OF THE DRAWINGS

To illustrate the technical solutions in the embodiments of the present disclosure more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments.
Apparently, the accompanying drawings described below are merely some embodiments of the present disclosure, and those skilled in the art may derive other drawings from these accompanying drawings without creative efforts.

FIG. 1 is a schematic flowchart of a method for presenting interactive information according to some embodiments of the present disclosure.
FIG. 2 is a schematic diagram of a front-end interactive interface according to some embodiments of the present disclosure.
FIG. 3 is another schematic diagram of a front-end interactive interface according to some embodiments of the present disclosure.
FIG. 4 is a schematic diagram of an information display interface according to some embodiments of the present disclosure.
FIG. 5 is another schematic diagram of the information display interface according to some embodiments of the present disclosure.
FIG. 6 is yet another schematic diagram of the information display interface according to some embodiments of the present disclosure.
FIG. 7 is a schematic diagram of an object information interface according to some embodiments of the present disclosure.
FIG. 8 is another schematic flowchart of a method for presenting interactive information according to some embodiments of the present disclosure.
FIG. 9 is a schematic structural diagram of an electronic device according to some