CN-122001841-A - Non-transitory machine-readable storage medium, method and apparatus for chat management

CN122001841A

Abstract

The present disclosure relates to non-transitory machine-readable storage media, methods, and apparatuses for chat management. A computer-readable medium comprising computer-readable instructions is provided; when executed by a computer, the instructions cause the computer to implement a method. According to the method, context information for a plurality of users in a conversation over a period of time is generated based on messages from the plurality of users. The context information for the plurality of users is then sent to a first Artificial Intelligence (AI) language model as input for training the AI language model, and a request is sent to the first AI language model, wherein the request requires a response associated with the context information.

Inventors

  • ROBERT VAUGHN

Assignees

  • Intel Corporation

Dates

Publication Date
2026-05-08
Application Date
2025-09-30
Priority Date
2024-11-04

Claims (20)

  1. A computer-readable medium comprising computer-readable instructions which, when executed, implement a method comprising: generating context information for a plurality of users in a conversation over a period of time based on messages from the plurality of users; sending the context information of the plurality of users to a first Artificial Intelligence (AI) language model of the conversation, wherein the context information is sent as input for training the first AI language model; and, after sending the context information of the plurality of users, sending a request to the first AI language model, wherein the request requires a response associated with the context information.
  2. The computer-readable medium of claim 1, wherein the method further comprises: receiving a first request for a first user to initiate the conversation; assigning a first token corresponding to the conversation to the first user; receiving a second request for a second user to join the conversation, wherein the second request includes a second token; and determining, based on the second token included in the second request, whether to allow the second user to join the conversation, wherein the first user and the second user are users of the plurality of users.
  3. The computer-readable medium of claim 1 or 2, wherein the method further comprises: receiving a response corresponding to the request from the first AI language model; and sending the response to each of the plurality of users.
  4. The computer-readable medium of claim 1, wherein the context information of the plurality of users is generated based on the chronological order in which messages from the plurality of users are received and/or the meaning of the messages from the plurality of users, and wherein the response associated with the context information is edited information based on the context information.
  5. The computer-readable medium of claim 2, wherein each of the first token and the second token includes a session identifier portion and a participant identifier portion.
  6. The computer-readable medium of claim 2 or 5, wherein each of the first token and the second token is encrypted, and wherein each token includes encrypted metadata for decrypting the token.
  7. The computer-readable medium of claim 1, wherein the period of time begins at a start point of the conversation or at a point in time between the start point and an end point of the conversation.
  8. The computer-readable medium of claim 1, wherein the method further comprises: establishing a plurality of sub-sessions coupling the plurality of users with a plurality of dedicated AI language model instances, respectively.
  9. The computer-readable medium of claim 8, wherein the method further comprises: sending private context information for each of the plurality of users to a respective dedicated AI language model instance via a respective sub-session of the plurality of sub-sessions.
  10. The computer-readable medium of claim 9, wherein the method further comprises: receiving a plurality of responses sent by the plurality of dedicated AI language model instances over the plurality of sub-sessions; and importing the plurality of responses into a common session holding chat messages available to the plurality of users in the conversation.
  11. The computer-readable medium of claim 10, wherein importing the plurality of responses into the common session comprises: cleaning the plurality of responses; and combining the cleaned responses with public messages from the users in the common session.
  12. The computer-readable medium of any of claims 9 to 11, wherein the method further comprises: determining, based on semantic analysis, the private context information of the plurality of users from overall information sent by the plurality of users.
  13. The computer-readable medium of claim 1, wherein the method further comprises: allocating a shared memory for common context information from said plurality of users in said conversation; and allocating a plurality of private memories for the plurality of users in the conversation, respectively, for storing private context information of the plurality of users.
  14. The computer-readable medium of claim 13, wherein the method further comprises: receiving an access request from the first AI language model for accessing a first private memory of a first user; determining that the access request is to generate a response to the first user's request; and allowing access to the first private memory based on the determination.
  15. A method, comprising: generating context information for a plurality of users in a conversation over a period of time based on messages from the plurality of users; sending the context information of the plurality of users to a first Artificial Intelligence (AI) language model of the conversation, wherein the context information is used as input for training the first AI language model; and, after sending the context information, sending a request to the first AI language model, wherein the request requires a response associated with the context information.
  16. The method of claim 15, further comprising: receiving a first request for a first user to initiate the conversation; assigning a first token corresponding to the conversation to the first user; receiving a second request for a second user to join the conversation, wherein the second request includes a second token; and determining, based on the second token included in the second request, whether to allow the second user to join the conversation, wherein the first user and the second user are users of the plurality of users.
  17. An apparatus, comprising: an interface; and a processing circuit, wherein the processing circuit is configured with a trusted execution environment for executing machine-readable instructions within the trusted execution environment to: generate context information for a plurality of users in a conversation over a period of time based on messages from the plurality of users; send the context information of the plurality of users to a first Artificial Intelligence (AI) language model of the conversation, wherein the context information is sent as input for training the first AI language model; and, after sending the context information of the plurality of users, send a request to the first AI language model, wherein the request requires a response associated with the context information.
  18. The apparatus of claim 17, wherein the processing circuit is further configured to: receive a first request for a first user to initiate the conversation; assign a first token corresponding to the conversation to the first user; receive a second request for a second user to join the conversation, wherein the second request includes a second token; and determine, based on the second token included in the second request, whether to allow the second user to join the conversation, wherein the first user and the second user are users of the plurality of users.
  19. The apparatus of claim 17 or 18, wherein the processing circuit is further configured to: receive a response corresponding to the request from the first AI language model; and send the response to each of the plurality of users.
  20. The apparatus of claim 17, wherein the context information of the plurality of users is generated based on the chronological order in which messages from the plurality of users are received and/or the meaning of the messages from the plurality of users, and wherein the response associated with the context information is edited information based on the context information.
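The token handling recited in claims 2, 5, and 6 can be illustrated with a minimal, hypothetical sketch. The function names, the dot-separated token layout, and the use of plain hex identifiers are illustrative assumptions and do not come from the disclosure (which further requires the tokens to be encrypted; encryption is omitted here for brevity):

```python
# Hypothetical sketch of the session tokens in claims 2 and 5: each token has
# a session-identifier portion and a participant-identifier portion, and a
# user is admitted to a conversation only if the session portion matches.
import secrets


def new_session_token(session_id: str) -> str:
    """Issue a token binding a fresh participant id to a conversation (claim 2)."""
    participant_id = secrets.token_hex(4)
    # Session part + participant part, per claim 5 (claim 6 would additionally
    # encrypt the token; that step is omitted in this sketch).
    return f"{session_id}.{participant_id}"


def may_join(token: str, session_id: str) -> bool:
    """Decide whether a second user's token admits them to the conversation."""
    session_part, _, participant_part = token.partition(".")
    return session_part == session_id and bool(participant_part)


first = new_session_token("conv-42")    # first user initiates the conversation
assert may_join(first, "conv-42")       # second user presents a matching token
assert not may_join("other.ab", "conv-42")  # mismatched session part is rejected
```

In this reading, the session portion gates admission while the participant portion distinguishes the plurality of users within one conversation.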

Description

Non-transitory machine-readable storage medium, method and apparatus for chat management

Technical Field

The present disclosure relates to non-transitory machine-readable storage media, methods, and apparatuses for chat management.

Background

In a scenario where multiple users are integrated into one chat session, some language models either lack the ability to facilitate multi-user interactions within a single session or support such interactions only weakly, which may limit their potential for a collaborative, dynamic, and context-rich communication experience.

Disclosure of Invention

An aspect of the disclosure provides a computer-readable medium comprising computer-readable instructions that, when executed, implement a method comprising: generating context information for a plurality of users in a conversation over a period of time based on messages from the plurality of users; sending the context information for the plurality of users to a first Artificial Intelligence (AI) language model of the conversation, wherein the context information is sent as input for training the first AI language model; and, after sending the context information for the plurality of users, sending a request to the first AI language model, wherein the request requires a response associated with the context information.

An aspect of the present disclosure provides a method comprising: generating context information for a plurality of users in a conversation over a period of time based on messages from the plurality of users; sending the context information for the plurality of users to a first Artificial Intelligence (AI) language model of the conversation, wherein the context information is used as input for training the first AI language model; and, after sending the context information, sending a request to the first AI language model, wherein the request requires a response associated with the context information.
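The claimed flow — build per-user context over a time window, feed it to the language model, and only then issue the request — can be sketched as follows. The `Message`, `build_context`, and `ChatModel` names are hypothetical stand-ins invented for illustration; the disclosure does not specify any particular interface, and the chronological ordering follows claim 4:

```python
# Minimal sketch of the claimed method:
# (1) generate context information for a plurality of users over a period of
#     time, (2) send it to the AI language model as training/conditioning
#     input, and (3) afterwards send a request requiring a related response.
from dataclasses import dataclass, field


@dataclass
class Message:
    user: str
    text: str
    ts: float  # receive time, used for chronological ordering (claim 4)


def build_context(messages, start, end):
    """Context information for a plurality of users over a period of time."""
    window = sorted((m for m in messages if start <= m.ts <= end),
                    key=lambda m: m.ts)
    return [f"{m.user}: {m.text}" for m in window]


@dataclass
class ChatModel:  # hypothetical stand-in for the first AI language model
    context: list = field(default_factory=list)

    def train(self, ctx):
        self.context.extend(ctx)  # context is sent before any request

    def respond(self, request):
        return f"[{len(self.context)} ctx lines] {request}"


msgs = [Message("alice", "hi", 1.0), Message("bob", "hello", 2.0)]
model = ChatModel()
model.train(build_context(msgs, 0.0, 5.0))           # context first
print(model.respond("summarize the conversation"))   # then the request
# prints "[2 ctx lines] summarize the conversation"
```

The ordering matters: the request is only meaningful once the model has been conditioned on the multi-user context, which is the sequencing the independent claims recite.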
An aspect of the disclosure provides an apparatus comprising an interface and a processing circuit, wherein the processing circuit is configured with a trusted execution environment to execute machine-readable instructions within the trusted execution environment to: generate context information for a plurality of users in a conversation over a period of time based on messages from the plurality of users; send the context information for the plurality of users to a first Artificial Intelligence (AI) language model of the conversation, wherein the context information is sent as input for training the first AI language model; and, after sending the context information for the plurality of users, send a request to the first AI language model, wherein the request requires a response associated with the context information.

Drawings

Some examples of the apparatus and/or method will hereinafter be described, by way of example only, with reference to the accompanying drawings, in which:

Fig. 1A shows a schematic diagram of an example of a system 100A for AI-based chat management.
Fig. 1B shows a schematic diagram of an example of a system 100B for AI-based chat management.
Fig. 1C shows a schematic diagram of an example of a system 100C for AI-based chat management.
Fig. 2 illustrates an example of a method 200 of AI-based chat management.
Fig. 3 illustrates an example of a method 300 of AI-based chat management.
Fig. 4 illustrates an example of a method 400 associated with AI-based chat management using tokens.
Fig. 5 illustrates an example of a method 500 associated with generating an adaptive response in AI-based chat.
Fig. 6 illustrates an example of a method 600 associated with memory and data management in AI-based chat.
Fig. 7 shows a block diagram of an example of an apparatus 700.
Fig. 8 shows a block diagram of an example of an apparatus 800.

Detailed Description

Some examples are now described in more detail with reference to the accompanying drawings.
However, other possible examples are not limited to the features of the embodiments described in detail. Other examples may include modifications of these features, as well as equivalents and alternatives to these features. Furthermore, the terminology used herein to describe certain examples should not be limiting for other possible examples. Throughout the description of the drawings, the same or similar reference numerals refer to the same or similar elements and/or features, which may be identical or may be implemented in modified form while providing the same or similar functions. The thickness of lines, layers, and/or regions in the figures may also be exaggerated for clarity. When two elements A and B are combined using "or", it is to be understood that all possible combinations are disclosed, i.e. only A, only B, and A and B, unless explicitly defined otherwise in the individual case. As alternative wording for the same combination, "at least one of A and B" or "A and/or B" may be used. The same applies to combinations of two or more elements. If a singular form, such as "a