CN-121979389-A - Virtual reality interaction method based on interaction intention

CN121979389A

Abstract

The application relates to the technical field of virtual reality, in particular to a virtual reality interaction method based on interaction intention. The method comprises: capturing a physical operation of the user through an input device and standardizing it into a unified input event; acquiring the user's current interaction context, where the interaction context comprises the user's remote interaction intention and interaction state; generating, from natural language, an intention mapping configuration file used for judgment and conflict resolution during interaction; querying the intention mapping configuration file according to the interaction context and the input event, identifying the user's current near-end interaction intention, and matching it to a corresponding action; and instantiating and executing the matched action to complete the interaction in the virtual environment. The application can significantly reduce the user's learning cost and provides a flexible and reliable interaction foundation for metaverse applications. Correspondingly, the application further provides a virtual reality interaction system based on interaction intention.

Inventors

  • XU NINGNING
  • LI KAIWEN
  • ZHANG YIPING

Assignees

  • Zhejiang Wanli University (浙江万里学院)

Dates

Publication Date
2026-05-05
Application Date
2026-01-22

Claims (10)

  1. A virtual reality interaction method based on interaction intention, characterized by comprising the following steps: S1, capturing a physical operation of the user through an input device and standardizing it into a unified input event; S2, acquiring the user's current interaction context, where the interaction context comprises the user's remote interaction intention and interaction state; S3, querying an intention mapping configuration file according to the interaction context and the input event, identifying the user's current near-end interaction intention, and matching it to a corresponding action; wherein the intention mapping configuration file is obtained by: receiving a natural language instruction from the user, the natural language instruction describing the user's high-level interaction intention in the virtual reality environment; acquiring a preset action library, the action library comprising a plurality of actions, each action associated with text information describing its function; invoking a pre-trained language model to analyze the natural language instruction and the text information of the actions in the action library, and screening out from the action library an action subset related to the high-level interaction intention; and generating the intention mapping configuration file from the screened action subset; S4, instantiating and executing the matched corresponding action to complete the interaction in the virtual environment; wherein, before step S4 is executed, validity verification is performed on the user's current interaction context to obtain a verification result, and the matched corresponding action is instantiated and executed based on the verification result.
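Steps S1–S3 of claim 1 can be pictured as a table lookup: a normalized input event plus the current interaction context keys into the intention mapping configuration file to select an action. The following is a minimal illustrative sketch only; the profile layout, event names, and context fields are assumptions, not the patent's actual data format.

```python
def match_action(profile, context, input_event):
    """Return the action mapped to (context, event), or None if unmapped."""
    key = (context["remote_intent"], context["state"], input_event)
    return profile.get(key)

# Toy profile: the same physical "trigger_press" resolves to different
# near-end intents depending on the remote intent and interaction state.
profile = {
    ("inspect_object", "hand_near_object", "trigger_press"): "grab",
    ("navigate_scene", "idle", "trigger_press"): "teleport",
}

context = {"remote_intent": "inspect_object", "state": "hand_near_object"}
print(match_action(profile, context, "trigger_press"))  # grab
```

The point of the sketch is the decoupling the claim describes: the physical input is fixed, while the mapping to an action is data-driven and context-dependent.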
  2. The virtual reality interaction method of claim 1, wherein performing validity verification on the user's current interaction context to obtain a verification result, and instantiating and executing the matched corresponding action based on the verification result, comprises: presetting one or more prerequisite interaction state combinations representing the user's real-time operating situation in the virtual environment; setting a valid-state attribute for each action, the valid-state attribute defining the one or more prerequisite interaction state combinations that must be satisfied for the corresponding action; after receiving user input and matching it to a plurality of target actions, acquiring the user's current interaction state; comparing the user's current interaction state against the valid-state attributes of the plurality of target actions to obtain a verification result; and instantiating and executing the matched corresponding action based on the verification result.
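The verification in claim 2 amounts to a subset check: an action executes only if the user's current interaction state satisfies at least one of the action's prerequisite state combinations. A minimal sketch, assuming states are represented as string labels (the labels and attribute shape are invented for illustration):

```python
def verify(valid_state_combos, current_state):
    """True if the current interaction state satisfies any prerequisite
    interaction state combination of the action."""
    return any(combo <= current_state for combo in valid_state_combos)

# Valid-state attribute of a hypothetical "grab" action: one prerequisite
# combination that must hold simultaneously.
grab_valid_states = [{"hand_free", "object_in_reach"}]

current = {"hand_free", "object_in_reach", "standing"}
if verify(grab_valid_states, current):
    print("execute: grab")  # verification passed -> instantiate and execute
```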
  3. The method according to claim 1, wherein in step S3, if a plurality of corresponding actions are matched, conflict resolution is performed according to priority rules to obtain the action that best conforms to the user's current intention.
  4. The virtual reality interaction method based on interaction intention according to claim 3, characterized in that the conflict resolution according to priority rules comprises: acquiring all matched candidate actions; invoking a preset priority rule base, the priority rule base comprising a plurality of conflict resolution rules for evaluating the execution priority of actions; calculating a priority score for each candidate action based on the rules in the priority rule base; and selecting the candidate action with the highest priority score as the action that best conforms to the user's current intention.
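The conflict resolution of claims 3–4 can be sketched as a rule base of scoring functions whose contributions are summed per candidate, with the highest total winning. The two rules below are invented examples to make the mechanism concrete; the patent does not specify particular rules.

```python
def priority_score(candidate, rules):
    """Sum the contribution of every conflict-resolution rule."""
    return sum(rule(candidate) for rule in rules)

# Hypothetical priority rule base.
rules = [
    lambda c: 10 if c["context_specific"] else 0,  # prefer context-specific actions
    lambda c: -c["distance_to_target"],            # prefer nearer targets
]

candidates = [
    {"name": "grab",     "context_specific": True,  "distance_to_target": 1},
    {"name": "teleport", "context_specific": False, "distance_to_target": 3},
]

best = max(candidates, key=lambda c: priority_score(c, rules))
print(best["name"])  # grab (score 9 vs. -3)
```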
  5. The virtual reality interaction method based on interaction intention of claim 4, further comprising: when the difference between the priority scores of the plurality of candidate actions is smaller than a preset threshold, starting an intent confirmation process: generating and outputting a differentiated sensory feedback signal for each candidate action; monitoring whether the user inputs a corrective operation signal within a preset time window after the differentiated sensory feedback signal is output; and determining, based on the corrective operation signal, the action that best conforms to the user's current intention from the plurality of candidate actions.
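Claim 5's trigger condition is a near-tie test on the priority scores: if the top two candidates are closer than a threshold, the system asks for confirmation instead of executing immediately. A sketch with an assumed threshold value:

```python
THRESHOLD = 2.0  # assumed; the patent leaves the threshold as a preset parameter

def needs_confirmation(scores):
    """True when the two highest priority scores are too close to call."""
    ordered = sorted(scores, reverse=True)
    return len(ordered) > 1 and (ordered[0] - ordered[1]) < THRESHOLD

print(needs_confirmation([9.0, 8.5, 3.0]))  # True  - ambiguous, confirm intent
print(needs_confirmation([9.0, 3.0]))       # False - clear winner, execute
```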
  6. The virtual reality interaction method based on interaction intention of claim 5, wherein the differentiated sensory feedback signal comprises at least one of: virtual objects associated with different candidate actions presenting differentiated visual cue effects; differentiated haptic feedback patterns, associated with different candidate actions, applied on the input device; and differentiated audio cues associated with different candidate actions being played.
  7. The method of claim 5, wherein monitoring whether the user inputs a corrective operation signal comprises: detecting state changes of the original input event, including changes in press duration, press force, input device displacement, or gesture; and/or detecting whether a new auxiliary input event associated with a candidate action is triggered.
  8. The method of claim 5, wherein determining, based on the corrective operation signal, the action from the plurality of candidate actions that best conforms to the user's current intention comprises: if a corrective operation signal clearly associated with one candidate action is observed within the preset time window, selecting that candidate action as the action that best conforms to the user's current intention; and if no corrective operation signal is observed within the preset time window, selecting the candidate action with the highest priority score as the action that best conforms to the user's current intention.
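The resolution logic of claim 8 reduces to two branches: a corrective signal observed inside the window selects its candidate; an expired window falls back to the highest-scoring candidate. A sketch, assuming the corrective signal is delivered as the name of the candidate it corrects toward (or None on timeout):

```python
def resolve(candidates, corrective_signal):
    """candidates: list of (name, priority_score) pairs.
    corrective_signal: candidate name observed in the window, or None."""
    if corrective_signal is not None:
        for name, _ in candidates:
            if name == corrective_signal:
                return name  # explicit correction wins
    return max(candidates, key=lambda c: c[1])[0]  # timeout: highest score

cands = [("grab", 9.0), ("push", 8.6)]
print(resolve(cands, "push"))  # push - corrective signal observed
print(resolve(cands, None))    # grab - window expired, highest score wins
```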
  9. The virtual reality interaction method of any one of claims 1–8, further comprising: presenting a visual configuration interface in the virtual reality environment; displaying to the user, through the visual configuration interface, the mapping relation entries defined in the current intention mapping configuration file; receiving an editing operation performed by the user on at least one mapping relation entry through the visual configuration interface, the editing operation comprising modifying, deleting, or creating a mapping relation, or adjusting mapping relation attributes; and updating the intention mapping configuration file in real time in response to the editing operation.
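The editing operations of claim 9 map naturally onto in-place updates of the profile's entries. A minimal sketch, assuming entries are keyed by (context, event) pairs and edit operations arrive as small records (both shapes are illustrative assumptions):

```python
def apply_edit(profile, op):
    """Apply one visual-interface edit operation to the mapping profile."""
    if op["type"] in ("create", "modify"):
        profile[op["key"]] = op["action"]
    elif op["type"] == "delete":
        profile.pop(op["key"], None)
    return profile  # updated in real time, per claim 9

profile = {("inspect", "trigger_press"): "grab"}
apply_edit(profile, {"type": "modify",
                     "key": ("inspect", "trigger_press"), "action": "inspect_closely"})
apply_edit(profile, {"type": "create",
                     "key": ("navigate", "trigger_press"), "action": "teleport"})
print(profile)
```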
  10. A virtual reality interaction system based on interaction intention, the system comprising: an input event normalization module configured to capture physical operations of the user through the input device and normalize them into unified input events; an interaction context acquisition module configured to acquire the user's current interaction context, the interaction context comprising the user's remote interaction intention and interaction state; and an action matching module configured to query an intention mapping configuration file according to the interaction context and the input event, identify the user's current near-end interaction intention, and match it to a corresponding action; wherein the intention mapping configuration file is obtained by: receiving a natural language instruction from the user, the natural language instruction describing the user's high-level interaction intention in the virtual reality environment; acquiring a preset action library, the action library comprising a plurality of actions, each action associated with text information describing its function; invoking a pre-trained language model to analyze the natural language instruction and the text information of the actions in the action library, and screening out from the action library an action subset related to the high-level interaction intention; and generating the intention mapping configuration file from the screened action subset; and wherein, before the execution module runs, validity verification is performed on the user's current interaction context to obtain a verification result, and the matched corresponding action is instantiated and executed based on the verification result.

Description

Virtual reality interaction method based on interaction intention

Technical Field

The application relates to the technical field of virtual reality, in particular to a virtual reality interaction method based on interaction intention.

Background

With the rise of the metaverse concept and the rapid development of virtual reality technology, VR interaction systems have become the core technology for constructing immersive digital experiences. In a virtual environment, accurately understanding the user's intention and naturally mapping the user's physical operations, driven by that mental process, to effective interactions in the virtual world is a core challenge of the current technology. Existing virtual reality systems typically map physical inputs on the controller, such as buttons and thumbsticks, directly to specific game or application functions in a hard-coded or fixed manner, or use different mappings per application: for example, the trigger key is used to grab objects in SteamVR Home, whereas in VRChat and OpenXR this function is bound to the side grip key. Such direct mapping works in simple scenes, but has a fundamental defect: it cannot capture the layered, context-dependent nature of user intention in a complex, dynamic virtual environment, so the same physical operation cannot trigger the function the user actually expects in different contexts. Interaction becomes stiff and unintuitive, is difficult to adapt to users' personalized demands and to cross-platform expansion, and severely restricts the development of virtual reality toward higher degrees of freedom and immersion.
In the prior art, for example, CN111625098B proposes an intelligent interaction method for an avatar based on multi-channel information fusion, which aims to improve the completeness and naturalness of avatar interaction by combining multi-channel data such as facial image reconstruction, hand contour recognition, optical/inertial gesture tracking, and eye movement tracking, and by analyzing the user's behavior sequence with an LSTM sequence model to identify the interaction intention. However, that method still operates at the level of recognizing the action the user has already executed: it lacks theoretical modeling and active understanding of the user's multi-level intention, cannot dynamically and intelligently map the same physical operation across different contexts, and its interaction logic depends on data-driven sequence analysis, making the system architecture complex, the computational cost high, and the customizability, usability, and openness poor.

Disclosure of Invention

The invention aims to provide a virtual reality interaction method based on interaction intention which partially solves or alleviates the above defects in the prior art, significantly reduces the user's learning cost, and provides a flexible and reliable interaction foundation for metaverse applications.
To solve the above technical problems, the invention adopts the following technical scheme. In a first aspect of the present invention, there is provided a virtual reality interaction method based on interaction intention, comprising the following steps: S1, capturing a physical operation of the user through an input device and standardizing it into a unified input event; S2, acquiring the user's current interaction context, where the interaction context comprises the user's remote interaction intention and interaction state; S3, querying an intention mapping configuration file according to the interaction context and the input event, identifying the user's current near-end interaction intention, and matching it to a corresponding action; wherein the intention mapping configuration file is obtained by: receiving a natural language instruction from the user, the natural language instruction describing the user's high-level interaction intention in the virtual reality environment; acquiring a preset action library, the action library comprising a plurality of actions, each action associated with text information describing its function; invoking a pre-trained language model to analyze the natural language instruction and the text information of the actions in the action library, and screening out from the action library an action subset related to the high-level interaction intention; and generating the intention mapping configuration file from the screened action subset; S4, instantiating and executing the matched corresponding action to complete the interaction in the virtual environment; before step S4 is executed, validity verification is performed on the user's current interaction context to obtain a verification result, and based on the verification result, the corresponding matched action is instantiated and executed.
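The profile-generation steps inside S3 can be sketched end to end: the natural language instruction and the per-action text descriptions are handed to a language model, which screens the relevant action subset, from which the profile is generated. In the sketch below a trivial keyword-overlap function stands in for the pre-trained language model (the actual method invokes a real model); the action library contents and profile shape are invented for illustration.

```python
# Hypothetical preset action library: action name -> text describing its function.
ACTION_LIBRARY = {
    "grab":     "pick up and hold a virtual object with the hand",
    "teleport": "move the user instantly to a pointed location",
    "throw":    "release a held object with velocity",
}

def screen_actions(instruction, library):
    """Stub for the language-model screening step: keep actions whose
    description shares a significant word with the instruction."""
    words = {w for w in instruction.lower().split() if len(w) > 3}
    return [name for name, desc in library.items()
            if words & {w for w in desc.lower().split() if len(w) > 3}]

subset = screen_actions("I want to pick up and throw the object", ACTION_LIBRARY)
# Generate a profile entry per screened action (illustrative key shape).
profile = {("object_interaction", name): name for name in subset}
print(sorted(subset))  # ['grab', 'throw']
```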