CN-122018740-A - Immersive multi-mode feedback human-computer interaction method and system based on visual dominance

CN 122018740 A

Abstract

The invention relates to the technical field of human-computer interaction and provides an immersive multi-mode feedback human-computer interaction method and system based on visual dominance. Binocular monitoring is performed on the area facing the display; when the user is in a glasses-on state, the device performs unrestricted standard human-computer interaction with the user; when the user is in a glasses-off state, the device performs restriction screening on a plurality of standard interactive applications, matches the application interaction size, plans the application display position, and performs partially restricted, enlarged human-computer interaction with the user. The invention thereby automatically identifies differences in the user's visual state and adapts to them, ensuring accurate and continuous interactive operation and improving the user experience of browsing effective information.

Inventors

  • CHEN KEREN
  • HAN YAN
  • ZHANG LONGFAN

Assignees

  • 成都职业技术学院
  • 智菲科技集团有限公司

Dates

Publication Date
2026-05-12
Application Date
2026-04-13

Claims (10)

  1. An immersive multi-mode feedback human-computer interaction method based on visual dominance, characterized by comprising the following steps: performing binocular monitoring of the area facing the display to obtain binocular monitoring data, and identifying the binocular monitoring data to determine the user's current face state and face spatial position; when the current face state is a glasses-on state, determining a plurality of standard interactive applications and performing unrestricted standard human-computer interaction with the user; when the current face state is a glasses-off state, performing restriction screening on the plurality of standard interactive applications, shielding a plurality of important interactive applications, and retaining a plurality of common interactive applications; determining a face perpendicular distance from the face spatial position, matching an application interaction size, and planning an application display position according to the face spatial position; and performing partially restricted, enlarged human-computer interaction with the user at the application display position according to the application interaction size and the plurality of common interactive applications.
  2. The immersive multi-mode feedback human-computer interaction method based on visual dominance of claim 1, wherein performing binocular monitoring of the area facing the display, obtaining binocular monitoring data, identifying the binocular monitoring data, and determining the user's current face state and face spatial position specifically comprises the following steps: performing binocular monitoring of the area facing the display to obtain binocular monitoring data; acquiring glasses feature data; performing feature recognition and matching on the binocular monitoring data based on the glasses feature data to determine the user's current face state; and performing facial spatial localization on the binocular monitoring data to determine the user's face spatial position.
  3. The immersive multi-mode feedback human-computer interaction method based on visual dominance of claim 1, wherein, when the current face state is a glasses-on state, determining a plurality of standard interactive applications and performing unrestricted standard human-computer interaction with the user specifically comprises the following steps: creating a standard display interface when the current face state is a glasses-on state; determining a plurality of standard interactive applications; receiving a first interactive operation of the user on the standard display interface; and performing an unrestricted, standard-display interactive response to the first interactive operation based on the plurality of standard interactive applications.
  4. The immersive multi-mode feedback human-computer interaction method based on visual dominance of claim 1, wherein, when the current face state is a glasses-off state, performing restriction screening on the plurality of standard interactive applications, shielding a plurality of important interactive applications, and retaining a plurality of common interactive applications comprises the following steps: creating an enlarged display interface when the current face state is a glasses-off state; acquiring basic application information for the plurality of standard interactive applications; performing attribute identification on the pieces of basic application information and recording the attribute identification results; classifying the plurality of standard interactive applications according to the attribute identification results to determine a plurality of important interactive applications and a plurality of common interactive applications; and performing restriction screening, shielding the plurality of important interactive applications, and retaining the plurality of common interactive applications.
  5. The immersive multi-mode feedback human-computer interaction method based on visual dominance of claim 4, wherein determining a face perpendicular distance from the face spatial position, matching an application interaction size, and planning an application display position according to the face spatial position specifically comprises the following steps: determining the face perpendicular distance from the face spatial position; matching a corresponding application interaction size according to the face perpendicular distance; performing uniform display division based on the application interaction size to obtain a plurality of uniformly sized interactive display areas and determining the corresponding area center positions; creating a plurality of area center lines perpendicular to the display surface through the area center positions; calculating the separation distances between the face spatial position and the plurality of area center lines; and selecting the application display position from the plurality of area center positions according to the plurality of separation distances.
  6. The immersive multi-mode feedback human-computer interaction method based on visual dominance of claim 5, wherein performing partially restricted, enlarged human-computer interaction with the user at the application display position according to the application interaction size and the plurality of common interactive applications comprises the following steps: selecting a current display area from the plurality of interactive display areas according to the application display position; dynamically displaying an enlarged display interface in the current display area; determining an interaction enlargement ratio according to the application interaction size; enlarging and displaying application icons, application windows, interactive windows and/or interactive icons in the enlarged display interface according to the interaction enlargement ratio; receiving a second interactive operation of the user on the enlarged display interface; and performing a restricted, enlarged-display interactive response to the second interactive operation based on the plurality of common interactive applications.
  7. An immersive multi-mode feedback human-computer interaction system based on visual dominance, characterized by comprising a binocular monitoring and identification module, a standard interaction feedback module, an application restriction screening module, a display position planning module and an enlarged interaction feedback module, wherein: the binocular monitoring and identification module is configured to perform binocular monitoring of the area facing the display, obtain binocular monitoring data, identify the binocular monitoring data, and determine the user's current face state and face spatial position; the standard interaction feedback module is configured to determine a plurality of standard interactive applications when the current face state is a glasses-on state and to perform unrestricted standard human-computer interaction with the user; the application restriction screening module is configured to perform restriction screening on the plurality of standard interactive applications when the current face state is a glasses-off state, shielding a plurality of important interactive applications and retaining a plurality of common interactive applications; the display position planning module is configured to determine a face perpendicular distance from the face spatial position, match an application interaction size, and plan an application display position according to the face spatial position; and the enlarged interaction feedback module is configured to perform partially restricted, enlarged human-computer interaction with the user at the application display position according to the application interaction size and the plurality of common interactive applications.
  8. The immersive multi-mode feedback human-computer interaction system based on visual dominance of claim 7, wherein the application restriction screening module specifically comprises: an interface creation unit, configured to create an enlarged display interface when the current face state is a glasses-off state; an application information acquisition unit, configured to acquire basic application information for the plurality of standard interactive applications; an attribute identification unit, configured to perform attribute identification on the pieces of basic application information and record the attribute identification results; an application classification unit, configured to classify the plurality of standard interactive applications according to the attribute identification results and determine a plurality of important interactive applications and a plurality of common interactive applications; and a restriction screening unit, configured to perform restriction screening, shielding the plurality of important interactive applications and retaining the plurality of common interactive applications.
  9. The immersive multi-mode feedback human-computer interaction system based on visual dominance of claim 8, wherein the display position planning module specifically comprises: a face distance determination unit, configured to determine the face perpendicular distance from the face spatial position; an interaction size matching unit, configured to match a corresponding application interaction size according to the face perpendicular distance; an area position determination unit, configured to perform uniform display division based on the application interaction size, obtain a plurality of uniformly sized interactive display areas, and determine the corresponding area center positions; a center line creation unit, configured to create a plurality of area center lines perpendicular to the display surface through the area center positions; a distance calculation unit, configured to calculate the separation distances between the face spatial position and the plurality of area center lines; and a display position selection unit, configured to select the application display position from the plurality of area center positions according to the plurality of separation distances.
  10. The immersive multi-mode feedback human-computer interaction system based on visual dominance of claim 9, wherein the enlarged interaction feedback module specifically comprises: a display area selection unit, configured to select a current display area from the plurality of interactive display areas according to the application display position; an interface display unit, configured to dynamically display an enlarged display interface in the current display area; an enlargement ratio determination unit, configured to determine an interaction enlargement ratio according to the application interaction size; an enlarged display unit, configured to enlarge and display application icons, application windows, interactive windows and/or interactive icons in the enlarged display interface according to the interaction enlargement ratio; an operation receiving unit, configured to receive a second interactive operation of the user on the enlarged display interface; and an interactive response unit, configured to perform a restricted, enlarged-display interactive response to the second interactive operation based on the plurality of common interactive applications.
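The display position planning of claims 5 and 9 reduces to a simple geometric selection: since each area center line runs perpendicular to the display surface, the distance from the face's spatial position to a center line is just the in-plane distance between the face point and that area's center. The following Python sketch illustrates this; the coordinate convention (display plane at z = 0, z-axis toward the user) and all names are illustrative assumptions, not taken from the patent text.

```python
import math

def select_display_region(face_pos, region_centers):
    """Pick the interactive display area whose center line passes
    closest to the user's face.

    face_pos: (x, y, z) face spatial position; the display surface is
    assumed to lie in the z = 0 plane.
    region_centers: list of (cx, cy) area center positions on the display.
    Returns the index of the selected area.
    """
    x, y, _ = face_pos
    # The center line through (cx, cy) is parallel to the z-axis, so the
    # point-to-line distance collapses to an in-plane Euclidean distance.
    distances = [math.hypot(x - cx, y - cy) for cx, cy in region_centers]
    return min(range(len(region_centers)), key=distances.__getitem__)

# Example: a 2x2 uniform division of a display in normalized coordinates.
centers = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
idx = select_display_region((0.7, 0.3, 0.6), centers)
print(idx)  # nearest area center is (0.75, 0.25), i.e. index 1
```

The uniform division itself would be driven by the matched application interaction size (larger sizes yielding fewer, larger areas); the sketch takes the resulting centers as given.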

Description

Immersive multi-mode feedback human-computer interaction method and system based on visual dominance

Technical Field

The invention belongs to the technical field of human-computer interaction, and in particular relates to an immersive multi-mode feedback human-computer interaction method and system based on visual dominance.

Background

Human-computer interaction is the technical process of perceiving, modeling, analyzing and mapping user behaviors, instructions or state information, converting human intention into machine-executable instructions, and feeding the machine's operation results back to the person in a perceivable form. Computer equipment is currently the most common human-computer interaction carrier, and the interaction process is realized mainly by means of a display, a mouse and a keyboard. In the prior art, human-computer interaction on computer equipment generally adopts unified, fixed display and interaction parameter settings and cannot adapt to differences in the user's visual state. In particular, for near-sighted users, it cannot present and control different interaction feedback according to whether glasses are worn or removed, which impairs the user's ability to perform accurate, continuous interactive operation and to browse effective information.

Disclosure of the Invention

The embodiment of the invention aims to provide an immersive multi-mode feedback human-computer interaction method and system based on visual dominance, so as to solve the technical problems of the prior art identified in the background above.
The embodiment of the invention is realized as follows: an immersive multi-mode feedback human-computer interaction method based on visual dominance, which specifically comprises the following steps: performing binocular monitoring of the area facing the display to obtain binocular monitoring data, and identifying the binocular monitoring data to determine the user's current face state and face spatial position; when the current face state is a glasses-on state, determining a plurality of standard interactive applications and performing unrestricted standard human-computer interaction with the user; when the current face state is a glasses-off state, performing restriction screening on the plurality of standard interactive applications, shielding a plurality of important interactive applications and retaining a plurality of common interactive applications; determining a face perpendicular distance from the face spatial position, matching an application interaction size, and planning an application display position according to the face spatial position; and performing partially restricted, enlarged human-computer interaction with the user at the application display position according to the application interaction size and the plurality of common interactive applications.
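The two-branch flow described above (unrestricted standard interaction when glasses are on; restricted, enlarged interaction when glasses are off) can be sketched in a few lines of Python. Everything here is an illustrative assumption: the `detect_face` callback, the `important` flag on applications, and the linear distance-to-scale rule stand in for the patent's unspecified feature recognition, attribute identification, and size-matching steps.

```python
def run_interaction_cycle(frame, detect_face, apps):
    """One monitoring cycle of the method sketched above.

    detect_face(frame) -> (state, face_pos), where state is
    "glasses_on" or "glasses_off" and face_pos is (x, y, z) with z the
    perpendicular distance to the display. apps is a list of dicts
    carrying an assumed "important" attribute flag.
    """
    state, face_pos = detect_face(frame)
    if state == "glasses_on":
        # Glasses-on: unrestricted standard interaction with all apps.
        return {"mode": "standard", "apps": apps, "scale": 1.0}
    # Glasses-off: screen out important apps and keep the common ones.
    common = [a for a in apps if not a.get("important")]
    # Assumed size-matching rule: scale grows linearly with viewing
    # distance (meters), clamped to a [1x, 3x] enlargement range.
    distance = face_pos[2]
    scale = max(1.0, min(3.0, distance / 0.4))
    return {"mode": "restricted", "apps": common, "scale": scale}
```

A caller would run this per captured frame and hand the returned mode, application list, and enlargement ratio to the display-planning step; the clamping bounds and the 0.4 m reference distance are placeholders for whatever calibration the concrete device would use.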
As a further limitation of the technical solution of the embodiment of the invention, performing binocular monitoring of the area facing the display, obtaining binocular monitoring data, identifying the binocular monitoring data, and determining the user's current face state and face spatial position specifically comprises the following steps: performing binocular monitoring of the area facing the display to obtain binocular monitoring data; acquiring glasses feature data; performing feature recognition and matching on the binocular monitoring data based on the glasses feature data to determine the user's current face state; and performing facial spatial localization on the binocular monitoring data to determine the user's face spatial position. As a further limitation of the technical solution of the embodiment of the invention, when the current face state is a glasses-on state, determining a plurality of standard interactive applications and performing unrestricted standard human-computer interaction with the user specifically comprises the following steps: creating a standard display interface when the current face state is a glasses-on state; determining a plurality of standard interactive applications; receiving a first interactive operation of the user on the standard display interface; and performing an unrestricted, standard-display interactive response to the first interactive operation based on the plurality of standard interactive applications. As a further limitation of the technical solution of the embodiment of the invention, when the current face state is a glasses-off state, performing restriction screening on the plurality of standard interactive applications, shielding a plurality of important interactive applications, and retaining a plurality of common interactive applications specifically comprises the following