CN-122018697-A - Self-adaptive fixation point display method and system based on content perception and user calibration
Abstract
The invention discloses a self-adaptive fixation point display method and system based on content perception and user calibration. On first startup, a visual attention range calibration is executed to obtain the user's personalized initial gaze area parameters. The method acquires the gaze point position in real time through an eye tracking module, classifies the eye movement state, identifies the content semantic category of the currently displayed picture, dynamically determines the geometric parameters of a high-fidelity rendering area according to the content semantic category and the initial gaze area parameters, performs a reduced-power rendering operation outside the high-fidelity rendering area, monitors content changes in that power-optimized region, and, when a key aperiodic change is detected, executes a temporary visual enhancement operation on the changed area. The invention can dynamically adjust the clearly displayed area according to the screen content and the user's personal visual characteristics, achieving significant energy savings while ensuring, through an intelligent visual enhancement mechanism, that the user does not miss important information, thus balancing display efficiency, power consumption control, and user experience.
Inventors
- OUYANG XIAOHUI
Assignees
- 冠捷显示科技(中国)有限公司
Dates
- Publication Date: 20260512
- Application Date: 20260211
Claims (10)
- 1. A self-adaptive fixation point display method based on content perception and user calibration, characterized by comprising the following steps: S1, when the function is started for the first time, executing a visual attention range calibration test, and acquiring and storing the user's personalized initial gaze area parameters; S2, acquiring the user's gaze point position in real time through an eye tracking module, and identifying the content semantic category of the currently displayed picture; S3, dynamically determining the geometric parameters of the high-fidelity rendering area according to the content semantic category and the initial gaze area parameters; S4, performing a power consumption optimization operation on the display area outside the high-fidelity rendering area; and S5, monitoring content changes in the power-consumption-optimized area, and, when a key change event meeting a preset requirement is detected, executing a temporary visual enhancement operation on the area where the event occurs.
- 2. The adaptive gaze point display method based on content awareness and user calibration of claim 1, wherein step S1 comprises: presenting a test picture on a display screen, the test picture comprising a central fixation prompt point and a plurality of interactive target elements randomly distributed in its peripheral area; receiving the user's click operations on the interactive target elements through an input device, and recording the position coordinates of each successfully clicked target element; calculating, by a geometric fitting algorithm based on the recorded position coordinates, a personalized attention range boundary centered on the central fixation prompt point; and storing the parameters of the personalized attention range boundary as the initial gaze area parameters.
- 3. The adaptive gaze point display method based on content awareness and user calibration of claim 1, wherein, in step S2, obtaining the user's gaze point position comprises real-time classification of the user's eye movement data: the eye movement state is classified as a fixation event or a saccade event based on a preset velocity-acceleration threshold model or a machine learning classifier.
- 4. The adaptive gaze point display method based on content perception and user calibration of claim 3, wherein step S3 is performed only when the current eye movement state is classified as a fixation event; when it is classified as a saccade event, the system either keeps the position of the high-fidelity rendering region unchanged or smoothly transitions its position through a prediction algorithm based on the historical gaze point trajectory.
- 5. The adaptive gaze point display method based on content awareness and user calibration of claim 1, wherein monitoring the content change in step S5 comprises: identifying whether a detected content change has a periodic characteristic; and, if the content change is identified as periodic, suppressing the temporary visual enhancement operation.
- 6. The adaptive gaze point display method based on content awareness and user calibration of claim 5, wherein, for aperiodic content changes, a priority assessment is performed based on the semantic information of the change, classifying it as high, medium, or low priority; the temporary visual enhancement operation is performed only for high-priority and medium-priority content changes.
- 7. The adaptive gaze point display method based on content awareness and user calibration of claim 1, wherein, in step S4, performing the reduced-power rendering operation comprises: for an OLED or Micro-LED screen, turning off or dimming the pixels outside the high-fidelity rendering area; and, for an LCD screen, applying a reduced-visual-clarity filter outside the high-fidelity rendering area.
- 8. The adaptive gaze point display method based on content awareness and user calibration of claim 1, wherein the duration of the temporary visual enhancement operation is 300 milliseconds to 800 milliseconds.
- 9. An adaptive gaze point display system based on content awareness and user calibration for implementing the method of any one of claims 1 to 8, the system comprising: an eye tracking module for acquiring the user's eye movement data in real time, classifying the eye movement state through a preset velocity-acceleration threshold model or a machine learning classifier, and outputting a judgment of a fixation event or a saccade event; a content analysis module for identifying the content semantic category of the currently displayed picture and monitoring content changes in the non-gazed area; a decision engine for receiving the fixation event judgment and dynamically determining the geometric parameters of the high-fidelity rendering area according to the content semantic category and the user's personalized initial gaze area parameters, and for outputting, during a saccade event, an instruction to keep the position of the high-fidelity rendering area unchanged or to smoothly transition its position through a prediction algorithm based on the historical gaze point trajectory; and a partition rendering control module for receiving control instructions from the decision engine, performing the power consumption optimization operation on the non-high-fidelity rendering area, and performing the temporary visual enhancement operation on a designated change area.
- 10. The adaptive gaze point display system based on content awareness and user calibration of claim 9, wherein said eye tracking module comprises a device or software system capable of outputting the position of the user's gaze point in the screen coordinate system.
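The velocity-acceleration threshold classification named in claims 3 and 9 can be illustrated with a minimal sketch. The threshold values and function name below are assumptions for illustration; the patent does not specify concrete thresholds or an implementation.

```python
# Minimal sketch of a velocity-acceleration threshold classifier for
# labeling gaze samples as fixation or saccade events. Threshold values
# (30 deg/s, 8000 deg/s^2) are illustrative assumptions, not values
# taken from the patent.

def classify_eye_samples(positions, timestamps,
                         vel_thresh=30.0, acc_thresh=8000.0):
    """positions: list of (x, y) gaze angles in degrees;
    timestamps: seconds. Returns one label per sample."""
    labels = ["fixation"]  # first sample has no velocity estimate
    prev_v = 0.0
    for i in range(1, len(positions)):
        dt = timestamps[i] - timestamps[i - 1]
        dx = positions[i][0] - positions[i - 1][0]
        dy = positions[i][1] - positions[i - 1][1]
        v = (dx * dx + dy * dy) ** 0.5 / dt  # angular velocity, deg/s
        a = abs(v - prev_v) / dt             # angular acceleration, deg/s^2
        labels.append("saccade" if v > vel_thresh or a > acc_thresh
                      else "fixation")
        prev_v = v
    return labels
```

A decision engine as in claim 9 would update the high-fidelity region only on "fixation" labels and freeze or extrapolate it on "saccade" labels.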
Description
Self-adaptive fixation point display method and system based on content perception and user calibration

Technical Field

The invention relates to the field of display control, and in particular to a self-adaptive gaze point display method and system based on content perception and user calibration.

Background

Eye tracking technology is currently applied in a number of fields. At the algorithm level, the prior art uses machine learning models that fuse face orientation, eye images, and ambient light information (such as glare) to improve the accuracy and robustness of gaze point prediction. At the application level, some solutions implement gaze-based zone rendering: in Virtual Reality (VR) scenes, for example, the user's gaze center is rendered at high resolution while the level of detail in peripheral areas is reduced to save Graphics Processing Unit (GPU) computation. Other studies explore eye movement data for attention state monitoring, such as assessing a driver's cognitive load in a driving scenario.

The prior art has clear shortcomings and cannot meet the dual requirements of aggressive energy saving and lossless information experience on general computing devices (such as notebook and tablet computers). Rigid policy: existing zone rendering schemes mostly use a fixed or simply scaled gaze area and cannot dynamically adjust the geometry of the high-fidelity region according to the semantics of the screen content (such as text, images, or UI elements), so they either introduce excessive interference during reading or lose key details during image browsing.
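The content-dependent region geometry described above can be sketched as a simple mapping from semantic category to region shape, scaled by the user's calibrated attention radius. The category names, shapes, and scale factors below are illustrative assumptions, not parameters specified by the patent.

```python
# Illustrative sketch: choosing the high-fidelity region geometry from
# the content semantic category, scaled by the user's calibrated
# attention radius. Category names and scale factors are assumptions.

def region_geometry(category, gaze_xy, base_radius):
    x, y = gaze_xy
    if category == "text":
        # Reading favors a wide, short ellipse along the line of text.
        return {"shape": "ellipse", "center": (x, y),
                "rx": 2.0 * base_radius, "ry": 0.8 * base_radius}
    if category == "image":
        # Image browsing favors a larger circular region.
        return {"shape": "circle", "center": (x, y),
                "r": 1.5 * base_radius}
    # Default (e.g. UI chrome): the calibrated circle as-is.
    return {"shape": "circle", "center": (x, y), "r": base_radius}
```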
Information masking risk: permanently degrading the non-gazed area (e.g. blurring or darkening) can completely mask new information, pop-up prompts, or important content changes in that area, so the user easily misses key information, leading to misoperation or a degraded experience. Lack of personalization: existing schemes do not account for individual differences between users (such as age, eyesight, and usage habits) and provide neither a scientific initialization flow nor a convenient manual adjustment mechanism, so the system is one-size-fits-all and cannot truly meet users' needs. Limited application scenarios: mainstream gaze point rendering technology mainly targets closed, high-compute scenarios such as VR/AR, where the goal is to reduce GPU load rather than to optimize the hardware-level pixel power consumption of self-luminous displays such as OLED.

Disclosure of Invention

The invention aims to provide a self-adaptive gaze point display method and system based on content perception and user calibration which, while significantly reducing screen power consumption, dynamically adapt to the user's attention range and intelligently enhance key changes visually, ensuring that the user obtains a focused, efficient, and information-lossless visual experience.
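The personalized initialization flow the invention introduces (the claim-2 calibration) can be sketched as fitting a boundary from the targets the user successfully clicked. The patent only names "a geometric fitting algorithm"; using the median click distance as a circular boundary estimate is an illustrative assumption, as are the function and variable names.

```python
# Sketch of the claim-2 calibration fit: estimate a personalized
# attention boundary (modeled here as a circle) centered on the fixation
# prompt point, from the coordinates of peripheral targets the user
# clicked. The median-distance fit is an illustrative choice; the patent
# does not specify the fitting algorithm.

def fit_attention_radius(center, clicked_points):
    cx, cy = center
    dists = sorted(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
                   for x, y in clicked_points)
    if not dists:
        raise ValueError("no calibration clicks recorded")
    # Median distance as a robust boundary estimate.
    mid = len(dists) // 2
    if len(dists) % 2:
        return dists[mid]
    return 0.5 * (dists[mid - 1] + dists[mid])
```

The returned radius would be stored as the user's initial gaze area parameter and later scaled per content category.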
The technical scheme adopted by the invention is as follows: a self-adaptive fixation point display method based on content perception and user calibration, comprising the following steps: S1, when the function is started for the first time, executing a visual attention range calibration test, and acquiring and storing the user's personalized initial gaze area parameters; S2, acquiring the user's gaze point position in real time through an eye tracking module, and identifying the content semantic category of the currently displayed picture; S3, dynamically determining the geometric parameters of the high-fidelity rendering area according to the content semantic category and the initial gaze area parameters; S4, performing a power consumption optimization operation on the display area outside the high-fidelity rendering area; and S5, monitoring content changes in the power-consumption-optimized area, and, when a key change event meeting a preset requirement is detected, executing a temporary visual enhancement operation on the area where the event occurs.

Further, the specific process of step S1 is as follows: presenting a test picture on the display screen, the test picture comprising a central fixation prompt point and a plurality of interactive target elements randomly distributed in its peripheral area; receiving the user's click operations on the interactive target elements through an input device, and recording the position coordinates of each successfully clicked target element; calculating, by a geometric fitting algorithm based on the recorded position coordinates, a personalized attention range boundary centered on the central fixation prompt point; and storing the parameters of the personalized attention range boundary as the initial gaze area parameters.
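The step-S5 suppression of periodic changes and the claim-8 enhancement duration can be sketched as follows. Treating a change as periodic when its recent timestamps recur at a near-constant interval (e.g. a blinking cursor) is one plausible reading of the claims; the tolerance value and function names are assumptions, while the 300-800 ms clamp comes from claim 8.

```python
# Sketch of the step-S5 logic: suppress the temporary visual enhancement
# for periodic changes (e.g. a blinking cursor) and clamp the
# enhancement duration into the claimed 300-800 ms window. The interval
# tolerance (15%) and minimum event count are illustrative assumptions.

def is_periodic(change_times, rel_tolerance=0.15, min_events=4):
    """change_times: ascending timestamps (seconds) of change events
    in one region. Returns True when the inter-event gaps are all
    within rel_tolerance of their mean."""
    if len(change_times) < min_events:
        return False
    gaps = [b - a for a, b in zip(change_times, change_times[1:])]
    mean = sum(gaps) / len(gaps)
    return all(abs(g - mean) <= rel_tolerance * mean for g in gaps)

def enhancement_duration_ms(requested_ms):
    # Clamp into the 300-800 ms window stated in claim 8.
    return max(300, min(800, requested_ms))
```

Under this sketch, an aperiodic change in the power-optimized area would then go through the claim-6 priority assessment before a clamped-duration enhancement is drawn.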