
CN-121979381-A - Auxiliary learning method and system based on AI multi-modal interactive screen-projection eye-protection desk lamp

CN 121979381 A

Abstract

The invention relates to the technical field of multi-modal interaction and discloses an auxiliary learning method and system based on an AI multi-modal interactive screen-projection eye-protection desk lamp. The method comprises: integrating a full-dimensional environment sensing matrix of the desk lamp to acquire, in real time, a high-dimensional perception feature stream of a target user during learning; analyzing user portrait information of the target user to determine the user's potential requirements, and calculating a visual fatigue index and a visual focus of the target user to analyze the user's current learning state; constructing an auxiliary learning environment for the target user; and constructing a bidirectional interaction channel between the target user and the desk lamp, and optimizing the learning environment characteristics of the auxiliary learning environment so that the desk lamp executes auxiliary learning. The invention improves the adaptability and comprehensiveness of desk-lamp-assisted learning.

Inventors

  • LI LIANGHUA

Assignees

  • 深圳市圣品源实业有限公司

Dates

Publication Date
2026-05-05
Application Date
2025-12-24

Claims (10)

  1. An auxiliary learning method based on an AI multi-modal interactive screen-projection eye-protection desk lamp, characterized by comprising the following steps: integrating a full-dimensional environment sensing matrix of the desk lamp to acquire, in real time, a high-dimensional perception feature stream of a target user during learning; analyzing user portrait information of the target user based on the high-dimensional perception feature stream to determine potential requirements of the target user, calculating a visual fatigue index and a visual focus of the target user to analyze the current learning state of the target user, and constructing an auxiliary learning environment for the target user based on the potential requirements and the current learning state; and constructing a bidirectional interaction channel between the target user and the desk lamp, and optimizing learning environment characteristics of the auxiliary learning environment based on the bidirectional interaction channel so that the desk lamp executes auxiliary learning.
  2. The auxiliary learning method according to claim 1, wherein calculating the visual fatigue index and visual focus of the target user comprises: extracting eye movement features and facial features corresponding to the target user from the high-dimensional perception feature stream to construct a three-dimensional face model of the target user, analyzing the head pose of the target user from the three-dimensional face model, and determining the visual focus of the target user based on the head pose; and, based on the three-dimensional face model, calculating a blink behavior index, a pupil and eye movement index, and a head posture index of the target user to determine the visual fatigue index of the target user.
  3. The auxiliary learning method according to claim 2, wherein constructing the three-dimensional face model of the target user comprises: constructing a basic three-dimensional face model of the target user, and determining facial feature points of the target user based on the facial features so as to map them to corresponding points on the model; determining a rotation matrix and a translation vector of the model based on the facial feature points and the model corresponding points, and calculating pose parameters and shape parameters of the model from the rotation matrix and translation vector so as to project the model into two-dimensional image space and obtain mapped feature points; and calculating the error between the facial feature points and the mapped feature points to optimize the basic model, thereby obtaining the three-dimensional face model.
  4. The auxiliary learning method according to claim 2, wherein determining the visual focus of the target user based on the head pose comprises: determining a head central axis and an eye central axis from the head pose; and calculating a head direction vector of the head central axis from the rotation matrix and an initial direction vector of the head pose, determining an eye direction vector of the eye central axis from the pupil centers of the head pose, and calculating the distance between the head central axis and the eye central axis so as to calculate the visual focus of the target user.
  5. The auxiliary learning method according to claim 2, wherein calculating the blink behavior index, the pupil and eye movement index, and the head posture index of the target user based on the three-dimensional face model comprises: calculating the blink frequency, blink duration and PERCLOS of the target user based on an eye feature sequence of the three-dimensional face model to determine the blink behavior index; calculating pupil diameter change, saccade speed, saccade amplitude and gaze point dispersion of the target user to calculate the pupil and eye movement index; and calculating the nodding frequency and head posture stability of the target user from a head pose sequence of the three-dimensional face model to determine the head posture index.
  6. The auxiliary learning method according to claim 5, wherein calculating the gaze point dispersion of the target user comprises: generating a 3D gaze point cloud of the target user based on the eye feature sequence to calculate the spatial probability density of the gaze points of the target user in 3D space; and analyzing the information entropy of the gaze points based on the spatial probability density to determine the gaze point dispersion of the target user.
  7. The auxiliary learning method according to claim 1, wherein integrating the full-dimensional environment sensing matrix of the desk lamp comprises: integrating a central main control unit and multidimensional subspace units of the desk lamp, defining a physical layout and a communication protocol between the central main control unit and the multidimensional subspace units, and integrating the full-dimensional environment sensing matrix of the desk lamp based on the physical layout and the communication protocol.
  8. The auxiliary learning method according to claim 1, wherein analyzing the user portrait information of the target user based on the high-dimensional perception feature stream comprises: extracting high-level feature vectors from the high-dimensional perception feature stream, and analyzing the concentration tolerance, knowledge weak-point map and learning preferences of the target user from the high-level feature vectors; and constructing the user portrait information of the target user based on the concentration tolerance, the knowledge weak-point map and the learning preferences.
  9. The auxiliary learning method according to claim 1, wherein constructing the bidirectional interaction channel between the target user and the desk lamp comprises: clarifying the output modes of the desk lamp and the input modes of the target user; generating a three-layer intention negotiation protocol of the desk lamp from the output modes and the input modes, the protocol comprising an implicit intention proposal layer, an explicit confirmation and correction layer, and a negotiation learning layer; and constructing the bidirectional interaction channel between the target user and the desk lamp based on the three-layer intention negotiation protocol.
  10. An auxiliary learning system based on an AI multi-modal interactive screen-projection eye-protection desk lamp, characterized in that the system comprises: a high-dimensional data acquisition module for integrating a full-dimensional environment sensing matrix of the desk lamp to acquire, in real time, a high-dimensional perception feature stream of a target user during learning; a learning environment construction module for analyzing user portrait information of the target user based on the high-dimensional perception feature stream to determine potential requirements of the target user, calculating a visual fatigue index and a visual focus of the target user to analyze the current learning state of the target user, and constructing an auxiliary learning environment for the target user based on the potential requirements and the current learning state; and an auxiliary learning execution module for constructing a bidirectional interaction channel between the target user and the desk lamp, and optimizing learning environment characteristics of the auxiliary learning environment based on the bidirectional interaction channel so that the desk lamp executes auxiliary learning.
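Claims 2 and 5 derive a blink behavior index from an eye feature sequence, including blink frequency, blink duration and PERCLOS (the percentage of time the eyes are closed). The patent does not disclose concrete formulas; the sketch below shows one common way to obtain these metrics from a per-frame eye aspect ratio (EAR) stream. The 0.2 closure threshold, the function name and the EAR input are illustrative assumptions, not taken from the patent.

```python
# Sketch: blink-behavior metrics from a per-frame eye-aspect-ratio (EAR) stream.
# The EAR_CLOSED threshold is an assumed value, not one disclosed in the patent.
EAR_CLOSED = 0.2  # eye counted as closed below this aspect ratio

def blink_metrics(ear_series, fps):
    """Return (perclos, blink_frequency_hz, mean_blink_duration_s)."""
    closed = [e < EAR_CLOSED for e in ear_series]
    # PERCLOS: fraction of frames in which the eye is closed.
    perclos = sum(closed) / len(closed)

    # Count contiguous closed runs as blinks and record their lengths (in frames).
    blinks, run = [], 0
    for c in closed:
        if c:
            run += 1
        elif run:
            blinks.append(run)
            run = 0
    if run:
        blinks.append(run)

    duration_s = len(ear_series) / fps
    blink_freq = len(blinks) / duration_s
    mean_blink = (sum(blinks) / len(blinks) / fps) if blinks else 0.0
    return perclos, blink_freq, mean_blink
```

For example, a 40-frame series at 30 fps containing two 5-frame closures yields a PERCLOS of 0.25 and two blinks of about 0.17 s each.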

Description

Auxiliary learning method and system based on AI multi-modal interactive screen-projection eye-protection desk lamp

Technical Field

The invention relates to an auxiliary learning method and system based on an AI multi-modal interactive screen-projection eye-protection desk lamp, and belongs to the technical field of multi-modal interaction.

Background

Desk-lamp-assisted learning treats the desk lamp as an intelligent terminal that goes beyond its basic lighting function: through active sensing, intelligent analysis and personalized interaction it creates a more efficient, healthier and more immersive learning environment, thereby optimizing the learning process and improving learning outcomes. By monitoring concentration, the lamp can intervene before the user becomes distracted so as to maintain an efficient learning state; when the user is confused, it can project relevant auxiliary information beside the book in real time to minimize interruption; and it can monitor and correct poor sitting posture and eye-to-book distance in real time, actively reminding the user to rest according to fatigue level, thereby providing effective health management. Traditional desk-lamp-assisted learning provides only static or semi-static, standardized physical support through preset, fixed hardware functions: for example, conventional pulse-width modulation (PWM) light control realizes dimming and colour mixing, and a single distance sensor detects the distance between the user's posture and the book to monitor sitting posture, thereby assisting the user's learning.
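The background refers to conventional PWM light control for dimming and colour mixing. As a minimal sketch of what that entails, the fragment below maps a brightness level and colour temperature to duty cycles for a two-channel (warm/cool white) LED lamp; the two-channel layout, the temperature endpoints and the linear mix are assumptions for illustration, not details from the patent.

```python
# Sketch: the conventional PWM dimming/colour-mixing the background refers to.
# A real lamp would feed these duty cycles to a hardware timer peripheral.

def pwm_duty_cycles(brightness, colour_temp_k, warm_k=2700, cool_k=6500):
    """Map brightness in [0, 1] and a colour temperature (K) to
    (warm_duty, cool_duty) for a two-channel white LED lamp."""
    t = (colour_temp_k - warm_k) / (cool_k - warm_k)  # 0 = all warm, 1 = all cool
    t = min(max(t, 0.0), 1.0)                         # clamp out-of-range requests
    return brightness * (1.0 - t), brightness * t
```

At the midpoint temperature (4600 K here) the two channels share the requested brightness equally, which is the usual linear colour-mixing behaviour of dual-white PWM drivers.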
Disclosure of Invention

The invention provides an auxiliary learning method and system based on an AI multi-modal interactive screen-projection eye-protection desk lamp, whose main aim is to improve the adaptability and comprehensiveness of desk-lamp-assisted learning. To this end, the auxiliary learning method provided by the invention comprises the following steps: integrating a full-dimensional environment sensing matrix of the desk lamp to acquire, in real time, a high-dimensional perception feature stream of a target user during learning; analyzing user portrait information of the target user based on the high-dimensional perception feature stream to determine potential requirements of the target user, calculating a visual fatigue index and a visual focus of the target user to analyze the current learning state of the target user, and constructing an auxiliary learning environment for the target user based on the potential requirements and the current learning state; and constructing a bidirectional interaction channel between the target user and the desk lamp, and optimizing learning environment characteristics of the auxiliary learning environment based on the bidirectional interaction channel so that the desk lamp executes auxiliary learning.
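The disclosure computes a head direction vector by applying the head pose's rotation matrix to an initial direction vector and uses it to locate the visual focus. A minimal sketch of that idea, assuming the rotation is parameterized by yaw/pitch Euler angles, an initial forward vector of (0, 0, 1), and a horizontal desk plane; none of these specifics appear in the patent.

```python
import math

# Sketch: visual-focus estimation from head pose. We rotate an assumed initial
# forward vector (0, 0, 1) by the head's rotation (yaw about y, then pitch
# about x) and intersect the resulting ray with a horizontal desk plane.

def head_direction(yaw, pitch):
    """Head direction vector: Ry(yaw) @ Rx(pitch) applied to (0, 0, 1)."""
    dx = math.cos(pitch) * math.sin(yaw)
    dy = -math.sin(pitch)          # positive pitch = looking downward here
    dz = math.cos(pitch) * math.cos(yaw)
    return dx, dy, dz

def focus_on_desk(head_pos, yaw, pitch, desk_y=0.0):
    """Intersect the head-direction ray with the plane y = desk_y, or None."""
    dx, dy, dz = head_direction(yaw, pitch)
    if dy >= 0:                    # looking level or upward: no intersection
        return None
    t = (desk_y - head_pos[1]) / dy
    return head_pos[0] + t * dx, desk_y, head_pos[2] + t * dz
```

For instance, a head 0.4 m above the desk and pitched 45 degrees downward places the focus 0.4 m forward of the head's horizontal position.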
Optionally, calculating the visual fatigue index and visual focus of the target user includes: extracting eye movement features and facial features corresponding to the target user from the high-dimensional perception feature stream to construct a three-dimensional face model of the target user, analyzing the head pose of the target user from the three-dimensional face model, and determining the visual focus of the target user based on the head pose; and, based on the three-dimensional face model, calculating a blink behavior index, a pupil and eye movement index, and a head posture index of the target user to determine the visual fatigue index of the target user. Optionally, constructing the three-dimensional face model of the target user includes: constructing a basic three-dimensional face model of the target user, and determining facial feature points of the target user based on the facial features so as to map them to corresponding points on the model; determining a rotation matrix and a translation vector of the model based on the facial feature points and the model corresponding points, and calculating pose parameters and shape parameters of the model from the rotation matrix and translation vector so as to project the model into two-dimensional image space and obtain mapped feature points; and calculating the error between the facial feature points and the mapped feature points to optimize the basic model, thereby obtaining the three-dimensional face model. Optionally, determining the visual focus of the target user based on the head pose includes: det