
CN-121978837-A - Cognitive enhancement type AR intelligent glasses system with brain-eye coordination and no language interaction

CN 121978837 A

Abstract

The invention discloses a cognitive-enhancement AR smart glasses system with brain-eye coordination and language-free interaction, belonging to the technical fields of smart wearable devices, augmented reality, and non-invasive brain-computer interfaces. Addressing the industry pain points of existing AR devices (high risk of visual injury, cumbersome interaction, large single-modality intent-recognition error, bulky form factor, and lack of a core cognitive-enhancement capability), the system realizes full-link closed-loop coupling of full-spectrum ambient-light-modulated zero-injury display, ocular visual-frequency regulation and enhancement, brain-wave cooperative neural modulation, eye-movement/EEG dual-lock language-free interaction, and closed-loop cognitive-evolution training. Through the double check of eye-movement spatial locking and EEG intent verification, the system raises intent-recognition accuracy above 95% and reduces the false-trigger rate below 0.1%, enabling contactless, language-free, consciousness-level interaction while expanding the visual boundary of the human eye and enhancing cognitive ability. The whole device adopts a miniaturized integrated design and a human-body self-powered scheme, so the wearing experience is consistent with that of ordinary glasses. The system can be widely applied in consumer electronics, medical rehabilitation, industrial assistance, cognitive training, and other fields.

Inventors

  • ZHANG HENG

Assignees

  • ZHANG Heng (张恒)

Dates

Publication Date
2026-05-05
Application Date
2026-03-15

Claims (13)

  1. A cognitive-enhancement AR smart glasses system with brain-eye coordination and language-free interaction, characterized by comprising a glasses-frame body, a transparent optical-waveguide lens module with customizable refractive power, a full-spectrum ambient-light-modulated transparent display module, an eye-tracking and visual-frequency regulation module, an EEG acquisition and neural regulation module, a bimodal dual-lock interaction control module, a language-free EEG-decoding artificial-intelligence interaction module, an on-device AI edge-computing module, and a biological-level self-powered module; the full-spectrum ambient-light-modulated transparent display module is integrated in the optical-waveguide lens and is used for realizing backlight-free imaging on the transparent lens by modulating ambient natural light; the eye-tracking and visual-frequency regulation module is integrated at the front end of the frame and is used for collecting the user's eye-movement and pupil-change signals and for outputting microampere-level transocular electrical-stimulation signals matched to the natural firing frequency of retinal ganglion cells; the EEG acquisition and neural regulation module is integrated on the inner side of the frame and is used for collecting the user's frontal-lobe and visual-cortex brain-wave signals in real time and for cooperatively outputting transcranial alternating-current stimulation signals matched to the visual-stimulation frequency; the bimodal dual-lock interaction control module is electrically connected with the eye-tracking and visual-frequency regulation module and the EEG acquisition and neural regulation module and is used for realizing contactless device control through the dual rule of eye-movement spatial locking and EEG intent verification; the language-free EEG-decoding artificial-intelligence interaction module is electrically connected with the EEG acquisition and neural regulation module and the bimodal dual-lock interaction control module and is used for decoding the user's semantic features; the on-device AI edge-computing module and the biological-level self-powered module are integrated in the frame and are used for realizing cooperative control and power supply of the whole system.
  2. The system of claim 1, wherein the full-spectrum ambient-light-modulated transparent display module comprises a double-layer nanostructured holographic optical-waveguide lens, wherein the inner layer of the lens is an ambient-light-modulated imaging layer for modulating ambient natural light through electrochromic nanostructures to realize imaging, with working power consumption at the microwatt level, and the outer layer of the lens is a full-spectrum adaptation layer for matching the spectral distribution, color temperature, and brightness of external natural light in real time, so that the displayed image is visually consistent with the real scene.
  3. The system of claim 2, wherein the full-spectrum ambient-light-modulated transparent display module further comprises a vergence-accommodation-brain-wave tri-synchronous light-field rendering unit for dynamically rendering the light-field picture based on real-time feedback of the eye-tracking signal and brain waves, so that the human-eye focusing distance, the virtual focal distance of the picture, and the response frequency of the brain's visual cortex are fully synchronized.
  4. The system of claim 1, wherein the eye-tracking and visual-frequency regulation module comprises a low-power infrared eye-tracking unit with a sampling rate of no less than 100 Hz, a wide-spectrum sensing unit, and an ocular micro-electrical-stimulation unit; the low-power infrared eye-tracking unit is used for collecting the user's gaze position, dwell time, pupil dilation amplitude, and eyeball rotation trajectory; the wide-spectrum sensing unit is used for collecting near-infrared and short-wave ultraviolet rays invisible to the human eye, converting them into picture signals matched to the visible band of the human eye, and transmitting them to the display module; the ocular micro-electrical-stimulation unit is used for outputting a 10-80 Hz microampere sine-wave current matched to the natural firing frequency of retinal ganglion cells, regulating the response frequency of the visual system and improving the sensitivity of retinal photoreceptor cells.
  5. The system of claim 1, wherein the EEG acquisition and neural regulation module comprises flexible nano-gold EEG acquisition electrodes and a transcranial alternating-current stimulation unit; the flexible nano-gold electrodes are attached to the inner side of the temples, corresponding to the user's frontal and temporal lobes, and are used for acquiring the user's brain-wave signals in real time at a sampling rate of no less than 250 Hz; the transcranial alternating-current stimulation unit is used for outputting an alternating current cooperatively synchronized with the visual-stimulation frequency, precisely regulating the rhythms of the user's alpha, beta, and gamma waves to achieve the corresponding effects of relaxation, concentration, and cognitive enhancement.
  6. The system of claim 1, wherein the bimodal dual-lock interaction control module is provided with a fixed-threshold double-verification rule, specifically comprising: an eye-movement valid-lock judgment, in which the user's eyeball remains stably fixated on the target area for 100 ms to 500 ms, the pupil focusing depth matches the target plane, and invalid blink and unconscious-rotation signals are eliminated, whereupon valid spatial positioning is judged; an EEG valid-intent judgment, in which the user's EEG signals are acquired synchronously and a valid operation intent is judged when the decoding confidence of the user's operation intent is no less than 90%; and an instruction-execution rule, in which the system triggers the corresponding operation only when valid spatial positioning and valid operation intent occur simultaneously with a time difference of no more than 100 ms, and the signal is discarded if any condition is not met.
  7. The system of claim 6, wherein the bimodal dual-lock interaction control module further comprises a scene-adaptive dynamic weight adjustment unit for automatically adjusting the recognition weights of the eye-movement and EEG signals according to the user's current usage scene: the weight of the EEG signal is raised in hands-occupied scenes, and the weight of the eye-movement signal is raised in fatigued or noisy environments.
  8. The system of claim 1, wherein the language-free EEG-decoding artificial-intelligence interaction module comprises a personalized EEG semantic-dictionary self-learning unit and a scene-memory retrieval auxiliary unit; the personalized EEG semantic-dictionary self-learning unit is used for establishing, through unsupervised learning, a user-specific mapping dictionary from EEG features to semantic intents and for decoding the user's silent demand instructions; the scene-memory retrieval auxiliary unit is used for encrypting and storing the user's travel tracks, visual pictures, and environmental information locally on the device, and for automatically retrieving and displaying the address, coordinates, and picture information of the corresponding scene when a memory-retrieval EEG feature of the user is decoded.
  9. The system of claim 1, wherein the language-free EEG-decoding artificial-intelligence interaction module further comprises an implicit social-perception enhancement unit for collecting the micro-expressions, pupil changes, subtle limb movements, and skin-temperature signals of the interaction counterpart through a miniature wide-angle camera and an infrared sensing chip, and for analyzing emotional tendency and intent credibility through an artificial-intelligence model.
  10. The system of claim 9, wherein the implicit social-perception enhancement unit is further electrically connected to the EEG acquisition and neural regulation module for intuitively feeding the analysis result back to the user via the transcranial alternating-current stimulation unit.
  11. The system of claim 1, wherein the biological-level self-powered module comprises flexible thermoelectric generation sheets integrated at the frame front and temples, piezoelectric generation sheets integrated at the frame hinges, a matched energy-management circuit, and a miniature supercapacitor, and is used for harvesting energy in real time from the temperature difference between the human body and the environment and from the user's limb and head movements to power the whole system, the operating current of the whole device being matched to the bioelectric level of the human body.
  12. The system of claim 1, further comprising a cognitive-evolution closed-loop training module configured to progressively enhance the user's concentration, memory, information-processing speed, and perception through an interaction-regulation-feedback-upgrade closed loop, achieving continuous enhancement of the user's cognitive ability based on the principle of neural plasticity.
  13. The system of claim 1, wherein the frame body is a fully stacked integrated architecture, in which the AI edge-computing module, the sensing modules, and the driving circuits are all integrated at the nose bridge and the front ends of the temples; the weight of the whole device is no more than 30 g, and its appearance is consistent with that of ordinary acetate glasses.
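The fixed-threshold double-verification rule of claim 6 can be sketched as a small decision function. This is an illustrative reading of the claim, not an implementation from the patent: the threshold constants mirror the claimed values, while the function and parameter names are hypothetical.

```python
# Sketch of claim 6's dual-lock rule: a command fires only when the
# eye-movement spatial lock and the EEG intent check both pass, within
# a bounded time skew. Thresholds are taken from the claim text.

DWELL_MIN_MS = 100        # minimum stable fixation on the target area
DWELL_MAX_MS = 500        # maximum fixation window per the claim
EEG_CONFIDENCE_MIN = 0.90 # decoded intent confidence must be >= 90%
MAX_LOCK_SKEW_MS = 100    # both locks must coincide within 100 ms

def gaze_lock_valid(dwell_ms, depth_matches_target, is_blink_or_saccade):
    """Eye-movement valid-lock judgment: stable dwell, pupil focusing
    depth matching the target plane, and no blink/unconscious-rotation
    artifacts."""
    return (DWELL_MIN_MS <= dwell_ms <= DWELL_MAX_MS
            and depth_matches_target
            and not is_blink_or_saccade)

def eeg_intent_valid(confidence):
    """EEG valid-intent judgment: decoding confidence >= 90%."""
    return confidence >= EEG_CONFIDENCE_MIN

def should_execute(gaze_locked, eeg_valid, t_gaze_ms, t_eeg_ms):
    """Instruction-execution rule: trigger only when both locks hold and
    their timestamps differ by no more than 100 ms; otherwise the signal
    is discarded and no command is issued."""
    return (gaze_locked and eeg_valid
            and abs(t_gaze_ms - t_eeg_ms) <= MAX_LOCK_SKEW_MS)
```

Note how the rule is conjunctive by design: failing any single condition discards the event, which is what drives the claimed sub-0.1% false-trigger rate.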
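The scene-adaptive weight adjustment of claim 7 can likewise be sketched. The claim fixes only the direction of adjustment (raise the EEG weight when the hands are occupied, raise the eye-movement weight under fatigue or noise); the 50/50 baseline and the 20-point shift below are illustrative assumptions.

```python
# Minimal sketch of claim 7's scene-adaptive dynamic weight adjustment.
# Integer percentages are used so the split always sums to exactly 100.

def modality_weights(hands_busy=False, fatigued_or_noisy=False):
    """Return (eye_weight_pct, eeg_weight_pct) for fusing the two
    modalities, always summing to 100."""
    eye, eeg = 50, 50            # assumed neutral baseline
    if hands_busy:               # hands occupied: lean on EEG intent
        eye, eeg = eye - 20, eeg + 20
    if fatigued_or_noisy:        # fatigue/noise degrades EEG: lean on gaze
        eye, eeg = eye + 20, eeg - 20
    return eye, eeg
```

When both conditions hold the two shifts cancel, returning to the neutral split, which is one plausible resolution the claim leaves unspecified.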
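Claim 8's personalized EEG-feature-to-semantic-intent dictionary can be illustrated with a toy nearest-centroid lookup. Real EEG decoding requires signal processing and trained models; here a plain centroid table stands in for the learned mapping, and all names (`EEGSemanticDictionary`, `learn`, `decode`) are hypothetical.

```python
# Toy stand-in for claim 8's self-learned EEG semantic dictionary:
# each semantic intent maps to the centroid of its EEG feature vectors,
# and decoding returns the intent with the nearest centroid.

import math

class EEGSemanticDictionary:
    def __init__(self):
        self.centroids = {}  # semantic label -> mean feature vector

    def learn(self, label, samples):
        """Store the mean feature vector for one semantic intent
        (a simplification of the claim's unsupervised self-learning)."""
        n, dims = len(samples), len(samples[0])
        self.centroids[label] = [sum(s[d] for s in samples) / n
                                 for d in range(dims)]

    def decode(self, features):
        """Return the semantic intent whose centroid lies nearest to the
        observed EEG feature vector (Euclidean distance)."""
        def dist(c):
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(features, c)))
        return min(self.centroids, key=lambda label: dist(self.centroids[label]))
```

The point of the sketch is the shape of the interface: the mapping is user-specific and built from the user's own signals, rather than a fixed vocabulary shipped with the device.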

Description

Cognitive enhancement type AR intelligent glasses system with brain-eye coordination and language-free interaction

Technical Field

The invention belongs to the technical fields of smart wearable devices, augmented reality, and non-invasive brain-computer interfaces, and particularly relates to a cognitive-enhancement AR smart glasses system with brain-eye coordination and language-free interaction.

Background

With the development of smart wearable technology, AR smart glasses are widely regarded as the mainstream next-generation terminal to succeed the smartphone. Existing mainstream consumer AR devices, such as Microsoft HoloLens 2, Magic Leap 2, Thunderbird (RayNeo) AR, and Rokid AR, as well as VR headsets with AR pass-through, such as Apple Vision Pro and the Meta Quest series, still suffer from the following unsolved core pain points:

1. High risk of visual injury. Most existing AR devices use actively emissive Micro-OLED/Micro-LED screens, which suffer from vergence-accommodation conflict, blue-light hazard, and glare-induced visual fatigue; long-term wear can seriously damage the user's vision. They also cannot achieve full-spectrum adaptation between the displayed image and ambient natural light, producing a strong sense of visual discontinuity that further aggravates visual fatigue.

2. Cumbersome interaction. Existing devices still rely on interaction modes such as controllers, touch input, voice commands, and gesture recognition, requiring the user to deliberately adapt to the device; they cannot offer perception-level interaction matching natural human cognitive habits, and existing interaction modes fail entirely in scenarios such as hands-occupied work or noisy environments.

3. Large single-modality intent-recognition error. Pure eye-movement interaction has a high false-trigger rate, while pure EEG interaction has a low signal-to-noise ratio and high error in complex semantic decoding, so neither meets the requirements of consumer products; the few schemes that combine eye movement and EEG merely splice the two modules together without signal-level synchronous coupling and double verification, so misrecognition and false triggering cannot be solved at the root.

4. Bulky form factor and poor wearing experience. Existing AR devices adopt external batteries and large frames, with total weight generally above 300 g; even the lightest consumer-grade AR glasses exceed 50 g. Wearing discomfort is strong, all-day daily wear is impossible, and the large-capacity lithium batteries used offer short battery life that cannot meet long-term use requirements.

5. Single function positioning with no core value barrier. Existing devices serve only as screen-casting display terminals, merely translating smartphone functions; they cannot enhance human vision or cognitive ability and thus cannot form core competitiveness distinct from the smartphone. Existing non-invasive brain-computer interface devices are mostly standalone head-mounted units that cannot form a closed-loop coupling with an AR display system, cannot realize the complete link of visual stimulation, neural regulation, interaction feedback, and capability enhancement, and are difficult to bring to consumer scenarios.

Therefore, developing an AR smart glasses system with zero eye injury, natural interaction, accurate intent recognition, a wearing experience consistent with ordinary glasses, and cognitive-enhancement capability is the core problem the industry urgently needs to solve, and is also the core development direction of next-generation smart terminals.

Disclosure of Invention

1. Technical problem to be solved

The invention aims to overcome the defects of the prior art by providing a cognitive-enhancement AR smart glasses system with brain-eye coordination and language-free interaction, which solves the core pain points of existing AR devices (large intent-recognition error, bulky form, and no cognitive-enhancement capability) while realizing the core requirements of consciousness-level interaction, visual-boundary expansion, and cognitive-ability improvement.

2. Technical solution

To achieve the above purpose, the invention adopts the following technical scheme: a cognitive-enhancement AR smart glasses system with brain-eye coordination and language-free interaction comprises a glasses fr