CN-122023735-A - Method for synthesizing generative-AI training scenes for vestibular rehabilitation
Abstract
A method for synthesizing generative-AI training scenes for vestibular rehabilitation, belonging to the technical field of medical artificial intelligence. The method comprises: obtaining the vestibular function defect characteristics of a subject, the characteristics including a caloric test asymmetry ratio, vHIT gain values, or spontaneous nystagmus intensity; encoding the characteristics into a scene control vector comprising visual motion direction, angular velocity amplitude, acceleration change rate, and spatial complexity, wherein the angular velocity amplitude is dynamically calculated from the patient's compensation threshold and functional retention rate and covers 60-90% of the compensation threshold; inputting the vector into a pre-trained conditional diffusion model, conditional generative adversarial network (cGAN), or other generative AI model to generate a customized VR scene; and rendering the scene on a head-mounted display device for rehabilitation training. The invention realizes "one person, one scene; precise stimulation", solves problems of existing VR rehabilitation scenes such as fixed content, uncontrollable stimulation, and high development cost, and improves the safety and effectiveness of training.
Inventors
- XIAO WEIMIN
- WANG JUN
- ZHANG RUIGUANG
Assignees
- 西安宏毅科技有限公司
Dates
- Publication Date
- 20260512
- Application Date
- 20251225
Claims (8)
- 1. A method for synthesizing a generative-AI training scene for vestibular rehabilitation, characterized by comprising the following steps: acquiring vestibular function defect characteristics of a subject, wherein the defect characteristics comprise a functional state index of at least one semicircular canal, the functional state index being selected from at least one of a caloric test slow-phase velocity asymmetry ratio, a video head impulse test (vHIT) gain value, or spontaneous nystagmus intensity; encoding the defect characteristics into a scene control vector, wherein the scene control vector comprises four dimensions, namely visual motion direction, angular velocity amplitude, acceleration change rate, and spatial complexity, and the angular velocity amplitude is dynamically calculated from the functional state index and a patient compensation threshold and mapped to the 60-90% interval of the compensation threshold; inputting the scene control vector into a pre-trained generative AI model, the generative AI model being a conditional diffusion model or a conditional generative adversarial network (cGAN) configured to receive the scene control vector and generate an immersive three-dimensional VR scene conforming to physical laws; outputting customized VR scene data from the generative AI model, wherein the VR scene data comprise scene geometry, texture maps, dynamic object trajectories, and illumination parameters, which are rendered in real time on a head-mounted display device for the subject's rehabilitation training; the method further includes a physiological-signal-based safety monitoring mechanism for adjusting or pausing the training scene when an abnormal physiological response is detected.
- 2. The method of claim 1, wherein the compensation threshold is determined from the maximum tolerated angular velocity in historical training data, or dynamically calibrated by an initial adaptation test.
- 3. The method of claim 1, wherein the generative AI model uses a labeled dataset during its training phase, the labeled dataset comprising vestibular defect labels, corresponding safe VR scene samples, and clinician scores of scene stimulus intensity.
- 4. The method of claim 1, wherein the visual motion direction is determined from the anatomical location of the hypofunctional semicircular canal, so as to provide visual motion stimulation in the direction opposite to the defective side and thereby induce vestibular compensation.
- 5. The method of claim 1, wherein the angular velocity amplitude is calculated as: ω_target = ω_max × (0.6 + 0.3 × R), where ω_max is the maximum tolerated angular velocity determined by the initial adaptation test, and R is the functional retention rate, defined as R = G_affected_side / G_healthy_side.
- 6. The method of claim 1, wherein the spatial complexity in the scene control vector is determined from the spontaneous nystagmus intensity V_sp: if V_sp > 2°/s, it is set to "low" (no dynamic objects); otherwise it is set to "medium" (containing a small number of moving objects).
- 7. The method of claim 1, wherein the training of the generative AI model uses the loss function L = λ1·L_recon + λ2·L_perceptual, where λ1 = 1.0, λ2 = 0.5, L_recon is the reconstruction loss, and L_perceptual is the perceptual loss.
- 8. The method of claim 1, wherein the safety monitoring mechanism comprises pausing the current scene and switching to a static screen when the subject's eye-closure time exceeds 40% or the heart rate rises by more than 20%.
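The parameter rules in claims 4-6 and the safety cutoffs in claim 8 can be sketched as plain functions. This is an illustrative reconstruction of the claimed formulas, not the patented implementation; all function and parameter names (`angular_velocity_target`, `spatial_complexity`, `should_pause`, etc.) are our own assumptions.

```python
# Illustrative sketch of the scene-control-vector rules in claims 5-6 and the
# safety cutoffs in claim 8. Names are hypothetical, not taken from the patent.

def angular_velocity_target(omega_max: float, g_affected: float, g_healthy: float) -> float:
    """Claim 5: omega_target = omega_max * (0.6 + 0.3 * R), R = G_affected / G_healthy."""
    r = g_affected / g_healthy          # functional retention rate, 0..1
    return omega_max * (0.6 + 0.3 * r)  # maps into 60-90% of the tolerated maximum

def spatial_complexity(v_sp_deg_per_s: float) -> str:
    """Claim 6: "low" (no dynamic objects) if spontaneous nystagmus > 2 deg/s, else "medium"."""
    return "low" if v_sp_deg_per_s > 2.0 else "medium"

def should_pause(eye_closure_ratio: float, hr_rise_ratio: float) -> bool:
    """Claim 8: pause if eye-closure time exceeds 40% or heart rate rises more than 20%."""
    return eye_closure_ratio > 0.40 or hr_rise_ratio > 0.20

# Example: vHIT gains 0.4 (affected) vs 0.8 (healthy), tolerated maximum 100 deg/s
print(angular_velocity_target(100.0, 0.4, 0.8))  # → 75.0 deg/s
print(spatial_complexity(3.1))                   # → low
print(should_pause(0.45, 0.05))                  # → True
```

Note how the claim-5 formula bounds the stimulus by construction: with R in [0, 1], the target angular velocity always falls in the 60-90% band of ω_max, matching the interval stated in claim 1.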
Description
Synthetic method of generated type AI training scene for vestibular rehabilitation

Technical Field

The invention relates to the intersection of medical rehabilitation technology and artificial intelligence, in particular to a virtual reality (VR) scene generation method for rehabilitation training of patients with vestibular dysfunction, and especially to a method for dynamically synthesizing safe and effective VR training scenes from a patient's individual vestibular defect characteristics using a generative artificial intelligence model.

Background

The vestibular system is the core organ maintaining balance and spatial orientation of the human body. When vestibular function is impaired (as in benign paroxysmal positional vertigo (BPPV), vestibular neuritis, Ménière's disease, and the like), patients often suffer vertigo, instability, nausea, and other symptoms that seriously affect quality of life. Vestibular Rehabilitation Therapy (VRT), an internationally recognized non-pharmaceutical intervention, promotes the compensatory mechanisms of the central nervous system through repeated exposure to specific sensory-conflict stimuli. In recent years, virtual reality (VR) technology has been widely applied to vestibular rehabilitation because it is immersive, controllable, and safe. For example, the platform provided by Neuro Rehab VR in the United States contains more than ten preset scenes such as tightrope walking and supermarket shopping, and some Chinese hospitals have introduced balance-training games developed with the Unity engine.
However, such systems suffer from the following significant drawbacks. The scene content is fixed and limited: all patients use the same scene library, which cannot be customized to an individual's vestibular defect pattern (such as low left horizontal semicircular canal function or bilateral vestibular disease). The stimulation parameters are uncontrollable: key parameters such as visual motion speed, direction, and acceleration are mostly preset values or coarse-grained settings (such as three levels of "low/medium/high"), which are difficult to match precisely to a patient's compensation threshold; stimulation that is too strong readily causes nausea and vomiting, while stimulation that is too weak fails to effectively activate neural plasticity. Development cost is high: each new scene requires artist modeling, programmer coding, and clinician verification, so the cycle is long, the cost is high, and scenes are difficult to update continuously. A physiological closed loop is lacking: existing systems do not take vestibular function examination results (such as caloric tests and the video head impulse test, vHIT) as the input basis for scene generation, so diagnosis and rehabilitation are disconnected. Although some studies attempt to adjust scene difficulty via a rules engine (e.g., speeding up background movement based on center-of-gravity shift), they remain limited by the fixed structure of the underlying scene and cannot generate entirely new spatial layouts or motion logic. In recent years, generative artificial intelligence (generative AI) has made breakthroughs in image and 3D-scene synthesis, and diffusion models and generative adversarial networks (GANs) enable the generation of highly realistic and diverse visual content.
However, no publication or patent has applied generative AI to the specific medical scenario of vestibular rehabilitation, and in particular no mapping mechanism of "vestibular defect features → VR scene parameters" has been established. A technical scheme is therefore needed that can automatically generate a physiologically adapted, safe, controllable, and endlessly diverse VR training environment from a patient's vestibular function evaluation results, so as to improve rehabilitation efficiency and reduce content development cost.

Disclosure of Invention

The invention aims to solve problems of the prior art such as fixed VR rehabilitation scenes, uncontrollable stimulation, and high development cost, and provides a method for synthesizing generative-AI training scenes for vestibular rehabilitation, so as to realize intelligent rehabilitation training that is "one person, one scene; precise stimulation; safe and effective". To achieve the above purpose, the present invention provides the following technical solution. A method for synthesizing a generative-AI training scene for vestibular rehabilitation comprises the following steps. First, the vestibular function defect characteristics of a subject are obtained, wherein the defect characteristics comprise a functional state index of at least one semicircular canal, and the functional state index is selected f
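The claim-7 training objective and the control-vector conditioning interface described above can be sketched as follows. This is a minimal illustration under stated assumptions: the "generator" is a stub standing in for the conditional diffusion model or cGAN, and all names (`total_loss`, `generate_scene`, the dictionary field names) are our own, not the patent's.

```python
# Minimal sketch of the claim-7 loss weighting and the control-vector
# conditioning interface. The generator below is a placeholder stub, not
# the patent's conditional diffusion model / cGAN.

LAMBDA_RECON, LAMBDA_PERCEPTUAL = 1.0, 0.5  # weights fixed in claim 7

def total_loss(l_recon: float, l_perceptual: float) -> float:
    """Claim 7: L = lambda1 * L_recon + lambda2 * L_perceptual."""
    return LAMBDA_RECON * l_recon + LAMBDA_PERCEPTUAL * l_perceptual

def generate_scene(control_vector: dict) -> dict:
    """Stub mapping the four-dimensional scene control vector to placeholder
    VR scene data fields named in claim 1 (geometry, textures, object
    trajectories, lighting)."""
    assert set(control_vector) == {"direction", "omega", "accel_rate", "complexity"}
    return {"geometry": None, "textures": None, "object_tracks": None,
            "lighting": None, "conditioned_on": control_vector}

print(total_loss(0.8, 0.4))  # → 1.0
scene = generate_scene({"direction": "leftward", "omega": 75.0,
                        "accel_rate": 5.0, "complexity": "low"})
print(scene["conditioned_on"]["omega"])  # → 75.0
```

The point of the interface shape is that every generated scene carries the control vector it was conditioned on, so stimulus parameters remain traceable back to the patient's vestibular assessment.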