CN-121999974-A - VR digital virtual person-based training system for assisting in aphasia treatment

CN 121999974 A

Abstract

The invention relates to the technical field of virtual reality, and in particular to a VR digital virtual person-based training system for assisting in aphasia treatment. The system comprises a sight-line coordinate smooth mapping module, which acquires the eyeball rotation angle and pupil center position from an eye-movement sensor of a VR head-mounted display device and generates a VR gaze-point coordinate sequence. When the motion smoothness score is below a set limit, real hand coordinates are mapped directly, preserving motion detail to assist error correction; when the score is above the limit, smooth spline interpolation generates an idealized driving path. This adaptive correction mechanism provides genuine motion feedback while building training confidence through visual optimization. The virtual hand is driven along the generated path to contact the training prop, synchronously triggering the digital human's mouth-shape animation and a highlight prompt. An interactive feedback loop fusing visual, auditory, and tactile senses is thereby constructed, strengthening the neural connection between the patient's action intention and language expression and improving the immersion of rehabilitation training.

Inventors

  • YU GUOHUA
  • FANG JING
  • SHUAI LANG
  • ZHANG GUODONG
  • XIONG JIE

Assignees

  • The First Affiliated Hospital of Nanchang University (南昌大学第一附属医院)

Dates

Publication Date
2026-05-08
Application Date
2026-01-26

Claims (8)

  1. A VR digital virtual person-based training system for assisting in aphasia treatment, the system comprising: a sight-line coordinate smooth mapping module, configured to acquire the eyeball rotation angle and pupil center position from an eye-movement sensor of a VR head-mounted display device, generate a VR gaze-point coordinate sequence, take the user's current view-angle position as an origin, establish three-dimensional ray vectors passing through each point in the VR gaze-point coordinate sequence, and generate a virtual scene collision vector set; an intention focus dwell calculation module, configured to count, from the virtual scene collision vector set, the number of collision points falling within the digital human's mouth mesh region and the training prop's geometric range per unit time, calculate a sight-line dwell density distribution value, and calculate an expression intention confidence index from that distribution value; a motion smoothness analysis module, configured to start hand position tracking when the expression intention confidence index exceeds a preset trigger limit, generate hand motion jerk parameters from the tracked positions, and calculate a motion smoothness scoring coefficient from the jerk parameters; and a virtual interaction path correction module, configured to adopt absolute mapping of the real hand coordinates when the motion smoothness scoring coefficient is below a set limit, and otherwise to extract the action starting point and predicted end point coordinates, calculate smooth spline interpolation curve points between them, generate virtual hand driving path coordinates, drive a virtual hand along those coordinates to contact the training prop, and generate a digital human interaction feedback instruction.
  2. The VR digital virtual person-based aphasia treatment training system of claim 1, wherein the step of obtaining the virtual scene collision vector set is: collecting eyeball rotation angles and pupil center positions from the eye-movement sensor of the VR head-mounted display device, sorting the raw coordinate data by time within a continuous sampling window, and applying weighted moving-average smoothing to the raw coordinate data to generate the VR gaze-point coordinate sequence; and, taking the user's current view-angle position as an origin, computing for each coordinate in the VR gaze-point coordinate sequence the spatial direction pointing from the origin to that coordinate, extending each direction into a ray along a parametric distance, and generating a three-dimensional ray vector set.
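The smoothing and ray construction described in claim 2 can be sketched as follows. This is a minimal illustration, not the patented implementation; the window weights and ray length are assumed parameters.

```python
import numpy as np

def smooth_gaze(raw_points, weights=(0.2, 0.3, 0.5)):
    """Weighted moving average over a sliding window of time-ordered gaze samples.
    The weights are an illustrative assumption (newest sample weighted most)."""
    raw = np.asarray(raw_points, dtype=float)          # shape (N, 3), time-ordered
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                    # normalize window weights
    k = len(w)
    out = [(raw[i - k + 1 : i + 1] * w[:, None]).sum(axis=0)
           for i in range(k - 1, len(raw))]
    return np.array(out)

def gaze_rays(origin, fixation_points, length=10.0):
    """Unit-direction rays from the current viewpoint through each fixation point."""
    origin = np.asarray(origin, dtype=float)
    dirs = np.asarray(fixation_points, dtype=float) - origin
    dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    return [(origin, d, length) for d in dirs]
```

Each ray is kept as an (origin, unit direction, parametric length) triple, matching the claim's "extend the spatial direction into rays along a parameter distance."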
  3. The VR digital virtual person-based aphasia treatment training system of claim 2, wherein the step of obtaining the virtual scene collision vector set further comprises: performing, for each three-dimensional ray in the set, an intersection test against the mesh surfaces of the digital human face region and the training prop model in the virtual scene, extracting the coordinate of each ray's nearest intersection with the model mesh, and generating the virtual scene collision vector set.
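The nearest ray/mesh intersection in claim 3 can be sketched with the standard Möller–Trumbore ray-triangle test; the claim does not specify an algorithm, so this choice is an assumption.

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore intersection: returns distance t along the ray, or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < eps:                      # ray parallel to triangle plane
        return None
    inv = 1.0 / det
    s = origin - v0
    u = s.dot(p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = direction.dot(q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = e2.dot(q) * inv
    return t if t > eps else None           # only hits in front of the origin

def nearest_hit(origin, direction, triangles):
    """Nearest intersection point of one ray against a triangle mesh (list of
    (v0, v1, v2) vertex triples); this is the claim's 'nearest intersecting point'."""
    best = None
    for v0, v1, v2 in triangles:
        t = ray_triangle(origin, direction, v0, v1, v2)
        if t is not None and (best is None or t < best):
            best = t
    return None if best is None else origin + best * direction
```

In practice a VR engine's built-in raycast against the mouth-region and prop meshes would replace the brute-force loop.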
  4. The VR digital virtual person-based aphasia treatment training system of claim 1, wherein the step of obtaining the sight-line dwell density distribution value is: setting a unit-time boundary over the virtual scene collision vector set; reading the timestamp and intersection coordinates of each collision vector; judging whether the intersection coordinates lie within the digital human's mouth mesh region or the training prop's geometric range; and counting the qualifying collision points by region to generate a per-unit-time collision count summary; then, from that summary, computing the surface area of the mouth mesh region and the geometric surface area of the training prop, dividing each region's per-unit-time collision count by the corresponding surface area to obtain a count per unit area, arranging the results in time order, and retaining the region classification labels to form the sight-line dwell density distribution value.
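The binning and area normalization of claim 4 reduce to a small counting routine. A minimal sketch, assuming each collision record has already been classified with a region label:

```python
from collections import defaultdict

def dwell_density(collisions, region_areas, window=1.0):
    """Sight-line dwell density: collision points per time window per region,
    divided by that region's surface area.
    `collisions`: list of (timestamp, region_label) hit records.
    `region_areas`: {region_label: surface area}; labels are assumptions."""
    counts = defaultdict(int)
    for ts, region in collisions:
        if region in region_areas:                  # ignore hits outside tracked regions
            counts[(int(ts // window), region)] += 1
    # time-ordered {(window_index, region): hits per unit surface area}
    return {key: n / region_areas[key[1]] for key, n in sorted(counts.items())}
```

The resulting per-region densities feed the expression intention confidence index, e.g. by weighting dwell on the mouth region and on the prop.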
  5. The VR digital virtual person-based aphasia treatment training system of claim 1, wherein the step of obtaining the hand motion jerk parameters is: starting hand position tracking when the expression intention confidence index exceeds the preset trigger limit; collecting real-time hand displacement coordinates in the virtual space; ordering the displacement coordinates continuously by timestamp; computing the third derivative of displacement with respect to time; and storing the third-derivative value at each time point to generate the hand motion jerk parameters.
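The third time-derivative in claim 5 (jerk) can be approximated by cascaded finite differences of the tracked positions. A sketch assuming timestamped 3-D samples:

```python
import numpy as np

def jerk_parameters(positions, timestamps):
    """Approximate jerk (third derivative of displacement w.r.t. time) by
    differencing position -> velocity -> acceleration -> jerk.
    For N samples this yields N-3 jerk vectors; the pairing of each
    difference with the successive dt is a simplifying assumption."""
    p = np.asarray(positions, dtype=float)              # shape (N, 3)
    t = np.asarray(timestamps, dtype=float)             # shape (N,)
    v = np.diff(p, axis=0) / np.diff(t)[:, None]        # velocity
    a = np.diff(v, axis=0) / np.diff(t[1:])[:, None]    # acceleration
    j = np.diff(a, axis=0) / np.diff(t[2:])[:, None]    # jerk
    return j
```

For a cubic trajectory x(t) = t³ sampled at unit intervals this recovers the exact constant jerk of 6, which makes the routine easy to sanity-check.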
  6. The VR digital virtual person-based aphasia treatment training system of claim 1, wherein the step of obtaining the motion smoothness scoring coefficient is: determining the start and end times of the current action execution period from the hand motion jerk parameters; computing the root mean square of the jerk parameters over that interval to obtain an average jitter intensity value; extracting the maximum jerk magnitude over the interval as the instantaneous jitter peak; and calculating the motion smoothness scoring coefficient from the average jitter intensity value and the instantaneous jitter peak.
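Claim 6 combines an RMS term and a peak term into one coefficient but does not give the combining rule; the weighted sum below is an assumption for illustration.

```python
import numpy as np

def smoothness_score(jerk, w_rms=1.0, w_peak=0.5):
    """Motion smoothness scoring coefficient from jerk vectors (shape (M, 3)).
    RMS of jerk magnitudes = average jitter intensity; max magnitude =
    instantaneous jitter peak. Weights are illustrative assumptions;
    a higher score indicates jerkier, less smooth motion."""
    mags = np.linalg.norm(np.asarray(jerk, dtype=float), axis=1)
    rms = np.sqrt(np.mean(mags ** 2))       # average jitter intensity value
    peak = mags.max()                       # instantaneous jitter peak
    return w_rms * rms + w_peak * peak
```

Under this convention, scores below the set limit indicate motion smooth enough to map directly, matching the abstract's mode selection.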
  7. The VR digital virtual person-based aphasia treatment training system of claim 1, wherein the step of obtaining the virtual hand driving path coordinates is: comparing the motion smoothness scoring coefficient against the set limit; when the coefficient is below the limit, recording absolute mapping of the real hand coordinates and marking direct mode; when the coefficient is above the limit, extracting the action starting point and predicted end point coordinates and marking interpolation mode, thereby producing a path generation mode and an input coordinate set; then, in direct mode, taking the real hand coordinates as track points in timestamp order, and, in interpolation mode, calculating smooth spline interpolation curve points between the starting point and predicted end point and sampling them in parameter order, to generate the virtual hand driving path coordinates.
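The mode switch of claim 7 can be sketched as below. The patent calls for smooth spline interpolation; a C1-smooth smoothstep ease curve stands in for the unspecified spline, and taking the last tracked sample as the predicted end point is likewise an assumption.

```python
import numpy as np

def driving_path(real_coords, score, limit, n_samples=20):
    """Below the limit: pass the real hand coordinates through unchanged
    ('direct' mode, preserving motion detail). At or above the limit:
    generate an idealized path between the start and a predicted end point
    ('interpolation' mode), here via a smoothstep ease curve as a stand-in
    for the claimed spline."""
    real = np.asarray(real_coords, dtype=float)     # time-ordered (N, 3) samples
    if score < limit:
        return "direct", real
    start, end = real[0], real[-1]                  # predicted end point assumed = last sample
    u = np.linspace(0.0, 1.0, n_samples)            # parameter sequence
    s = 3.0 * u**2 - 2.0 * u**3                     # smoothstep: C1 ease-in/ease-out
    return "interpolation", start + s[:, None] * (end - start)
```

Both modes emit a time/parameter-ordered coordinate set, so the downstream virtual-hand driver need not know which mode produced it.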
  8. The VR digital virtual person-based aphasia treatment training system of claim 1, wherein the step of obtaining the digital human interaction feedback instruction is: driving the virtual hand along the virtual hand driving path coordinates to the contact point on the training prop surface; recording the contact event timestamp; reading the training prop identifier; matching the digital human mouth-shape animation code and the highlight prompt signal code; and combining the action execution field, the animation trigger field, and the highlight trigger field to generate the digital human interaction feedback instruction.
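The instruction assembly of claim 8 is essentially a lookup-and-combine step. A minimal sketch; the field names and lookup tables are assumptions, not the patent's actual message format.

```python
def feedback_instruction(prop_id, contact_time, anim_table, highlight_table):
    """Combine the action execution, animation trigger, and highlight trigger
    fields into one digital human interaction feedback instruction.
    `anim_table` maps prop identifiers to mouth-shape animation codes and
    `highlight_table` maps them to highlight prompt signal codes."""
    return {
        "action": {"prop": prop_id, "contact_timestamp": contact_time},
        "animation": anim_table.get(prop_id, "anim_idle"),       # fallback codes are
        "highlight": highlight_table.get(prop_id, "hl_none"),    # illustrative only
    }
```

Dispatching one such record per contact event gives the synchronized mouth-shape animation and highlight prompt described in the abstract.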

Description

VR digital virtual person-based training system for assisting in aphasia treatment

Technical Field

The invention relates to the technical field of virtual reality, and in particular to a VR digital virtual person-based training system for assisting in aphasia treatment.

Background

Virtual reality technology integrates multiple disciplines, including computer graphics, human-machine interaction, sensor technology, three-dimensional display, and artificial intelligence, with the aim of generating through a computer a simulated environment with vivid visual, auditory, tactile, and even olfactory sensations. In human-machine interaction, existing virtual reality systems generally adopt direct mapping of raw data: the sight-line direction or controller position acquired by a sensor is converted directly into a cursor or object coordinate in the virtual space. In rehabilitation scenarios for people with nervous system damage, this interaction mechanism lacks intention filtering and cannot screen out the pathological nystagmus or unintentional glances common in such patients. The sight line therefore frequently drifts or jumps in the virtual scene, easily causing the system to misjudge non-target regions or to trigger interaction instructions unexpectedly before the patient is ready, interfering with the continuity of training. Improvements are therefore needed.

Disclosure of Invention

The invention aims to overcome the defects of the prior art by providing a VR digital virtual person-based training system for assisting in aphasia treatment.
To achieve this purpose, the invention adopts the following technical scheme. The VR digital virtual person-based training system for assisting in aphasia treatment comprises: a sight-line coordinate smooth mapping module, configured to acquire the eyeball rotation angle and pupil center position from an eye-movement sensor of a VR head-mounted display device, generate a VR gaze-point coordinate sequence, take the user's current view-angle position as an origin, establish three-dimensional ray vectors passing through each point in the VR gaze-point coordinate sequence, and generate a virtual scene collision vector set; an intention focus dwell calculation module, configured to count, from the virtual scene collision vector set, the number of collision points falling within the digital human's mouth mesh region and the training prop's geometric range per unit time, calculate a sight-line dwell density distribution value, and calculate an expression intention confidence index from that distribution value; a motion smoothness analysis module, configured to start hand position tracking when the expression intention confidence index exceeds a preset trigger limit, generate hand motion jerk parameters from the tracked positions, and calculate a motion smoothness scoring coefficient from the jerk parameters; and a virtual interaction path correction module, configured to adopt absolute mapping of the real hand coordinates when the motion smoothness scoring coefficient is below a set limit, and otherwise to extract the action starting point and predicted end point coordinates, calculate smooth spline interpolation curve points between them, generate virtual hand driving path coordinates, drive a virtual hand along those coordinates to contact the training prop, and generate a digital human interaction feedback instruction.

Preferably, the step of obtaining the virtual scene collision vector set includes: collecting eyeball rotation angles and pupil center positions from the eye-movement sensor of the VR head-mounted display device, sorting the raw coordinate data by time within a continuous sampling window, and applying weighted moving-average smoothing to the raw coordinate data to generate the VR gaze-point coordinate sequence; and, taking the user's current view-angle position as an origin, computing for each coordinate in the VR gaze-point coordinate sequence the spatial direction pointing from the origin to that coordinate, extending each direction into a ray along a parametric distance, and generating a three-dimensional ray vector set.

Preferably, the step of obtaining the virtual scene collision vector set further includes: performing, for each three-dimensional ray in the set, an intersection test against the mesh surfaces of the digital human face region and the training prop model in the virtual scene, extracting the coordinate of each ray's nearest intersection with the model mesh, and generating the virtual scene collision vector set.