CN-121281782-B - Wound care guidance system with remote synchronous video recognition function

Abstract

The application belongs to the technical field of wound care, and discloses a wound care guidance system with remote synchronous video recognition, comprising a video acquisition module, an AR measurement module, a dual-channel calculation module, a data fusion module and a specified suggestion generation module. The video acquisition module performs video data acquisition; the AR measurement module dynamically superimposes a preset virtual scale template on corrected video image frames; and the dual-channel calculation module uses a parallel processing mechanism to calculate the wound necrosis resolution rate and the granulation growth slope from dynamic indexes, so as to quantify the wound's change trend, undermining depth and fistula nodes. By superimposing the virtual scale in augmented reality, the AR measurement module accurately marks the wound's length, width, area, volume, undermining, sinus-tract and fistula parameters in the real-time video stream; meanwhile the dual-channel calculation module calculates the wound necrosis resolution rate and the granulation growth slope respectively, quantifying the wound change trend, the undermining depth and the fistula nodes, which makes it convenient to monitor the evolution trajectory of the patient's wound over time.

Inventors

  • ZHANG XIN
  • YUE CHUNHE

Assignees

  • 首都医科大学附属北京同仁医院

Dates

Publication Date
20260508
Application Date
20250924

Claims (10)

  1. A wound care guidance system with remote synchronous video recognition, characterized by comprising a video acquisition module, an AR measurement module, a dual-channel calculation module, a data fusion module and a specified suggestion generation module; the video acquisition module interfaces with the patient's mobile terminal, performs video data acquisition, and synchronizes different video data in the time dimension through a built-in calibration unit; the AR measurement module dynamically superimposes a preset virtual scale template on the corrected video image frames, and after the patient selects anatomical landmark points, key parameters are marked automatically, the key parameters comprising wound length, width, area and sinus-tract structure; the dual-channel calculation module calculates the wound necrosis resolution rate and the granulation growth slope from dynamic indexes and, through a parallel processing mechanism, quantifies the wound change trend, the undermining depth and the fistula nodes; the wound necrosis resolution rate R_necrosis is calculated as: R_necrosis = (A_necrosis(t−7) − A_necrosis(t)) / A_necrosis(t−7), wherein A_necrosis(t) is the necrotic area at the current moment and A_necrosis(t−7) is the necrotic area one week earlier; the granulation growth slope K_granulation is calculated as the least-squares slope K_granulation = Σ(t_i − t̄)(x_i − x̄) / Σ(t_i − t̄)², summed over i = 1 to N, wherein x_i is the i-th red-pixel ratio, x̄ is the mean red-pixel ratio, t_i is the i-th measurement time, t̄ is the mean measurement time, and N is the number of measurements; the data fusion module projects the eigenvectors of the wound necrosis resolution rate and the granulation growth slope into a unified semantic space through a deep learning model to obtain a wound risk value; the specified suggestion generation module rapidly retrieves and matches a care scheme framework of the corresponding level from a preset care strategy library according to the magnitude of the wound risk value.
  2. The wound care guidance system with remote synchronous video recognition of claim 1, further comprising an environment sensing unit and a physiological parameter acquisition unit, wherein the environment sensing unit continuously monitors the scene illuminance distribution and independently processes highlight-overflow regions and shadow-detail regions via partition histogram equalization to distinguish skin tone from the wound; the physiological parameter acquisition unit connects to an ABI detector and a glucometer through interfaces to acquire the patient's lower-limb hemodynamic indexes and blood glucose concentration, respectively.
  3. The wound care guidance system with remote synchronous video recognition of claim 1, wherein the AR measurement module comprises a deformation elimination unit, a superposition unit and a marking unit.
  4. The wound care guidance system with remote synchronous video recognition of claim 3, wherein the deformation elimination unit eliminates lens distortion through a matrix compensation algorithm, establishes the relationship between actual imaging points and the pixel coordinate system using a standard marker, calculates corrected coordinates, and performs topological modeling of the sinus-tract structure; the superposition unit's virtual scale template identifies planar feature points in the video image frame by a computer vision algorithm and constructs a homography transformation matrix from the template coordinate system to the screen pixel coordinate system; the marking unit automatically marks the wound's length, width and area according to the landmark points selected by the user.
  5. The wound care guidance system with remote synchronous video recognition of claim 1, wherein the calibration unit uses preset spatial markers to achieve synchronization in the spatial dimension.
  6. The wound care guidance system with remote synchronous video recognition of claim 4, wherein the corrected coordinates are calculated on the basis of the Harris corner detection algorithm as follows: x_corr = x_raw(1 + k1·r² + k2·r⁴), y_corr = y_raw(1 + k1·r² + k2·r⁴), wherein k1 and k2 are distortion coefficients, r is the radial distance of the pixel from the principal point, and (x_raw, y_raw) are the raw pixel coordinates.
  7. The wound care guidance system with remote synchronous video recognition of claim 1, wherein the deep learning model is built on an encoder-decoder architecture, the encoder receiving as input the two eigenvectors, the wound necrosis resolution rate and the granulation growth slope, and being internally composed of multiple neural network layers.
  8. The wound care guidance system with remote synchronous video recognition of claim 1, wherein the care scheme framework is presented in a structured form comprising staged operation guidelines, consumable-consumption prediction and early-warning threshold settings.
  9. The wound care guidance system with remote synchronous video recognition of claim 1, wherein the dual-channel calculation module calculates the wound necrosis resolution rate, the granulation growth slope and the daily rate of change of wound volume, and performs undermining depth analysis and fistula hemodynamic assessment.
  10. The wound care guidance system with remote synchronous video recognition of claim 9, wherein the undermining depth analysis comprises: the dual-channel calculation module introduces a gradient vector field algorithm to track the direction of tissue infiltration, constructs a path-cost map from the epidermis to deep tissue by analyzing the pixel displacement field between consecutive frames, and identifies the lowest-resistance diffusion channel as the main undermining tract; and wherein the fistula hemodynamic assessment comprises: the dual-channel calculation module further integrates the ABI detector's pulse waveform data with video blood-flow imaging, extracts a microcirculation perfusion density distribution map using the principle of optical coherence tomography, calculates the blood-flow peak and pulsatility index at the fistula inlet in combination with frequency-domain analysis, identifies abnormal shunt patterns with a convolutional neural network, and automatically marks high-flow fistula nodes.
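
The radial distortion correction recited in claim 6 can be sketched as follows. This is an illustrative implementation of the two-term radial model x_corr = x_raw(1 + k1·r² + k2·r⁴); the coefficient values and principal-point coordinates are placeholders, since a real system would obtain them from camera calibration rather than from this patent.

```python
# Sketch of the radial lens-distortion correction from claim 6.
# k1, k2 (distortion coefficients) and (cx, cy) (principal point) are
# hypothetical values; in practice they come from camera calibration.

def correct_point(x_raw, y_raw, k1, k2, cx=0.0, cy=0.0):
    """Apply x_corr = x_raw * (1 + k1*r^2 + k2*r^4), and likewise for y.

    Coordinates are taken relative to the principal point (cx, cy),
    where r is the radial distance of the pixel from that point.
    """
    dx, dy = x_raw - cx, y_raw - cy
    r2 = dx * dx + dy * dy            # r^2: squared radial distance
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + dx * scale, cy + dy * scale
```

With k1 = k2 = 0 the mapping is the identity, which is a quick sanity check that the compensation only acts through the radial terms.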

Description

Wound care guidance system with remote synchronous video recognition

Technical Field

The application belongs to the technical field of video recognition, and in particular relates to a wound care guidance system with remote synchronous video recognition.

Background

With the continued advancement of medical technology and the aging of the population, wound care has become an integral part of clinical medicine and home care. The traditional wound nursing model, however, faces many challenges: uneven distribution of medical resources, inconvenient access to treatment for patients, and inconsistent nursing quality. Chronic wounds in particular (such as venous ulcers of the lower limbs and diabetic foot) involve long treatment cycles and frequent complications, and require continuous professional guidance and monitoring. There is therefore a need for a wound care guidance system. Most current wound care guidance systems, however, offer only basic video consultation: wound size measurements are easily distorted during remote identification, so the dosage of nursing medication is not accurate enough and the patient's recovery is affected; at the same time, they lack dynamic analysis and identification of the wound's multiple aspects, making it inconvenient to dynamically adjust the dosage of nursing dressings.

Disclosure of Invention

The application provides a wound care guidance system with remote synchronous video recognition, aiming to solve the problems of the prior art identified above: only a basic video consultation function, wound size measurement distortion during remote identification leading to insufficiently accurate dosing that affects the patient's recovery, and a lack of dynamic, multi-faceted wound analysis and identification.
A wound care guidance system with remote synchronous video recognition comprises a video acquisition module, an AR measurement module, a dual-channel calculation module, a data fusion module and a specified suggestion generation module. The video acquisition module interfaces with the patient's mobile terminal, performs video data acquisition, and synchronizes different video data in the time dimension through a built-in calibration unit. The AR measurement module dynamically superimposes a preset virtual scale template on the corrected video image frames, and after the patient selects anatomical landmark points, key parameters are marked automatically, the key parameters comprising wound length, width, area and sinus-tract structure. The dual-channel calculation module calculates the wound necrosis resolution rate and the granulation growth slope from dynamic indexes and, through a parallel processing mechanism, quantifies the wound change trend, the undermining depth and the fistula nodes. The wound necrosis resolution rate R_necrosis is calculated as: R_necrosis = (A_necrosis(t−7) − A_necrosis(t)) / A_necrosis(t−7), wherein A_necrosis(t) is the necrotic area at the current moment and A_necrosis(t−7) is the necrotic area one week earlier. The granulation growth slope K_granulation is calculated as the least-squares slope K_granulation = Σ(t_i − t̄)(x_i − x̄) / Σ(t_i − t̄)², summed over i = 1 to N, wherein x_i is the i-th red-pixel ratio, x̄ is the mean red-pixel ratio, t_i is the i-th measurement time, t̄ is the mean measurement time, and N is the number of measurements. The data fusion module projects the eigenvectors of the wound necrosis resolution rate and the granulation growth slope into a unified semantic space through a deep learning model to obtain a wound risk value. The specified suggestion generation module rapidly retrieves and matches a care scheme framework of the corresponding level from a preset care strategy library according to the magnitude of the wound risk value.
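
The two dual-channel metrics described above can be sketched directly from the reconstructed formulas. The sample areas, timestamps and red-pixel ratios below are hypothetical illustration data, not values from the application.

```python
# Sketch of the dual-channel wound metrics: the weekly necrosis resolution
# rate and the least-squares granulation growth slope. All inputs are
# hypothetical sample data for illustration.

def necrosis_resolution_rate(area_now, area_week_ago):
    """R_necrosis = (A(t-7) - A(t)) / A(t-7): fraction of necrotic area cleared in a week."""
    return (area_week_ago - area_now) / area_week_ago

def granulation_growth_slope(times, red_ratios):
    """Least-squares slope K = sum((t_i - t_bar)(x_i - x_bar)) / sum((t_i - t_bar)^2)."""
    n = len(times)
    t_bar = sum(times) / n
    x_bar = sum(red_ratios) / n
    num = sum((t - t_bar) * (x - x_bar) for t, x in zip(times, red_ratios))
    den = sum((t - t_bar) ** 2 for t in times)
    return num / den
```

For example, a necrotic area shrinking from 8 cm² to 6 cm² over a week gives a resolution rate of 0.25, and a red-pixel ratio rising linearly by 0.1 per measurement gives a slope of 0.1.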
Further, the environment sensing unit continuously monitors the scene illuminance distribution and independently processes highlight-overflow regions and shadow-detail regions via partition histogram equalization to distinguish skin tone from the wound; the physiological parameter acquisition unit connects to an ABI detector and a glucometer through interfaces to acquire the patient's lower-limb hemodynamic indexes and blood glucose concentration, respectively. Further, the AR measurement module comprises a deformation elimination unit, a superposition unit and a marking unit. Further, the deformation elimination unit eliminates lens distortion through a matrix compensation algorithm, establishes the relationship between actual imaging points and the pixel coordinate system using a standard marker, calculates corrected coordinates, and simultaneously performs topological modeling of the sinus-tract structure; the superposition unit's virtual scale template identifies planar feature points in the video image frame by a computer vision algorithm and constructs a homography transformation matrix from the template coordinate system to the screen pixel coordinate system.
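
The partition histogram equalization attributed to the environment sensing unit can be illustrated as follows. This is only a sketch of the general idea, equalizing the shadow and highlight partitions of a grayscale image independently so that neither clips the other's contrast; the threshold of 128 and the 8-bit gray range are assumptions for illustration, not values specified in the application.

```python
# Illustrative sketch of partition histogram equalization: shadow pixels
# (below a brightness threshold) and highlight pixels (at or above it) are
# equalized independently within their own gray ranges. The threshold and
# the 0..255 range are assumed for illustration.

def equalize(values, lo, hi):
    """Histogram-equalize integer gray `values` in [lo, hi] back into [lo, hi]."""
    if not values:
        return []
    hist = {}
    for v in values:
        hist[v] = hist.get(v, 0) + 1
    cdf, total = {}, 0
    for level in range(lo, hi + 1):       # cumulative distribution over the range
        total += hist.get(level, 0)
        cdf[level] = total
    n = len(values)
    return [lo + round((cdf[v] / n) * (hi - lo)) for v in values]

def partition_equalize(pixels, threshold=128):
    """Equalize shadow (< threshold) and highlight (>= threshold) pixels separately."""
    shadows = [p for p in pixels if p < threshold]
    highlights = [p for p in pixels if p >= threshold]
    eq_s = iter(equalize(shadows, 0, threshold - 1))
    eq_h = iter(equalize(highlights, threshold, 255))
    # Recombine in the original pixel order.
    return [next(eq_s) if p < threshold else next(eq_h) for p in pixels]
```

Because each partition is stretched only within its own range, shadow detail gains contrast without pushing highlight pixels into overflow, which matches the stated goal of separating skin tone from the wound under uneven illumination.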