CN-120543916-B - AI-assisted scoring method, system and device for the sedation state of ICU patients
Abstract
The invention relates to the technical field of medical artificial intelligence and provides an AI-assisted scoring method, system and device for the sedation state of an ICU patient. The method computes the patient's head movement state and records and stores a head-shake indicator value for each minute; computes the patient's eye state and records and stores an eye-open indicator value for each minute; computes the patient's head-shake indication ratio and eye-open indication ratio over one hour; and constructs and trains a multi-layer perceptron neural network that takes the head-shake indication ratio and the eye-open indication ratio as input features to obtain and output the patient's sedation state score. Compared with traditional manual scoring, the sedation scoring system provided by the invention performs better, in particular offering objectivity, real-time operation, automation and reduced labor cost.
Inventors
- ZHU LUN
- ZHENG FENG
- LIU YICHEN
Assignees
- 常州大学
- 常州市第一人民医院
- 常州祉晟数字科技有限公司
Dates
- Publication Date
- 2026-05-08
- Application Date
- 2025-05-13
Claims (7)
- 1. A method of AI-assisted scoring of the sedation state of an ICU patient, the method comprising: Step S01, acquiring patient images through a camera, identifying the patient's face region and eye region in real time using an edge computing device, and uploading the recognition results to a cloud server in real time; Step S02, calculating the patient's head movement state (a shaking state or a non-shaking state) from the identified face region, and recording and storing a head-shake indicator value for each minute; Step S03, calculating the patient's eye state (an eye-open state or an eye-closed state) from the identified eye region, and recording and storing an eye-open indicator value for each minute, specifically: Step S031, detecting the open/closed state of the patient's eyes in real time with a modified YOLOv eye state detection algorithm, wherein the modified algorithm constructs a dual-branch attention enhancement module comprising a main branch and an auxiliary branch, the auxiliary branch being designed for eye feature enhancement together with an improved feature fusion strategy; Step S032, counting the number of eye openings within one minute; Step S033, comparing the eye-opening count with a preset threshold, and when the count is greater than the threshold, judging the eyes to be open and recording the eye-open indicator value as 1; Step S04, calculating the patient's head-shake indication ratio over one hour from the recorded head-shake indicator values, and calculating the eye-open indication ratio over one hour from the recorded eye-open indicator values; Step S05, constructing and training a multi-layer perceptron neural network that takes the head-shake indication ratio and the eye-open indication ratio as input features, and optimizing the network parameters with the back-propagation algorithm to obtain and output the patient's sedation state score. The auxiliary branch specifically comprises: a spatial pyramid pooling module using three pooling scales {5×5, 9×9, 13×13}, with dimensionality reduced by a 1×1 convolution after parallel pooling; a channel attention mechanism that obtains a channel descriptor via global average pooling, learns channel weights with two fully connected layers (FC1: C→C/r, FC2: C/r→C), normalizes the weights with a sigmoid function, and applies channel-wise weighting to the feature map; and a residual connection structure that short-circuits the original features to the enhanced features by additive fusion, with the channel count adjusted by a 1×1 convolution. The improved feature fusion strategy specifically comprises: adaptive feature fusion, which computes the main/auxiliary branch feature similarity matrix S, where S[i, j] = F1[i]·F2[j]/(|F1[i]|×|F2[j]|), generates fusion weights α and β from the similarity, and performs the weighted fusion Fout = α×F1 + β×F2; and a feature pyramid network structure comprising a top-down path, a bottom-up path and lateral connections, wherein the top-down path propagates P5→P4→P3 using 2× upsampling with a 1×1 convolution to adjust channels, the bottom-up path propagates P3→P4→P5 using 3×3 convolutions with stride 2, and the lateral connections perform element-wise addition after feature-map alignment, followed by a 3×3 convolution to fuse the features.
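The adaptive feature fusion above can be sketched as follows. This is a minimal NumPy illustration, not the patented implementation: the cosine-similarity matrix S follows the claim, but how α and β are derived from S is not disclosed, so the softmax-over-mean-similarity step here is an assumption.

```python
import numpy as np

def adaptive_feature_fusion(f1, f2, eps=1e-8):
    """Sketch of the claimed adaptive fusion of main/auxiliary features.

    f1, f2: (N, C) feature matrices from the main and auxiliary branches.
    S[i, j] = F1[i]·F2[j] / (|F1[i]| × |F2[j]|)  -- cosine similarity.
    The mapping from S to scalar weights (alpha, beta) is an assumption:
    here, a softmax over the mean similarity.
    """
    # Cosine similarity matrix between the two branches' feature vectors
    n1 = np.linalg.norm(f1, axis=1, keepdims=True) + eps
    n2 = np.linalg.norm(f2, axis=1, keepdims=True) + eps
    S = (f1 / n1) @ (f2 / n2).T

    # One plausible reading: turn similarity into two normalized weights
    s = S.mean()
    w = np.exp([s, 1.0 - s])
    alpha, beta = w / w.sum()

    # Fout = alpha × F1 + beta × F2
    return alpha * f1 + beta * f2
```

Because α + β = 1, identical branch features pass through unchanged, while dissimilar auxiliary features receive more weight.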
- 2. The method of AI-assisted scoring of an ICU patient's sedation state of claim 1, wherein the step of calculating the patient's head movement state from the identified face region in step S02 is as follows: Step S021, obtaining the difference dx between the maximum and minimum X-axis coordinates of the center point of the patient's face within one minute; Step S022, obtaining the difference dy between the maximum and minimum Y-axis coordinates of the center point of the patient's face within one minute; Step S023, calculating the maximum head movement distance within one minute, wherein distance = √(dx² + dy²); Step S024, comparing the distance with a preset threshold: when the distance is greater than the threshold, judging a shaking state and recording a head-shake indicator value of 1; when the distance is less than or equal to the threshold, judging a non-shaking state and recording a head-shake indicator value of 0.
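Steps S021-S024 reduce to a few lines. In this sketch the Euclidean distance formula is reconstructed from dx and dy (the published text omits the equation), and the pixel threshold of 20 is an illustrative assumption, not a value from the claims.

```python
import math

def head_shake_indicator(xs, ys, threshold=20.0):
    """Per-minute head-shake indicator from face-center coordinates (claim 2).

    xs, ys: per-frame X/Y coordinates of the face center over one minute.
    threshold: illustrative pixel value; the claim only says "preset".
    """
    dx = max(xs) - min(xs)                    # S021: X-coordinate range
    dy = max(ys) - min(ys)                    # S022: Y-coordinate range
    distance = math.sqrt(dx ** 2 + dy ** 2)   # S023: reconstructed formula
    return 1 if distance > threshold else 0   # S024: threshold comparison
```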
- 3. The method of AI-assisted scoring of ICU patient sedation according to claim 1, wherein the specific steps of the modified YOLOv eye state detection algorithm in step S031 are: capturing eye images of the patient with a camera and annotating them with labeling software to obtain an eye state dataset with two label classes, open_eyes and closed_eyes; and constructing an improved YOLOv network with a dual-branch attention enhancement module comprising a main branch and an auxiliary branch, wherein the main branch retains the original YOLOv feature extraction network structure and the auxiliary branch is designed for eye feature enhancement and an improved feature fusion strategy.
- 4. The method of AI-assisted scoring of the sedation state of an ICU patient according to claim 1, wherein calculating the head-shake indication ratio over one hour in step S04 comprises summing all head-shake indicator values recorded within the hour and dividing the sum by 60 to obtain the head-shake indication ratio; and calculating the eye-open indication ratio over one hour in step S04 comprises summing all eye-open indicator values recorded within the hour and dividing the sum by 60 to obtain the eye-open indication ratio.
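The ratio in claim 4 is simply the fraction of minutes in the hour whose indicator was 1 (since there are 60 per-minute 0/1 values, summing and dividing by 60 gives a value in [0, 1]):

```python
def indication_ratio(per_minute_values):
    """Hourly indication ratio per claim 4: sum of the 60 per-minute
    0/1 indicator values divided by 60."""
    assert len(per_minute_values) == 60, "one value per minute of the hour"
    return sum(per_minute_values) / 60
```

Applied to both the head-shake and eye-open indicator streams, this yields the two input features of the scoring network.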
- 5. A system for AI-assisted scoring of ICU patient sedation, wherein the system implements the method of any one of claims 1-4 and comprises: an image recognition module for acquiring patient images through a camera, recognizing the patient's face region and eye region in real time using an edge computing device, and uploading the recognition results to a cloud server in real time; a head-shake indicator recording module for calculating the patient's head movement state (a shaking state or a non-shaking state) from the identified face region, and recording and storing a head-shake indicator value for each minute; an eye-open indicator recording module for calculating the patient's eye state (an eye-open state or an eye-closed state) from the identified eye region, and recording and storing an eye-open indicator value for each minute, specifically: Step S031, detecting the open/closed state of the patient's eyes in real time with a modified YOLOv eye state detection algorithm, wherein the modified algorithm constructs a dual-branch attention enhancement module comprising a main branch and an auxiliary branch, the auxiliary branch being designed for eye feature enhancement together with an improved feature fusion strategy; Step S032, counting the number of eye openings within one minute; Step S033, comparing the eye-opening count with a preset threshold, and when the count is greater than the threshold, judging the eyes to be open and recording the eye-open indicator value as 1; an indication ratio calculation module for calculating the patient's head-shake indication ratio over one hour from the recorded head-shake indicator values, and the eye-open indication ratio over one hour from the recorded eye-open indicator values; and a sedation state scoring module for constructing and training a multi-layer perceptron neural network that takes the head-shake indication ratio and the eye-open indication ratio as input features, optimizing the network parameters with the back-propagation algorithm, and obtaining and outputting the patient's sedation state score. The auxiliary branch specifically comprises: a spatial pyramid pooling module using three pooling scales {5×5, 9×9, 13×13}, with dimensionality reduced by a 1×1 convolution after parallel pooling; a channel attention mechanism that obtains a channel descriptor via global average pooling, learns channel weights with two fully connected layers (FC1: C→C/r, FC2: C/r→C), normalizes the weights with a sigmoid function, and applies channel-wise weighting to the feature map; and a residual connection structure that short-circuits the original features to the enhanced features by additive fusion, with the channel count adjusted by a 1×1 convolution. The improved feature fusion strategy specifically comprises: adaptive feature fusion, which computes the main/auxiliary branch feature similarity matrix S, where S[i, j] = F1[i]·F2[j]/(|F1[i]|×|F2[j]|), generates fusion weights α and β from the similarity, and performs the weighted fusion Fout = α×F1 + β×F2; and a feature pyramid network structure comprising a top-down path, a bottom-up path and lateral connections, wherein the top-down path propagates P5→P4→P3 using 2× upsampling with a 1×1 convolution to adjust channels, the bottom-up path propagates P3→P4→P5 using 3×3 convolutions with stride 2, and the lateral connections perform element-wise addition after feature-map alignment, followed by a 3×3 convolution to fuse the features.
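The scoring module's multi-layer perceptron trained by back-propagation can be sketched as below. The layer sizes, learning rate, activation choice and the four training samples are all illustrative assumptions; the patent does not disclose the network architecture, the training data or the numeric range of the score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MLP: 2 inputs (head-shake ratio, eye-open ratio) -> 8 hidden -> 1 score.
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

# Made-up training pairs: (ratios in [0, 1]) -> target sedation score.
X = np.array([[0.0, 0.0], [0.9, 0.9], [0.1, 0.8], [0.8, 0.1]])
y = np.array([[0.0], [1.0], [0.5], [0.5]])

lr = 0.3
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)             # forward pass, tanh hidden layer
    out = h @ W2 + b2                    # linear output
    err = out - y                        # gradient of 0.5*MSE w.r.t. out
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # back-propagate through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2       # gradient-descent update
    W1 -= lr * gW1; b1 -= lr * gb1

def sedation_score(shake_ratio, eye_ratio):
    """Score one hour of observation from the two indication ratios."""
    h = np.tanh(np.array([shake_ratio, eye_ratio]) @ W1 + b1)
    return float((h @ W2 + b2).item())
```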
- 6. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the processor, when executing the program, implements the method of any of claims 1-4.
- 7. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-4.
Description
AI-assisted scoring method, system and device for the sedation state of an ICU patient. Technical Field: The embodiments of the invention relate to the technical field of medical artificial intelligence, and in particular to an AI-assisted scoring method, system and device for the sedation state of an ICU patient. Background: Sedation scores for ICU patients are of great significance for patient treatment and care. From the clinical point of view, accurate sedation grading can prevent unexpected events such as patient self-extubation; accurate dosing avoids both over- and under-sedation, effectively prevents complications such as respiratory depression and delirium, and shortens mechanical ventilation time. Reasonable sedation management can shorten the patient's ICU stay and improve bed turnover, while optimized sedative use and a lower complication rate markedly reduce treatment cost and improve the utilization of medical resources. Establishing an accurate, objective sedation scoring system is therefore of great value for improving the quality of ICU treatment. Traditional ICU sedation scoring relies primarily on manual assessment by healthcare personnel using scoring tools such as the Richmond Agitation-Sedation Scale (RASS), which has several significant limitations. First, the scoring process requires a healthcare worker to rouse the patient with verbal or physical stimuli; such disruptive assessment may disturb the patient's rest and even increase patient stress. Second, manual scoring is subjective, and different evaluators may give different scores to the same patient, so the results lack consistency. Third, because healthcare workers are busy, they can usually evaluate only at fixed time points (e.g., once every 4-8 hours) and cannot provide continuous monitoring, so dynamic changes in patient status are easily missed.
In addition, night-time assessment increases the workload of healthcare workers and may be affected by factors such as fatigue, which can reduce assessment accuracy. Finally, traditional scoring methods lack objective data support, and scores must be registered manually, which hinders data collection, analysis and long-term tracking. These limitations can affect the accuracy and timeliness of sedation management and, in turn, the effectiveness of treatment. Disclosure of Invention: To solve these problems, the invention combines deep learning with medical monitoring, adopting non-contact continuous video monitoring and analyzing the patient's facial features and motion state in real time on an edge computing device, so as to achieve automatic, continuous assessment of the sedation level of ICU patients. In accordance with embodiments of the present invention, a method, system and device for AI-assisted scoring of the sedation state of an ICU patient are provided. In a first aspect of the invention, a method of AI-assisted scoring of the sedation state of an ICU patient is provided.
The method comprises the following steps: Step S01, acquiring patient images through a camera, identifying the patient's face region and eye region in real time using an edge computing device, and uploading the recognition results to a cloud server in real time; Step S02, calculating the patient's head movement state (a shaking state or a non-shaking state) from the identified face region, and recording and storing a head-shake indicator value for each minute; Step S03, calculating the patient's eye state (an eye-open state or an eye-closed state) from the identified eye region, and recording and storing an eye-open indicator value for each minute; Step S04, calculating the patient's head-shake indication ratio over one hour from the recorded head-shake indicator values, and calculating the eye-open indication ratio over one hour from the recorded eye-open indicator values; Step S05, constructing and training a multi-layer perceptron neural network that takes the head-shake indication ratio and the eye-open indication ratio as input features, and optimizing the network parameters with the back-propagation algorithm to obtain and output the patient's sedation state score. Further, the specific steps of calculating the patient's head movement state from the identified face region in step S02 are as follows: Step S021, obtaining the difference dx between the maximum and minimum X-axis coordinates of the center point of the patient's face within one minute; Step S022, obtaining the difference dy between the maximum and minimum Y-axis coordinates of the center point of the patient's face within one minute; Step S023, calculating the maximum head movement distance within one minute, wherein distance = √(dx² + dy²); Ste