CN-121983292-A - Man-machine collaborative inquiry visual interaction system and method based on medical large model

CN121983292A

Abstract

The invention discloses a man-machine collaborative inquiry visual interaction system and method based on a medical large model. The system comprises a real-time knowledge base module, a data fusion module, a large model reasoning module, and a visual interaction module. The real-time knowledge base module acquires medical literature data in real time and dynamically updates a basic medical knowledge graph; the data fusion module acquires multi-source medical data of a target patient to construct a personalized knowledge graph; the large model reasoning module receives the personalized knowledge graph to perform preliminary diagnosis reasoning, and, after responding to an intervention instruction and making the corresponding adjustment, re-reasons to generate a final diagnosis conclusion; and the visual interaction module visually displays the diagnosis information and receives intervention operation instructions input by a user.

Inventors

  • LONG JUN
  • LUO XIWEN
  • SU JUAN
  • ZHAO SHUANG

Assignees

  • Central South University (中南大学)

Dates

Publication Date
2026-05-05
Application Date
2026-04-07

Claims (10)

  1. A man-machine collaborative inquiry visual interaction system based on a medical large model, characterized by comprising a real-time knowledge base module, a data fusion module, a large model reasoning module, and a visual interaction module; the real-time knowledge base module is used for acquiring medical knowledge data in real time and dynamically updating a pre-constructed basic medical knowledge graph; the data fusion module is connected with the real-time knowledge base module and is used for acquiring multi-source medical data of a target patient, preprocessing the medical data, and constructing a personalized knowledge graph of the target patient based on the preprocessed medical data and the basic medical knowledge graph; the large model reasoning module is provided with a pre-trained medical large model, is connected with the data fusion module and the real-time knowledge base module respectively, and is used for performing preliminary diagnosis reasoning on the personalized knowledge graph with the medical large model to obtain preliminary diagnosis information; the preliminary diagnosis information comprises a preliminary diagnosis result, a reasoning logic chain of the preliminary diagnosis result, and a confidence level of the preliminary diagnosis result; the visual interaction module is connected with the large model reasoning module and is used for visually displaying the preliminary diagnosis information and receiving an intervention operation instruction input by a user so as to adjust the personalized knowledge graph; the large model reasoning module is further used for inputting the intervention operation instruction and the adjusted personalized knowledge graph into the medical large model for re-reasoning so as to generate a final diagnosis conclusion, and the visual interaction module is further used for visually displaying the final diagnosis conclusion.
  2. The system of claim 1, further comprising a feedback optimization module, coupled to the large model reasoning module and the real-time knowledge base module respectively, for encoding the user's intervention operation instructions into reward signals of a reinforcement learning algorithm and adjusting parameters of the medical large model with the reward signals to optimize the diagnostic performance of the medical large model.
  3. The system of claim 1, wherein the real-time knowledge base module comprises a knowledge base unit and a knowledge updating unit; the knowledge base unit is connected with the data fusion module and is used for storing the personalized knowledge graph of the target patient; the knowledge updating unit is connected with the knowledge base unit and is used for dynamically updating the basic medical knowledge graph and the personalized knowledge graph in the knowledge base unit; the knowledge updating unit is configured to perform the steps of: monitoring designated medical knowledge distribution channels in real time to acquire updated medical knowledge data; performing a consistency check between the updated medical knowledge data and the basic medical knowledge graph; and fusing the medical knowledge data that passes the check into the basic medical knowledge graph in the knowledge base unit through an incremental updating mechanism; the medical knowledge data comprises medical literature data, clinical guideline data, and pharmacopoeia data.
  4. The system of claim 1, wherein the data fusion module comprises: a data acquisition unit, used for acquiring multi-source medical data of a target patient, wherein the multi-source medical data comprises electronic medical record data, physiological monitoring data, medical literature retrieval data, and medical image data; a data preprocessing unit, connected with the data acquisition unit and used for preprocessing the medical data to obtain preprocessed medical data, wherein the preprocessing comprises data cleaning, denoising, completion, desensitization, and standardization; a feature extraction unit, connected with the data preprocessing unit and used for extracting features from the preprocessed medical data to obtain medical data features, wherein the medical data features comprise text features, physiological time-series features, and image features; a standardization unit, connected with the feature extraction unit and used for standardizing the medical data features; and a graph construction unit, connected with the real-time knowledge base module and the standardization unit respectively, and used for performing knowledge fusion on the standardized medical data features and the basic medical knowledge graph to obtain a personalized knowledge graph of the target patient, wherein the personalized knowledge graph comprises symptom nodes, examination result nodes, medicine nodes, disease nodes, and edges representing the logical relations among the nodes.
  5. The system of claim 1, wherein the large model reasoning module comprises: a reasoning execution unit, used for performing preliminary diagnosis reasoning on symptom nodes and examination result nodes in the personalized knowledge graph with the pre-trained medical large model to obtain a preliminary diagnosis result, and further used for inputting the adjusted personalized knowledge graph together with prompt words corresponding to the intervention operation instruction into the medical large model for re-reasoning to generate a final diagnosis conclusion; an interpretability component unit, connected with the reasoning execution unit and used for extracting the attention weights and saliency attributions of input features in the medical large model to generate the reasoning logic chain of the preliminary diagnosis result; and an uncertainty quantification unit, connected with the reasoning execution unit and used for quantitatively evaluating the uncertainty of the preliminary diagnosis result with a preset Monte Carlo dropout algorithm to generate the confidence level of the preliminary diagnosis result.
  6. The system of claim 5, wherein performing preliminary diagnosis reasoning on symptom nodes and examination result nodes in the personalized knowledge graph with the pre-trained medical large model to obtain a preliminary diagnosis result specifically comprises the following steps: A1, performing an initial analysis based on the symptom nodes and examination result nodes in the personalized knowledge graph to generate a preliminary diagnosis hypothesis; A2, performing multidimensional parallel verification of the preliminary diagnosis hypothesis on a plurality of preset reasoning branches to calculate a conditional probability score of the preliminary diagnosis hypothesis under each reasoning branch, wherein the reasoning branches comprise a laboratory examination analysis branch, an imaging feature analysis branch, and a medical history correlation analysis branch; A3, performing weighted fusion of all the conditional probability scores to obtain a comprehensive confidence level of the preliminary diagnosis hypothesis; A4, calculating the difference between the conditional probability score of each reasoning branch and the comprehensive confidence level, and comparing the difference with a preset threshold for conflict detection: if the difference for any reasoning branch exceeds the preset threshold, judging that the reasoning branch has a conflict, triggering a re-assessment mechanism, and executing step A5; if the difference for every reasoning branch does not exceed the preset threshold, judging that no conflict exists, maintaining the preliminary diagnosis hypothesis, and outputting it as the preliminary diagnosis result; A5, in response to the re-assessment mechanism, reducing the confidence level of the preliminary diagnosis hypothesis, extracting the feature nodes causing the conflict, and generating an alternative diagnosis hypothesis based on those feature nodes; A6, integrating the preliminary diagnosis hypothesis with the alternative diagnosis hypothesis to obtain and output the preliminary diagnosis result.
  7. The system of claim 1, wherein the visual interaction module comprises: a reasoning path display unit, used for dynamically displaying the reasoning logic chain as a graphical path; a user intervention interface unit, used for receiving an intervention operation instruction from the user for adjusting the preliminary diagnosis information; and a decision comparison unit, used for displaying the final diagnosis conclusion side by side with a reference diagnosis conclusion stored in the real-time knowledge base module.
  8. The system of claim 1, 2, 6, or 7, wherein the intervention operation instruction comprises at least one of: a correction operation instruction, used for modifying the preliminary diagnosis hypothesis or modifying the confidence level of the preliminary diagnosis hypothesis; a supplementary operation instruction, used for adding new evidence nodes to the reasoning logic chain of the preliminary diagnosis hypothesis; and a deletion operation instruction, used for deleting existing nodes and/or the connection relations among nodes in the reasoning logic chain of the preliminary diagnosis hypothesis.
  9. A man-machine collaborative inquiry visual interaction method based on a medical large model, characterized by comprising the following steps: acquiring multi-source medical data of a target patient, preprocessing the medical data, and constructing a personalized knowledge graph of the target patient based on the preprocessed medical data and a pre-constructed basic medical knowledge graph; performing preliminary diagnosis reasoning on the personalized knowledge graph with a pre-trained medical large model to obtain preliminary diagnosis information, wherein the preliminary diagnosis information comprises a preliminary diagnosis result, a reasoning logic chain of the preliminary diagnosis result, and a confidence level of the preliminary diagnosis result; displaying the preliminary diagnosis information to a user in a visual form, and receiving an intervention operation instruction input by the user based on the visual display; and adjusting the personalized knowledge graph based on the intervention operation instruction, and inputting the intervention operation instruction and the adjusted personalized knowledge graph into the medical large model for re-reasoning to generate a final diagnosis conclusion.
  10. The method of claim 9, further comprising: encoding the user's intervention operation instruction into a reward signal of a reinforcement learning algorithm, and adjusting parameters of the medical large model with the reward signal to optimize the diagnostic performance of the medical large model.
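The uncertainty quantification of claim 5 relies on Monte Carlo dropout: the same input is passed through the model several times with dropout left active, and the spread of the outputs quantifies the uncertainty of the diagnosis. The sketch below is purely illustrative; the tiny linear scorer, feature values, weights, and dropout rate stand in for the medical large model, which the patent does not specify at this level.

```python
# Illustrative Monte Carlo dropout sketch (claim 5). A toy linear scorer
# with a sigmoid output stands in for the medical large model; all numbers
# here are assumptions for demonstration, not values from the patent.
import math
import random
import statistics

def mc_dropout_confidence(features, weights, n_samples=200, p_drop=0.1, seed=0):
    """Return (mean probability, std deviation) over stochastic forward passes."""
    rng = random.Random(seed)
    probs = []
    for _ in range(n_samples):
        # Dropout kept active at inference: zero each feature with
        # probability p_drop and rescale the survivors by 1/(1 - p_drop).
        kept = [x / (1 - p_drop) if rng.random() >= p_drop else 0.0
                for x in features]
        logit = sum(w * x for w, x in zip(weights, kept))
        probs.append(1 / (1 + math.exp(-logit)))  # sigmoid
    # Mean = confidence level; standard deviation = uncertainty estimate.
    return statistics.mean(probs), statistics.stdev(probs)

mean_p, std_p = mc_dropout_confidence([1.0, 0.5, 0.8], [1.2, -0.4, 0.9])
```

A larger standard deviation signals a less stable prediction, which the system could surface to the doctor alongside the confidence level.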
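The multi-branch verification of claim 6 (steps A2 to A4) can be sketched compactly: each reasoning branch scores the hypothesis, the scores are fused with weights into a comprehensive confidence level, and any branch deviating from the fused value by more than a threshold is flagged as a conflict. The branch names, weights, and threshold below are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch of weighted fusion and conflict detection (claim 6).
def fuse_and_detect_conflicts(branch_scores, weights, threshold=0.2):
    """branch_scores: branch name -> conditional probability score.
    weights: branch name -> fusion weight (assumed to sum to 1).
    Returns (comprehensive confidence, list of conflicting branches)."""
    # A3: weighted fusion into a comprehensive confidence level
    fused = sum(weights[b] * s for b, s in branch_scores.items())
    # A4: flag branches whose score deviates too far from the fused value
    conflicts = [b for b, s in branch_scores.items()
                 if abs(s - fused) > threshold]
    return fused, conflicts

scores = {"lab": 0.85, "imaging": 0.40, "history": 0.80}
weights = {"lab": 0.4, "imaging": 0.3, "history": 0.3}
fused, conflicts = fuse_and_detect_conflicts(scores, weights)
```

Here the imaging branch disagrees with the fused confidence by more than the threshold, so it alone would trigger the re-assessment mechanism of step A5.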
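Claims 2 and 10 describe encoding user interventions as reward signals for reinforcement learning but leave the encoding unspecified. One minimal reading, sketched below, maps each intervention type from claim 8 to a scalar reward and averages them; the "accept" type and all numeric values are assumptions added for illustration.

```python
# Hypothetical reward encoding for user interventions (claims 2 and 10).
# The instruction types loosely follow claim 8; the reward magnitudes and
# the "accept" type are illustrative assumptions.
INTERVENTION_REWARDS = {
    "accept": 1.0,       # diagnosis accepted unchanged (assumed type)
    "correct": -1.0,     # correction instruction: hypothesis was wrong
    "supplement": -0.3,  # supplementary instruction: evidence was missing
    "delete": -0.5,      # deletion instruction: a node/edge was spurious
}

def encode_reward(interventions):
    """Average per-intervention rewards into one scalar reward signal."""
    if not interventions:
        return 0.0
    return sum(INTERVENTION_REWARDS[i] for i in interventions) / len(interventions)

reward = encode_reward(["supplement", "correct"])
```

The resulting scalar could then drive any standard policy-gradient or preference-based update of the medical large model's parameters, as the feedback optimization module of claim 2 intends.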

Description

Man-machine collaborative inquiry visual interaction system and method based on medical large model

Technical Field

The invention relates to the technical field of artificial intelligence and intelligent medical treatment, and in particular to a man-machine collaborative inquiry visual interaction system and method based on a medical large model.

Background

With the explosive growth of medical data and the rapid development of artificial intelligence technology, auxiliary diagnosis systems have become an important research direction in the intelligent medical field. At present, the contradiction between the supply of and demand for clinical medical resources is increasingly pronounced. Traditional clinical decisions depend excessively on the personal experience of doctors, are easily influenced by subjective factors, and handle complex cases inefficiently, making it difficult to meet growing demands on medical service quality. How to use artificial intelligence to assist doctors in making accurate and efficient clinical diagnosis decisions has therefore become an urgent technical problem in the field.
Among existing auxiliary diagnosis systems, traditional medical question-answering systems are mainly based on rule engines or simple retrieval-matching mechanisms; although their logic is clear, their semantic understanding and complex reasoning capabilities are limited, and they struggle with real cases presenting diverse clinical manifestations. Systems based on deep learning models usually lack interpretability in the diagnosis decision process, and the "black box" problem is widespread, making it difficult for clinicians to understand and trust the system's output. In addition, the human-computer interaction mode of existing systems is mainly simple one-way question answering: a real-time visual presentation and interaction mechanism for the diagnostic reasoning process is lacking, doctors cannot intervene in, verify, or guide the reasoning process, and effective human-machine coordination is difficult to realize. In terms of data processing, medical data is multi-source and heterogeneous, including electronic medical records, medical images, physiological monitoring signals, and medical literature. Existing systems generally struggle to deeply fuse and uniformly represent such heterogeneous data, and the "information island" phenomenon is prominent, so diagnoses rest on one-sided and outdated information, affecting their comprehensiveness and timeliness.
To address these problems, several more highly integrated solutions exist in the prior art. For example, the Chinese patent application with publication number CN120452837A discloses an intelligent decision system for clinical diagnosis decision analysis, which comprises modules for data acquisition, knowledge graphs, multi-modal analysis, and the like, and aims to integrate data and provide decision support. However, that solution and similar technologies still have shortcomings: first, their reasoning logic is opaque to users and cannot present the internal reasoning chain in an interactive visual form; second, human-computer interaction is limited to confirming or correcting a final result and cannot support real-time intervention and dynamic guidance by doctors during the system's reasoning process; third, the coupling between real-time updated medical knowledge and the real-time diagnostic reasoning process is insufficient, limiting the system's capacity for adaptive evolution.

Disclosure of Invention

In view of the shortcomings of the prior art, the invention provides a man-machine collaborative inquiry visual interaction system and method based on a medical large model, which are used to solve the problems of opaque reasoning processes, insufficient multi-source data fusion, unidirectional human-machine interaction, and lagging knowledge updates in the prior art.
In order to achieve the above purpose, the present invention adopts the following technical scheme. In a first aspect, the invention provides a man-machine collaborative inquiry visual interaction system based on a medical large model, which comprises a real-time knowledge base module, a data fusion module, a large model reasoning module, and a visual interaction module; the real-time knowledge base module is used for acquiring medical knowledge data in real time and dynamically updating a pre-constructed basic medical knowledge graph; the data fusion module is connected with the real-time knowledge base module and is used for acquiring multi-source medical data of a target patient, preprocessing the medical data, and constructing a personalized knowledge graph of the target patient