CN-122000073-A - Knee joint prosthesis evaluation method and system

CN 122000073 A

Abstract

The invention discloses a knee joint prosthesis evaluation method and system. The method comprises: collecting multi-modal data from biomechanical performance test data, medical image data, and postoperative motion capture data of a patient fitted with the knee joint prosthesis, and fusing the data with a deep-learning-based multi-modal alignment network to obtain a fused multi-modal data set; inputting the fused data set into an adaptive convolutional neural network that extracts the mechanical features of the knee joint prosthesis under dynamic load, yielding a dynamic mechanical feature matrix; and inputting that matrix into a performance evaluation model based on a multi-scale attention mechanism, which quantitatively analyzes the performance of the prosthesis and generates a performance evaluation report. Embodiments of the invention enable comprehensive quantitative analysis of prosthesis performance and improve the scientific rigor and accuracy of prosthesis design.
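The fusion step above maps data from different sources into a unified spatio-temporal coordinate system. As a minimal illustrative sketch of one ingredient of such a step, the snippet below resamples multi-rate sensor streams onto a shared time grid; the patent does not publish an implementation, so the linear-interpolation scheme and function name are assumptions, not the claimed method.

```python
import numpy as np

def align_to_common_timebase(streams, t_common):
    """Resample each (timestamps, values) stream onto a shared time grid
    so data from different sources share one temporal coordinate system.
    Hypothetical helper; linear resampling is an illustrative assumption."""
    aligned = []
    for t, v in streams:
        aligned.append(np.interp(t_common, t, v))  # linear resampling
    return np.stack(aligned, axis=0)  # shape: (n_streams, len(t_common))

# Example: a 100 Hz biomechanical force stream and a 30 Hz motion-capture
# stream, aligned onto a common 50-point time base.
t_force = np.linspace(0.0, 1.0, 100)
t_mocap = np.linspace(0.0, 1.0, 30)
t_common = np.linspace(0.0, 1.0, 50)
fused = align_to_common_timebase(
    [(t_force, np.sin(t_force)), (t_mocap, np.cos(t_mocap))], t_common)
print(fused.shape)  # (2, 50)
```

In practice the patent's alignment network would also perform spatial registration; only the timestamp-alignment idea is sketched here.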

Inventors

  • Zhao Liang
  • Chen Zhenhua
  • Wang Ziqi
  • Liao Zheting
  • Chen Yufan
  • Wu Desheng
  • Qiu Gengtao

Assignees

  • The First Affiliated Hospital of Guangzhou Medical University (Guangzhou Respiratory Center)

Dates

Publication Date
2026-05-08
Application Date
2026-01-22

Claims (10)

  1. A method of evaluating a knee joint prosthesis, the method comprising: acquiring multi-modal data from biomechanical performance test data, medical image data, and postoperative motion capture data of a patient fitted with the knee joint prosthesis, performing data fusion with a deep-learning-based multi-modal alignment network, and mapping data from different sources into a unified spatio-temporal coordinate system to obtain a fused multi-modal data set; inputting the multi-modal data set into an adaptive convolutional neural network and extracting mechanical features of the knee joint prosthesis under dynamic load, wherein the adaptive convolutional neural network captures the stress distribution and strain changes of the prosthesis during different phases of motion through temporal convolution to obtain a dynamic mechanical feature matrix; and inputting the dynamic mechanical feature matrix into a performance evaluation model based on a multi-scale attention mechanism and quantitatively analyzing the performance of the prosthesis, wherein the performance evaluation model evaluates the stability, wear resistance, and biocompatibility of the prosthesis by combining local and global features and generates a performance evaluation report.
  2. The method according to claim 1, wherein acquiring the multi-modal data from the biomechanical performance test data, medical image data, and postoperative motion capture data of the knee joint prosthesis, performing data fusion with the deep-learning-based multi-modal alignment network, and mapping data from different sources into a unified spatio-temporal coordinate system to obtain the fused multi-modal data set comprises: acquiring the multi-modal data in real time with an edge-computing-based data acquisition framework, ensuring the timeliness and continuity of data acquisition through a lightweight data caching mechanism; automatically identifying the format types of the different data sources with a deep-learning-based format recognition model, and performing noise filtering and missing-value imputation on the data with an adaptive data cleaning algorithm to generate a preliminary standardized data set; mapping the data from different sources in the preliminary standardized data set into a unified spatio-temporal coordinate system with the deep-learning-based multi-modal alignment network, eliminating temporal and spatial differences among the data through a timestamp alignment algorithm and spatial registration to generate a preliminarily aligned multi-modal data set; and performing weighted fusion of the features of the biomechanical performance data, medical image data, and motion capture data on the preliminarily aligned multi-modal data set with a multi-head-attention-based feature fusion method, capturing the associations among the different data sources through a cross-modal attention mechanism to generate the fused multi-modal data set.
  3. The method of claim 2, wherein inputting the multi-modal data set into the adaptive convolutional neural network and extracting the mechanical features of the knee joint prosthesis under dynamic load, wherein the adaptive convolutional neural network captures the stress distribution and strain changes of the prosthesis during different phases of motion through temporal convolution to obtain the dynamic mechanical feature matrix, comprises: extracting, with an adaptive-convolutional-neural-network-based feature extraction method, stress-strain features from the biomechanical performance data, structural features from the medical image data, and motion trajectory features from the motion capture data, capturing mechanical features at different scales through multi-scale convolution kernels to generate a preliminary feature representation; capturing, with a dynamic modeling method based on a temporal convolution module and in combination with the load changes of the prosthesis during different phases of motion, the time-series characteristics of the stress distribution and strain changes, extracting short-term fluctuation and long-term trend features through a sliding-window mechanism to generate a dynamic mechanical feature representation; performing, on the dynamic mechanical feature representation, weighted fusion of the stress distribution features, strain change features, and motion trajectory features with a multi-head-attention-based feature fusion method, capturing the associations among the different features through a cross-modal attention mechanism to generate a fused dynamic mechanical feature matrix; and dynamically adjusting the feature weights of the fused dynamic mechanical feature matrix with a Bayesian-optimization-based calibration method in combination with the material characteristics and actual usage scenarios of the prosthesis, preventing overfitting through regularization constraints to generate the final dynamic mechanical feature matrix.
  4. The method according to claim 3, wherein inputting the dynamic mechanical feature matrix into the performance evaluation model based on the multi-scale attention mechanism and quantitatively analyzing the performance of the prosthesis, wherein the performance evaluation model evaluates the stability, wear resistance, and biocompatibility of the prosthesis by combining local and global features and generates the performance evaluation report, comprises: extracting, with a multi-scale-convolutional-neural-network-based feature extraction method, local features of the prosthesis at the micro scale from the dynamic mechanical feature matrix, capturing local mechanical features at different scales through adaptive convolution kernels to generate a local feature representation; extracting global features from the dynamic mechanical feature matrix with a graph-convolutional-network-based global feature extraction method, abstracting the overall mechanical behavior of the prosthesis into a graph structure and capturing global mechanical features through multi-layer graph convolution operations to generate a global feature representation; performing weighted fusion of the local feature representation and the global feature representation with a multi-scale-attention-based feature fusion method, capturing the associations between local and global features through a cross-scale attention mechanism to generate a fused multi-scale feature representation; and quantitatively analyzing the fused multi-scale feature representation with a deep-neural-network-based performance evaluation model in combination with the stability, wear resistance, and biocompatibility indices of the prosthesis, generating, through an interpretability module in the performance evaluation model, a performance evaluation report containing quantitative results and the evaluation basis.
  5. A knee joint prosthesis evaluation system, the system comprising: an acquisition module configured to acquire multi-modal data from biomechanical performance test data, medical image data, and postoperative motion capture data of a patient fitted with the knee joint prosthesis, perform data fusion with a deep-learning-based multi-modal alignment network, and map data from different sources into a unified spatio-temporal coordinate system to obtain a fused multi-modal data set; an extraction module configured to input the multi-modal data set into an adaptive convolutional neural network and extract the mechanical features of the knee joint prosthesis under dynamic load, wherein the adaptive convolutional neural network captures the stress distribution and strain changes of the prosthesis during different phases of motion through temporal convolution to obtain a dynamic mechanical feature matrix; and an analysis module configured to input the dynamic mechanical feature matrix into a performance evaluation model based on a multi-scale attention mechanism and quantitatively analyze the performance of the prosthesis, wherein the performance evaluation model evaluates the stability, wear resistance, and biocompatibility of the prosthesis by combining local and global features and generates a performance evaluation report.
  6. The system according to claim 5, wherein the acquisition module is specifically configured to: acquire the multi-modal data in real time with an edge-computing-based data acquisition framework, from the biomechanical performance test data, medical image data, and postoperative motion capture data of the knee joint prosthesis, ensuring the timeliness and continuity of data acquisition through a lightweight data caching mechanism; automatically identify the format types of the different data sources with a deep-learning-based format recognition model, and perform noise filtering and missing-value imputation on the data with an adaptive data cleaning algorithm to generate a preliminary standardized data set; map the data from different sources in the preliminary standardized data set into a unified spatio-temporal coordinate system with the deep-learning-based multi-modal alignment network, eliminating temporal and spatial differences among the data through a timestamp alignment algorithm and spatial registration to generate a preliminarily aligned multi-modal data set; and perform weighted fusion of the features of the biomechanical performance data, medical image data, and motion capture data on the preliminarily aligned multi-modal data set with a multi-head-attention-based feature fusion method, capturing the associations among the different data sources through a cross-modal attention mechanism to generate the fused multi-modal data set.
  7. The system according to claim 6, wherein the extraction module is specifically configured to: extract, with an adaptive-convolutional-neural-network-based feature extraction method, stress-strain features from the biomechanical performance data, structural features from the medical image data, and motion trajectory features from the motion capture data, capturing mechanical features at different scales through multi-scale convolution kernels to generate a preliminary feature representation; capture, with a dynamic modeling method based on a temporal convolution module and in combination with the load changes of the prosthesis during different phases of motion, the time-series characteristics of the stress distribution and strain changes, extracting short-term fluctuation and long-term trend features through a sliding-window mechanism to generate a dynamic mechanical feature representation; perform, on the dynamic mechanical feature representation, weighted fusion of the stress distribution features, strain change features, and motion trajectory features with a multi-head-attention-based feature fusion method, capturing the associations among the different features through a cross-modal attention mechanism to generate a fused dynamic mechanical feature matrix; and dynamically adjust the feature weights of the fused dynamic mechanical feature matrix with a Bayesian-optimization-based calibration method in combination with the material characteristics and actual usage scenarios of the prosthesis, preventing overfitting through regularization constraints to generate the final dynamic mechanical feature matrix.
  8. The system according to claim 7, wherein the analysis module is specifically configured to: extract, with a multi-scale-convolutional-neural-network-based feature extraction method, local features of the prosthesis at the micro scale from the dynamic mechanical feature matrix, capturing local mechanical features at different scales through adaptive convolution kernels to generate a local feature representation; extract global features from the dynamic mechanical feature matrix with a graph-convolutional-network-based global feature extraction method, abstracting the overall mechanical behavior of the prosthesis into a graph structure and capturing global mechanical features through multi-layer graph convolution operations to generate a global feature representation; perform weighted fusion of the local feature representation and the global feature representation with a multi-scale-attention-based feature fusion method, capturing the associations between local and global features through a cross-scale attention mechanism to generate a fused multi-scale feature representation; and quantitatively analyze the fused multi-scale feature representation with a deep-neural-network-based performance evaluation model in combination with the stability, wear resistance, and biocompatibility indices of the prosthesis, generating, through an interpretability module in the performance evaluation model, a performance evaluation report containing quantitative results and the evaluation basis.
  9. A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the method of any one of claims 1 to 4 when run.
  10. An electronic device comprising a memory and a processor, wherein the memory stores a computer program and the processor is arranged to run the computer program to perform the method of any one of claims 1 to 4.
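Claims 3 and 7 describe a temporal-convolution module with a sliding-window mechanism that separates short-term fluctuation from long-term trend in the stress and strain signals. The numpy sketch below illustrates one plausible reading of that decomposition; the kernel and window sizes are illustrative assumptions, not values disclosed in the patent.

```python
import numpy as np

def temporal_features(signal, short_win=5, long_win=25):
    """Split a 1-D load signal into short-term fluctuation and long-term
    trend via moving-average (box) convolutions. Hypothetical sketch of
    the claimed sliding-window temporal-feature extraction."""
    kernel_s = np.ones(short_win) / short_win
    kernel_l = np.ones(long_win) / long_win
    trend = np.convolve(signal, kernel_l, mode="same")   # long-term trend
    smooth = np.convolve(signal, kernel_s, mode="same")  # local average
    fluctuation = signal - smooth                        # short-term fluctuation
    return np.stack([fluctuation, trend], axis=0)

# Example: a noisy periodic strain signal over one gait cycle sequence.
rng = np.random.default_rng(0)
strain = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.05 * rng.standard_normal(200)
feats = temporal_features(strain)
print(feats.shape)  # (2, 200)
```

In the patented system this role is played by a learned temporal-convolution network rather than fixed box filters; the sketch only shows the fluctuation/trend separation the claims describe.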

Description

Knee joint prosthesis evaluation method and system

Technical Field

The invention belongs to the technical field of joint prostheses, and particularly relates to a knee joint prosthesis evaluation method and system.

Background

Knee prosthesis replacement surgery has become an effective treatment for severe knee joint disorders (e.g., osteoarthritis and rheumatoid arthritis), aiming to restore motor function and quality of life to the patient. However, as prosthetic materials and design techniques continue to advance, evaluating knee joint prostheses comprehensively and scientifically, so as to ensure their performance and safety, has become an important subject in medical research and clinical practice. Existing knee joint prosthesis evaluation methods often depend on a single evaluation index, typically focusing on biomechanical performance tests and postoperative clinical outcomes, and lack comprehensive analysis of data from different sources. Such approaches have difficulty fully reflecting how the prosthesis performs in actual use, particularly its stress distribution and motion state under dynamic loading. Efficient integration of medical image data with postoperative motion capture data also remains a significant challenge in the current art.

Disclosure of the Invention

The invention aims to provide a knee joint prosthesis evaluation method and system that address the above shortcomings of the prior art, enable comprehensive quantitative analysis of prosthesis performance, and improve the scientific rigor and accuracy of prosthesis design.
One embodiment of the present application provides a knee joint prosthesis evaluation method, the method comprising: acquiring multi-modal data from biomechanical performance test data, medical image data, and postoperative motion capture data of a patient fitted with the knee joint prosthesis, performing data fusion with a deep-learning-based multi-modal alignment network, and mapping data from different sources into a unified spatio-temporal coordinate system to obtain a fused multi-modal data set; inputting the multi-modal data set into an adaptive convolutional neural network and extracting mechanical features of the knee joint prosthesis under dynamic load, wherein the adaptive convolutional neural network captures the stress distribution and strain changes of the prosthesis during different phases of motion through temporal convolution to obtain a dynamic mechanical feature matrix; and inputting the dynamic mechanical feature matrix into a performance evaluation model based on a multi-scale attention mechanism and quantitatively analyzing the performance of the prosthesis, wherein the performance evaluation model evaluates the stability, wear resistance, and biocompatibility of the prosthesis by combining local and global features and generates a performance evaluation report.
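The evaluation model described above weights and combines local and global features through a cross-scale attention mechanism before scoring stability, wear resistance, and biocompatibility. The sketch below illustrates that weighted-fusion idea with random placeholder matrices standing in for trained parameters; the shapes, names, and scoring head are assumptions for illustration, not the patented model.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(42)
local_feat = rng.standard_normal(8)    # micro-scale (local) features
global_feat = rng.standard_normal(8)   # graph-level (global) features
scales = np.stack([local_feat, global_feat])   # (2, 8): one row per scale

# Cross-scale attention: score each scale, normalize, and form a
# weighted combination. The query vector here is a random placeholder.
attn_logits = scales @ rng.standard_normal(8)  # one logit per scale
weights = softmax(attn_logits)                 # attention weights sum to 1
fused = weights @ scales                       # (8,) fused representation

# Placeholder output head mapping the fused features to three indices:
# stability, wear resistance, biocompatibility.
W_out = rng.standard_normal((8, 3))
scores = fused @ W_out
print(fused.shape, scores.shape)  # (8,) (3,)
```

A trained system would learn the attention query and output head end to end and attach an interpretability module to the scores; only the fusion arithmetic is sketched here.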
Optionally, acquiring the multi-modal data from the biomechanical performance test data, medical image data, and postoperative motion capture data of the knee joint prosthesis, performing data fusion with the deep-learning-based multi-modal alignment network, and mapping data from different sources into a unified spatio-temporal coordinate system to obtain the fused multi-modal data set includes: acquiring the multi-modal data in real time with an edge-computing-based data acquisition framework, ensuring the timeliness and continuity of data acquisition through a lightweight data caching mechanism; automatically identifying the format types of the different data sources with a deep-learning-based format recognition model, and performing noise filtering and missing-value imputation on the data with an adaptive data cleaning algorithm to generate a preliminary standardized data set; mapping the data from different sources in the preliminary standardized data set into a unified spatio-temporal coordinate system with the deep-learning-based multi-modal alignment network, eliminating temporal and spatial differences among the data through a timestamp alignment algorithm and spatial registration to generate a preliminarily aligned multi-modal data set; and performing weighted fusion of the features of the biomechanical performance data, medical image data, and motion capture data on the preliminarily aligned multi-modal data set with a multi-head-attention-based feature fusion method, capturing the associations among the different data sources through a cross-modal attention mechanism to generate the fused multi-modal data set. Optionally, the inputting the mu