
CN-121971286-A - Acupuncture point auxiliary positioning method based on image fusion

CN 121971286 A

Abstract

The invention relates to the technical fields of acupoint positioning and computer image processing, and discloses an acupoint auxiliary positioning method based on image fusion. The method comprises: receiving an acupoint selection instruction input by a user and confirming a target acupoint; acquiring a subcutaneous blood-vessel distribution image and a body-surface image of the target acupoint's target area; fusing the subcutaneous blood-vessel distribution image and the body-surface image to obtain a blood vessel-body surface fusion model of the target area; calculating predicted target-acupoint coordinates based on the blood vessel-body surface fusion model and a blood vessel distribution-acupoint model; and converting the predicted coordinates into laser projection instructions, controlling a laser emitter to project a positioning light spot onto the body surface, and outputting the acupoint information.
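As a rough illustration only (not the patented implementation), the five claimed steps can be sketched as a pipeline of stub functions. Every function name, function body, and the toy data below are hypothetical placeholders:

```python
# Hypothetical sketch of the claimed five-step pipeline; each function
# body is a toy placeholder, not the patent's actual algorithm.

def confirm_target_acupoint(instruction: str) -> str:
    """Step 1: parse the user's selection instruction (voice or key input)."""
    return instruction.strip().lower()

def acquire_images(acupoint: str) -> tuple:
    """Step 2: acquire vessel-distribution and body-surface images (dummy data)."""
    vessel_img = [[0.2, 0.8], [0.5, 0.1]]   # stand-in for a vessel-distribution image
    surface_img = [[0.9, 0.7], [0.6, 0.4]]  # stand-in for a body-surface image
    return vessel_img, surface_img

def fuse(vessel_img, surface_img):
    """Step 3: fuse the two images into a vessel/body-surface model (toy average)."""
    return [[(v + s) / 2 for v, s in zip(vr, sr)]
            for vr, sr in zip(vessel_img, surface_img)]

def predict_coordinates(fusion_model) -> tuple:
    """Step 4: predict acupoint coordinates from the fusion model (toy centroid)."""
    flat = [x for row in fusion_model for x in row]
    mean = sum(flat) / len(flat)
    return (mean, mean)

def project_laser(coords: tuple) -> str:
    """Step 5: convert coordinates into a laser-projection instruction string."""
    return f"PROJECT x={coords[0]:.2f} y={coords[1]:.2f}"

target = confirm_target_acupoint("Zusanli")
fusion = fuse(*acquire_images(target))
print(project_laser(predict_coordinates(fusion)))
```

The sketch only shows the data flow between the five steps; each stub would be replaced by the imaging, fusion, prediction, and projection components the claims describe.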

Inventors

  • WU KEMEI
  • YAO JIANBIN
  • ZHAO ZIHAO
  • WANG SEN

Assignees

  • The Fifth Affiliated Hospital of Zhengzhou University (郑州大学第五附属医院)

Dates

Publication Date
2026-05-05
Application Date
2026-02-06

Claims (7)

  1. An acupoint-assisted positioning method based on image fusion, characterized by comprising the following steps: receiving an acupoint selection instruction input by a user and confirming a target acupoint; acquiring a subcutaneous blood-vessel distribution image and a body-surface image of the target acupoint's target area; fusing the subcutaneous blood-vessel distribution image and the body-surface image to obtain a blood vessel-body surface fusion model of the target area; calculating predicted target-acupoint coordinates based on the blood vessel-body surface fusion model and a blood vessel distribution-acupoint model; converting the predicted target-acupoint coordinates into laser projection instructions, controlling a laser emitter to project a positioning light spot onto the body surface, and outputting the acupoint information.
  2. The method for assisting in positioning acupoints based on image fusion according to claim 1, wherein receiving the acupoint selection instruction input by the user specifically comprises: the acupoint selection instruction comprises a voice instruction or a key-input instruction.
  3. The method for assisting in positioning acupoints based on image fusion according to claim 1, wherein acquiring the subcutaneous blood-vessel distribution image and the body-surface image of the target acupoint's target area comprises the following steps: measuring, in real time through a laser ranging sensor, the working distances from the blood-vessel imager and the binocular camera to the target area on the patient's body surface; comparing the working distance against a threshold range; if the working distance is within the threshold range, judging that the blood-vessel imager and the binocular camera are in effective working positions, synchronously triggering the blood-vessel imager and the binocular camera, and respectively acquiring the subcutaneous blood-vessel distribution image and the body-surface image of the target area; if the working distance is not within the threshold range, issuing a voice guidance instruction until the working distance is within the threshold range; the laser ranging sensor, the binocular camera, and the blood-vessel imager maintain a fixed pose relationship.
  4. The method for assisting in positioning acupoints based on image fusion according to claim 3, wherein fusing the subcutaneous blood-vessel distribution image and the body-surface image to obtain the blood vessel-body surface fusion model of the target area comprises the following steps: performing three-dimensional reconstruction on the body-surface image to generate a body-surface three-dimensional point cloud; enhancing the subcutaneous blood-vessel distribution image and extracting features to obtain a vascular feature map, the extracted features comprising the topology, branch points, and curvature features of the vascular network; mapping the vascular feature map onto the body-surface three-dimensional point cloud based on the preset fixed pose relationship between the blood-vessel imager and the binocular camera; estimating the relative depth of the subcutaneous vessels from the contrast in the subcutaneous blood-vessel distribution image, correlating it with the body-surface three-dimensional point cloud, and generating a blood vessel-body surface fusion model that fuses body-surface three-dimensional geometry, texture, subcutaneous vessel distribution, and depth information.
  5. The method for assisting in positioning acupoints based on image fusion according to claim 4, wherein mapping the vascular feature map onto the body-surface three-dimensional point cloud based on the preset fixed pose relationship between the blood-vessel imager and the binocular camera comprises the following steps: inputting the vascular feature map and the corresponding texture image of the body-surface three-dimensional point cloud into a deformation-field prediction model, and outputting a deformation field; the training objective function of the deformation-field prediction model being: L(φ) = L_sim(I_t, I_v ∘ φ) + λ·R(φ) (1). In formula (1), L_sim is a measure of the similarity of the two images, I_v ∘ φ is the image spatially transformed by the deformation field φ, R(φ) is a regularization term, λ is a balancing parameter, I_v is the vascular feature map, and I_t is the texture image corresponding to the body-surface three-dimensional point cloud.
  6. The method for assisting in positioning acupoints based on image fusion according to claim 1, wherein calculating the predicted target-acupoint coordinates based on the blood vessel-body surface fusion model and the blood vessel distribution-acupoint model specifically comprises: inputting the data in the blood vessel-body surface fusion model and the actual working distance into the blood vessel distribution-acupoint model to obtain the predicted target-acupoint coordinates.
  7. The image-fusion-based acupoint-assisted positioning method of claim 6, wherein the blood vessel distribution-acupoint model is constructed as follows: collecting multiple groups of target-area sample data from people of different ages, different body types, and different sexes to construct a data set, the data set comprising the subcutaneous blood-vessel distribution images, body-surface three-dimensional point-cloud data, individualized blood vessel-body surface fusion-model data, standard acupoint coordinates marked by professional doctors, and the measured working distances from the blood-vessel imager and the binocular camera to the patient's body-surface target area; standardizing the data set: removing abnormal samples, converting the working distances and attitude angles of all samples into parameters under a standard coordinate system, gray-scaling the subcutaneous blood-vessel distribution images, and determining the body-surface reference contrast C0; dividing the standardized data into a training set and a test set at a ratio of 7:3; obtaining the vascular feature coordinates V_i by weighting the three-dimensional coordinates of the vascular branch points and the vascular-curvature extreme points of the vascular feature map, the vascular topology comprising at least the trend, branching level, and connection relations of the vessels; downsampling and denoising the body-surface three-dimensional point cloud and identifying skeletal landmark points and skin-texture inflection points to obtain the body-surface feature coordinates S_i; the acupoint prediction expression being:

     P̂_i = α·(w1·V_i + w2·S_i + w3·θx + w4·θy + w5·θz + w6·d_i) + b_i    (2)

In formula (2), P̂_i is the predicted target-acupoint coordinate of the i-th sample; α = C0/C is the depth calibration factor, C0 being the body-surface reference contrast and C the vascular-region contrast; w1, w2, w3, w4, w5, w6 are feature weight coefficients satisfying w1 + w2 + w3 + w4 + w5 + w6 = 1; θx, θy, θz are the spatial attitude angles of the binocular camera relative to the target area; b_i = (b_x, b_y, b_z) is a bias compensation term with b_x = b_y = b_z = η·σ_i, η being the compensation coefficient and σ_i the standard deviation of the sample's positioning error; d_i is the real-time working distance acquired by the laser ranging sensor. Taking minimization of the acupoint positioning error as the objective, the loss function is defined as:

     L = (1/N) · Σ_{i=1}^{N} ‖P̂_i − P_i‖²    (3)

In formula (3), P_i is the standard acupoint coordinate marked by the doctor and N is the number of samples; the loss function is optimized iteratively, updating w1-w6, b_i, and η by back-propagation, and training stops when L falls below a set threshold or the maximum number of iterations is reached; the test set is then input into the trained model, the positioning error of each sample is calculated, and the model parameters are output when the average positioning error over the samples is smaller than a preset clinical precision threshold.
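The distance-gating logic of claim 3 amounts to a threshold check on the ranging reading before triggering both sensors. The sketch below is a hypothetical rendering; the threshold window, function name, and guidance strings are assumptions, not values from the patent:

```python
# Hypothetical distance gate before synchronously triggering the vessel
# imager and the binocular camera (threshold values are made up).

WORKING_RANGE_MM = (250.0, 350.0)  # assumed effective working-distance window

def gate_acquisition(distance_mm: float,
                     lo: float = WORKING_RANGE_MM[0],
                     hi: float = WORKING_RANGE_MM[1]) -> str:
    """Return 'capture' when in range, else a voice-guidance hint."""
    if lo <= distance_mm <= hi:
        return "capture"           # trigger both devices synchronously
    if distance_mm < lo:
        return "guide: move away"  # too close to the body surface
    return "guide: move closer"    # too far from the body surface

# Example readings from the laser ranging sensor:
for d in (200.0, 300.0, 400.0):
    print(d, gate_acquisition(d))
```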
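Claim 4 estimates the relative depth of a subcutaneous vessel from its image contrast (in near-infrared vessel imaging, deeper vessels generally image with lower contrast). The patent only states that depth is estimated "according to the contrast"; the inverse mapping and the numbers below are illustrative assumptions:

```python
# Toy estimate of relative vessel depth from local contrast; the linear
# inverse mapping is an illustrative assumption, not the patent's formula.

def relative_depth(contrast: float, reference_contrast: float) -> float:
    """Map contrast in [0, reference] to a relative depth in [0, 1].

    Higher contrast -> shallower vessel (depth near 0);
    lower contrast  -> deeper vessel   (depth near 1).
    """
    c = max(0.0, min(contrast, reference_contrast))
    return 1.0 - c / reference_contrast

ref = 0.8  # assumed body-surface reference contrast
print(relative_depth(0.8, ref))  # high contrast: shallow vessel
print(relative_depth(0.2, ref))  # low contrast: deeper vessel
```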
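Objective (1) of claim 5 has the familiar shape of a deformable-registration loss: a similarity term between the fixed image and the warped moving image, plus a weighted smoothness regularizer on the deformation field. The NumPy toy below works on 1-D signals, with mean squared error as the similarity and a gradient penalty as the regularizer; these specific choices, the warping scheme, and the data are all illustrative assumptions:

```python
import numpy as np

# Toy registration objective L(phi) = L_sim(I_t, I_v o phi) + lam * R(phi)
# on 1-D signals; similarity and regularizer choices are assumptions.

def warp(image: np.ndarray, phi: np.ndarray) -> np.ndarray:
    """Apply deformation field phi (per-pixel displacement) by linear interpolation."""
    coords = np.arange(image.size) + phi
    return np.interp(coords, np.arange(image.size), image)

def registration_loss(i_t, i_v, phi, lam=0.1):
    """MSE between texture image and warped vessel map, plus smoothness penalty."""
    sim = np.mean((i_t - warp(i_v, phi)) ** 2)
    reg = np.mean(np.diff(phi) ** 2)  # penalize non-smooth deformation
    return sim + lam * reg

i_v = np.array([0.0, 1.0, 0.0, 0.0])  # "vessel feature map"
i_t = np.array([0.0, 0.0, 1.0, 0.0])  # "texture image": same bump shifted by one
zero_phi = np.zeros(4)
shift_phi = np.full(4, -1.0)          # shift content right by one pixel
print(registration_loss(i_t, i_v, zero_phi))   # misaligned: nonzero loss
print(registration_loss(i_t, i_v, shift_phi))  # aligned: loss drops to zero
```

A learned deformation-field predictor would be trained so that its output field drives this loss down, as the claim describes.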
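Expression (2) and loss (3) of claim 7 amount to a depth-calibrated weighted regression from feature coordinates to acupoint coordinates, trained by minimizing the mean squared positioning error. A minimal NumPy sketch under those assumptions, with made-up data, a fixed calibration factor, a 1-D target, and plain gradient descent standing in for the claimed back-propagation:

```python
import numpy as np

# Toy version of the claimed model: prediction = depth-calibrated weighted
# sum of feature coordinates plus bias, fitted by gradient descent on the
# mean squared positioning error. All data and dimensions are made up.

rng = np.random.default_rng(0)
N, F = 30, 6                      # samples; features (vessel, surface, pose, distance)
X = rng.normal(size=(N, F))       # standardized feature coordinates per sample
true_w = np.array([0.5, 0.3, 0.1, 0.05, 0.03, 0.02])
y = X @ true_w + 0.7              # "doctor-marked" coordinates (1-D for simplicity)

alpha = 0.9                       # assumed depth-calibration factor, held fixed here
w = np.zeros(F)
b = 0.0
lr = 0.05
for _ in range(500):
    pred = alpha * (X @ w) + b                # expression (2), simplified
    err = pred - y
    loss = np.mean(err ** 2)                  # loss (3)
    w -= lr * (2 / N) * alpha * (X.T @ err)   # gradient step on the weights
    b -= lr * (2 / N) * err.sum()             # gradient step on the bias

print(f"final loss: {loss:.6f}")
```

On this exactly-linear toy data the loss converges near zero; the claimed pipeline would additionally split 7:3, validate on the test set, and stop at a clinical precision threshold.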

Description

Acupuncture point auxiliary positioning method based on image fusion

Technical Field

The invention relates to the technical fields of acupoint positioning and computer image processing, in particular to an acupoint auxiliary positioning method based on image fusion.

Background

In clinical practice such as acupuncture, acupoint injection, and external treatment in traditional Chinese medicine, accurate acupoint positioning is the cornerstone of curative effect and safety. Traditional positioning methods mainly depend on the practical experience of doctors, adopting the proportional bone (bone-cun) measurement method, the body-surface landmark method, and the like, and have inherent limitations such as strong subjectivity, a low degree of standardization, and difficulty of quantification and transmission. With the rapid development of computer technology, sensor technology, and digital information processing, the medical field is gradually advancing toward informatization and intelligence, and the requirements for precision, visualization, and quantification in clinical operation are increasingly prominent. Against this background, some electronic auxiliary acupoint positioning devices have been developed, attempting to improve the objectivity and consistency of acupoint positioning by means of computation, signal acquisition, and data analysis. However, the existing schemes still have defects in practical application and can hardly satisfy the combined clinical requirements of accuracy, stability, and usability.
The prior-art solutions still have the following drawbacks: (1) Lack of individualized adaptation. Subcutaneous blood-vessel distribution varies markedly between individuals owing to age, body type, sex, fat thickness, and other factors, and vessel positions are closely related to acupoint positions. Existing schemes mostly perform proportional conversion based on standard human-body models or rely only on fixed body-surface landmarks, and cannot incorporate the individual's specific subcutaneous vascular structure into the positioning decision model; consequently, positioning accuracy drops sharply for patients with special body types or anatomical variation, and truly individualized precision treatment cannot be achieved. (2) Severe disconnection between positioning and operation. Most current vision- or image-based auxiliary systems can calculate the virtual position of an acupoint in a screen coordinate system through image-processing algorithms, but that virtual coordinate cannot be mapped directly onto the patient's body surface in real time; the doctor must repeatedly switch gaze between the screen and the patient and perform the position conversion and search by clinical experience. In view of the foregoing, there is a need for an acupoint-assisted positioning method that achieves seamless connection between positioning and operation, provides natural and efficient human-computer interaction, and has individualized adaptive capability.

Disclosure of Invention

(I) Technical problems solved

Aiming at the defects of the prior art, the application provides an acupoint auxiliary positioning method based on image fusion.
(II) Technical scheme

To solve the above problems, the application provides the following technical scheme. An acupoint-assisted positioning method based on image fusion comprises the following steps: receiving an acupoint selection instruction input by a user and confirming a target acupoint; acquiring a subcutaneous blood-vessel distribution image and a body-surface image of the target acupoint's target area; fusing the subcutaneous blood-vessel distribution image and the body-surface image to obtain a blood vessel-body surface fusion model of the target area; calculating predicted target-acupoint coordinates based on the blood vessel-body surface fusion model and a blood vessel distribution-acupoint model; converting the predicted target-acupoint coordinates into laser projection instructions, controlling a laser emitter to project a positioning light spot onto the body surface, and outputting the acupoint information. Preferably, receiving the acupoint selection instruction input by the user specifically includes: the acupoint selection instruction comprises a voice instruction or a key-input instruction. Preferably, acquiring the subcutaneous blood-vessel distribution image and the body-surface image of the target acupoint's target area specifically includes: measuring working distances from t