CN-121999665-A - Urostomy nursing simulation teaching method and system based on mixed reality


Abstract

The invention provides a urostomy nursing simulation teaching method and system based on mixed reality. The method comprises: collecting scene point cloud data and a urostomy three-dimensional model, and simplifying the three-dimensional model to obtain a simplified urostomy model; registering the scene point cloud data with the simplified urostomy model so as to superimpose the simplified model at the corresponding position of a human abdomen model and obtain an initial teaching scene; during nursing simulation teaching, capturing surface depressions of the model in the initial teaching scene, acquiring the teaching scene point cloud corresponding to the current frame, and updating the simplified urostomy model in the initial teaching scene based on that point cloud to obtain an updated teaching scene; and completing the urostomy nursing simulation teaching based on the updated teaching scene in combination with gaze, gesture and voice interaction.

Inventors

  • YE FAN
  • ZENG TAO
  • WAN LIWEN

Assignees

  • The Second Affiliated Hospital of Nanchang University (南昌大学第二附属医院)

Dates

Publication Date
2026-05-08
Application Date
2026-04-02

Claims (10)

  1. A urostomy nursing simulation teaching method based on mixed reality, characterized by comprising the following steps: acquiring scene point cloud data containing a human abdomen model, importing a urostomy three-dimensional model, and simplifying the urostomy three-dimensional model to obtain a simplified urostomy model; registering the scene point cloud data with the simplified urostomy model so as to superimpose the simplified urostomy model on the corresponding position of the human abdomen model and obtain an initial teaching scene; during nursing simulation teaching, capturing surface depressions of the model in the initial teaching scene, acquiring the teaching scene point cloud corresponding to the current frame, and updating the simplified urostomy model in the initial teaching scene based on the teaching scene point cloud to obtain an updated teaching scene; and completing the urostomy nursing simulation teaching based on the updated teaching scene in combination with gaze, gesture and voice interaction.
  2. The mixed reality-based urostomy care simulation teaching method according to claim 1, wherein the step of simplifying the urostomy three-dimensional model to obtain a simplified urostomy model specifically comprises: extracting the set of triangular patches of the urostomy model and determining the unit normal vector of each triangular patch; calculating the error matrix of each vertex v of the model from the unit normal vectors, Q(v) = Σ_{f ∈ N1(v)} p_f p_f^T with p_f = (n_f^T, d_f)^T, where N1(v) is the first-order neighborhood triangular-patch set of vertex v, f is a triangular patch in N1(v), n_f is the unit normal vector of f, d_f is the constant term in the plane equation of f, and T denotes the transpose; calculating for each vertex v a normal variation value from the deviation between the normal vector n_v of the vertex and the normal vectors of its neighborhood patches; taking the sum of the areas of all triangular patches associated with vertex v as its neighborhood area; for each edge to be folded, calculating the folding cost of the new folded vertex from the neighborhood areas, the vertex normal variation values, and the error matrices of the edge's two vertices; minimizing the folding cost to determine the optimal position of the new vertex; selecting the edge with the minimum folding cost for folding; and iteratively repeating the folding process until the target simplification requirement is met, so as to output the simplified urostomy model.
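The error matrix in claim 2 matches the standard quadric error metric (QEM) of Garland–Heckbert edge-collapse simplification; the claim's specific folding-cost weighting by normal variation and neighborhood area is not fully disclosed in this text. A minimal sketch of the quadric part only, in Python with NumPy (function names and the test mesh are illustrative, not from the patent):

```python
import numpy as np

def face_quadric(v0, v1, v2):
    """Quadric K_f = p p^T for the plane of triangle (v0, v1, v2),
    with p = (n, d): unit normal n and plane constant d = -n . v0."""
    n = np.cross(v1 - v0, v2 - v0)
    n = n / np.linalg.norm(n)
    d = -np.dot(n, v0)
    p = np.append(n, d)            # homogeneous plane vector (a, b, c, d)
    return np.outer(p, p)          # 4x4 symmetric quadric

def vertex_quadric(vertex_idx, vertices, faces):
    """Error matrix of a vertex: sum of quadrics over its first-order
    neighborhood triangles (all faces touching the vertex)."""
    Q = np.zeros((4, 4))
    for f in faces:
        if vertex_idx in f:
            Q += face_quadric(*(vertices[i] for i in f))
    return Q

def collapse_cost(Q1, Q2, v_new):
    """Plain quadric cost of collapsing an edge (v1, v2) to v_new;
    the patent additionally weights this by normal variation and area."""
    vh = np.append(v_new, 1.0)     # homogeneous coordinates
    return vh @ (Q1 + Q2) @ vh
```

A point kept on the plane of its neighborhood faces has zero quadric cost; moving it off-plane makes the cost grow quadratically, which is what drives edges on flat regions to be folded first.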
  3. The mixed reality-based urostomy care simulation teaching method according to claim 1, wherein the step of registering the scene point cloud data with the simplified urostomy model to superimpose the simplified urostomy model on the corresponding position of the human abdomen model and obtain an initial teaching scene comprises: preprocessing the scene point cloud data and the simplified urostomy model to obtain processed scene point cloud data and processed model point cloud data; randomly selecting from the processed scene point cloud data a group of four points that are coplanar and of which no three are collinear, to obtain a reference point cloud set {a, b, c, d}; calculating a first ratio r1 and a second ratio r2 based on the reference point cloud set, r1 = ‖a − e‖ / ‖a − b‖ and r2 = ‖c − e‖ / ‖c − d‖, where a, b, c, d are the four points of the reference set and e is the intersection point of line ab and line cd; arbitrarily selecting two points q1, q2 in the processed model point cloud data and determining a first expected intersection point e1 = q1 + r1 (q2 − q1) and a second expected intersection point e2 = q1 + r2 (q2 − q1) based on the first and second ratios; matching all possible expected intersection points in the processed model point cloud data, and if two pairs of points exist whose calculated expected intersection points coincide, taking the set of those four points as a matching point cloud set of the reference point cloud set; determining a rigid body transformation matrix from the plurality of matching point clouds by the least squares method; applying the rigid body transformation matrix to the points of the processed model point cloud data to obtain a preliminary registration model point cloud; and determining the initial teaching scene based on the preliminary registration model point cloud.
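The ratios in claim 3 are the affine-invariant ratios used by 4PCS-style coarse registration: a coplanar four-point base in the scene yields two ratios that any rigidly transformed copy of the base must reproduce. A short sketch of the ratio computation and the predicted intersection points for a candidate model pair (variable names are illustrative):

```python
import numpy as np

def invariant_ratios(a, b, c, d, e):
    """Invariant ratios of a coplanar base {a, b, c, d} whose lines
    ab and cd intersect at e: r1 = |a-e|/|a-b|, r2 = |c-e|/|c-d|.
    Rigid (and affine) transforms preserve these ratios."""
    r1 = np.linalg.norm(a - e) / np.linalg.norm(a - b)
    r2 = np.linalg.norm(c - e) / np.linalg.norm(c - d)
    return r1, r2

def predicted_intersections(q1, q2, r1, r2):
    """For a candidate pair (q1, q2) in the model cloud, the two points
    where the intersection e would fall if (q1, q2) matched (a, b)
    or (c, d) respectively; coinciding predictions from two pairs
    identify a matching four-point set."""
    e1 = q1 + r1 * (q2 - q1)
    e2 = q1 + r2 * (q2 - q1)
    return e1, e2
```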
  4. The mixed reality-based urostomy care simulation teaching method according to claim 3, wherein the step of determining the initial teaching scene based on the preliminary registration model point cloud comprises: for each point p_i in the processed scene point cloud data, searching the preliminary registration model point cloud with a tree-structure search algorithm for the corresponding model point q_i, to obtain a plurality of point cloud pairs (p_i, q_i); determining the distance between the two points of each point cloud pair and rejecting the pairs whose distance is larger than a preset distance threshold, to obtain screened point cloud pairs; determining an objective function based on the screened point cloud pairs, E(R, t) = Σ_i ‖R p_i + t − q_i‖², where R and t are the rotation matrix and the translation matrix respectively; solving by minimizing the objective function to obtain a target rotation matrix and a target translation matrix, and applying them to the simplified urostomy model to obtain an updated model point cloud; iteratively repeating the process of determining the screened point cloud pairs and the objective function to update the target rotation matrix and target translation matrix until the iteration stop condition is met, and outputting the final target rotation matrix and final target translation matrix; and registering the preliminary registration model point cloud into the scene based on the final target rotation matrix and final target translation matrix to obtain the initial teaching scene.
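For a fixed set of matched pairs, the least-squares objective E(R, t) = Σ‖R p_i + t − q_i‖² used in claims 3 and 4 has a closed-form solution via SVD (the Kabsch/Umeyama method); ICP alternates this solve with re-matching. A minimal sketch, assuming matched (N, 3) NumPy arrays:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Closed-form least-squares solution of
    min_{R,t} sum_i ||R p_i + t - q_i||^2 for matched point sets
    P, Q of shape (N, 3). Returns (R, t) with R a proper rotation."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Sign correction keeps det(R) = +1 (rotation, not reflection)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = cq - R @ cp
    return R, t
```

Inside an ICP loop this solve runs once per iteration, after nearest-neighbor matching and distance-threshold screening of the pairs.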
  5. The mixed reality-based urostomy care simulation teaching method according to claim 1, wherein the step of updating the simplified urostomy model in the initial teaching scene based on the teaching scene point cloud to obtain an updated teaching scene comprises: extracting from the teaching scene point cloud the current-frame scene point cloud and the current-frame model point cloud, and determining the normal vectors of each; determining a multidimensional scene point cloud and a multidimensional model point cloud from the current-frame point clouds and their normal vectors, wherein each point carries its position together with its normal vector, a normal vector is marked as credible or non-credible by comparing the curvature of the point in the current-frame scene and model point clouds against a curvature threshold, and point-cloud credibility is carried over using the credible normal vectors of the previous-frame scene point cloud and the current-frame model point cloud; determining a generative density function of the multidimensional scene point cloud as a mixture comprising a uniformly distributed outlier component with its distribution weight, separate components for the points with credible and non-credible normal vectors in the current-frame scene and model point clouds, and, for each multidimensional scene point matched to a multidimensional model point, a probability density function composed of anisotropic densities of the position vector and the direction vector, parameterized by the matching displacement vector, a covariance matrix, a concentration degree, and a non-rigid transformation displacement function; determining a target parametric model from the generative density function by maximizing, over the model parameters before and after updating, the expectation of the log probability densities weighted by the posterior probabilities that each multidimensional scene point matches each multidimensional model point, together with a regularization term formed by the regularization coefficient and the norm of the displacement function in the reproducing kernel Hilbert space, the weighting depending on the proportion of credible normal vectors in the multidimensional scene point cloud; updating, based on the target parametric model, the covariance matrix from the sum of the credible and non-credible posterior probabilities; updating the non-rigid transformation displacement function through a weight matrix obtained from the Gram matrix, the matrix of posterior probabilities, an all-ones vector, and the vectors formed by the points of the multidimensional scene and model point clouds; updating the concentration degree from the sum of the non-credible posterior probabilities; iteratively repeating the model parameter updates until the iteration stop condition is met, outputting the non-rigid transformation displacement function updated in the last iteration, and determining the final model point cloud by applying this displacement function to the model point cloud; and registering the final model point cloud into the multidimensional scene point cloud to obtain the updated teaching scene.
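The update in claim 5 estimates posterior matching probabilities between scene and model points under a mixture density with a uniform outlier component, in the spirit of Coherent Point Drift (CPD). A deliberately simplified sketch of such an E-step, positions only, with no normal-vector or credibility components (all names, and the `volume` parameterization of the outlier term, are assumptions):

```python
import numpy as np

def cpd_posteriors(X, Y, sigma2, w, volume):
    """E-step of a CPD-style mixture: posterior P[m, n] that scene point
    X[n] was generated by model point Y[m], given isotropic variance
    sigma2 and a uniform outlier component of weight w over a scene
    region of the given volume. X: (N, D), Y: (M, D)."""
    M = len(Y)
    D = X.shape[1]
    # (M, N) squared distances between every model and scene point
    d2 = ((X[None, :, :] - Y[:, None, :]) ** 2).sum(-1)
    num = np.exp(-d2 / (2.0 * sigma2))
    # Constant absorbing the Gaussian normalizer and the outlier term
    c = (2 * np.pi * sigma2) ** (D / 2) * (w / (1 - w)) * M / volume
    return num / (num.sum(axis=0, keepdims=True) + c)
```

In the full method these posteriors would weight the M-step updates of the covariance matrix, the displacement function (via the Gram matrix), and the concentration degree; scene points far from every model point get low total posterior and are effectively treated as outliers.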
  6. The mixed reality-based urostomy care simulation teaching method according to claim 1, wherein the step of completing the urostomy care simulation teaching based on the updated teaching scene in combination with gaze, gesture and voice interaction comprises: rendering the model in the updated teaching scene and accessing a multimodal interaction interface comprising gaze, gestures and voice; acquiring the user's line-of-sight ray in real time and triggering an interaction event when the ray intersects a virtual object and a trigger condition is met; tracking the positions of the hand joint points and recognizing preset gestures, and executing the corresponding operation when a specific gesture is detected and collides with a virtual object; recognizing preset voice commands and, upon successful recognition, executing the corresponding teaching flow control or information display; and integrating these interactions to simulate a complete urostomy nursing flow, so as to complete the urostomy nursing simulation teaching.
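Claim 6 triggers an interaction event when the line-of-sight ray intersects a virtual object and a trigger condition is met. A minimal illustrative sketch using a sphere proxy collider for the object and a dwell-time trigger condition (the sphere collider and dwell time are assumptions for illustration; the patent does not specify the trigger condition):

```python
import numpy as np

def gaze_hits(origin, direction, center, radius):
    """True if the gaze ray origin + s * direction (s >= 0) intersects
    a sphere proxy collider of a virtual object."""
    d = direction / np.linalg.norm(direction)
    oc = center - origin
    proj = np.dot(oc, d)                 # closest approach along the ray
    if proj < 0:                         # object is behind the viewer
        return False
    closest2 = np.dot(oc, oc) - proj * proj
    return closest2 <= radius * radius

class DwellTrigger:
    """Fires the interaction event only after the gaze has stayed on
    the target for `dwell` seconds, a common trigger condition that
    avoids accidental activation."""
    def __init__(self, dwell=1.0):
        self.dwell, self.t0 = dwell, None
    def update(self, hit, now):
        if not hit:
            self.t0 = None               # gaze left the target: reset
            return False
        if self.t0 is None:
            self.t0 = now                # gaze just landed on the target
        return now - self.t0 >= self.dwell
```

Per frame, the teaching loop would call `gaze_hits` with the headset's eye ray, feed the result into `DwellTrigger.update`, and dispatch the interaction event when it returns true.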
  7. A urostomy nursing simulation teaching system based on mixed reality, characterized in that the system comprises: a simplification module for collecting scene point cloud data containing a human abdomen model, importing a urostomy three-dimensional model, and simplifying the urostomy three-dimensional model to obtain a simplified urostomy model; a registration module for registering the scene point cloud data with the simplified urostomy model so as to superimpose the simplified urostomy model on the corresponding position of the human abdomen model and obtain an initial teaching scene; an updating module for capturing surface depressions of the model in the initial teaching scene during nursing simulation teaching, acquiring the teaching scene point cloud corresponding to the current frame, and updating the simplified urostomy model in the initial teaching scene based on the teaching scene point cloud to obtain an updated teaching scene; and a teaching module for completing the urostomy nursing simulation teaching based on the updated teaching scene in combination with gaze, gesture and voice interaction.
  8. The mixed reality-based urostomy care simulation teaching system of claim 7, wherein the teaching module is specifically configured to: render the model in the updated teaching scene and access a multimodal interaction interface comprising gaze, gestures and voice; acquire the user's line-of-sight ray in real time and trigger an interaction event when the ray intersects a virtual object and a trigger condition is met; track the positions of the hand joint points, recognize preset gestures, and execute the corresponding operation when a specific gesture is detected and collides with a virtual object; recognize preset voice commands and, upon successful recognition, execute the corresponding teaching flow control or information display; and integrate these interactions to simulate a complete urostomy nursing flow, so as to complete the urostomy nursing simulation teaching.
  9. A computer comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the mixed reality-based urostomy care simulation teaching method according to any one of claims 1 to 6.
  10. A storage medium having stored thereon a computer program which, when executed by a processor, implements the mixed reality-based urostomy care simulation teaching method according to any one of claims 1 to 6.

Description

Urostomy nursing simulation teaching method and system based on mixed reality

Technical Field

The invention belongs to the technical field of medical teaching and mixed reality, and particularly relates to a urostomy nursing simulation teaching method and system based on mixed reality.

Background

Urostomy is a common surgical technique in urological surgery, and daily care of the postoperative stoma is important for preventing infection and improving patients' quality of life. However, traditional care teaching relies mainly on books, videos or silicone models, with the following disadvantages.

Limitations of traditional teaching and practice: apprentice-style training depends on the guidance of highly experienced teachers, trainees find it difficult to obtain sufficient practice opportunities, and the teaching styles and evaluation standards of different teachers vary, affecting the continuity and consistency of skill mastery. Beginners are often insufficiently practiced and may increase the risk of complications for patients.

Limitations of simulation training: physical models suffer from unrealistic anatomical structure, easy damage and low tactile fidelity, while high-fidelity simulators are expensive and difficult to popularize. Although virtual reality training can provide a three-dimensional environment, it is limited in reproducing clinical variability and complexity, and insufficient haptic feedback reduces operational realism and affects skill transfer.

Difficulty in visualizing anatomical structures: the structure of the abdominal wall around the stoma is complex, traditional teaching relies on two-dimensional images and verbal descriptions, and it is difficult to build a three-dimensional spatial understanding. Internal structures such as blood vessels and muscle layers cannot be observed directly, so students must accumulate experience through a large amount of practice, which prolongs the learning curve.

Deficiencies of existing mixed reality technology: although mixed reality has been applied to medical training, existing schemes have low registration accuracy and high latency in dynamic scenes, and virtual information drifts easily when a trainee moves the viewing angle or presses the model. Three-dimensional medical models are markedly inconsistent with RGB-D point cloud data, with large differences in data volume and noise that is difficult to control; the traditional ICP method is insufficiently robust to noise and outliers, and in particular struggles to register feature-poor surfaces such as the abdomen or deformable regions.

Disclosure of Invention

To solve the above technical problems, the invention provides a urostomy nursing simulation teaching method and system based on mixed reality, which address the technical problems in the prior art.
In a first aspect, the present invention provides the following technical solution, a urostomy care simulation teaching method based on mixed reality, comprising: acquiring scene point cloud data containing a human abdomen model, importing a urostomy three-dimensional model, and simplifying the urostomy three-dimensional model to obtain a simplified urostomy model; registering the scene point cloud data with the simplified urostomy model so as to superimpose the simplified urostomy model on the corresponding position of the human abdomen model and obtain an initial teaching scene; during nursing simulation teaching, capturing surface depressions of the model in the initial teaching scene, acquiring the teaching scene point cloud corresponding to the current frame, and updating the simplified urostomy model in the initial teaching scene based on the teaching scene point cloud to obtain an updated teaching scene; and completing the urostomy nursing simulation teaching based on the updated teaching scene in combination with gaze, gesture and voice interaction. Compared with the prior art, the invention registers the scene and the model, which solves the virtual-real drift problem in dynamic scenes; it effectively distinguishes credible from non-credible direction vectors, improving registration robustness and ensuring that the virtual model stably fits the real object when the trainee moves the viewing angle; the subsequent updating of the model point cloud can simulate skin-pressing deformation in real time, making the virtual stoma closer to real tissue both visually and haptically, so that trainees can practice repeatedly in a risk-free environment, greatly improving the realism and immersion of teaching; and by introducing curvature and neighborhood-area weighting, the model simplification reduces the number of triangular patches while retaining the characteristic features of the model.