EP-4139906-B1 - METHOD FOR VISUALIZING AT LEAST A ZONE OF AN OBJECT IN AT LEAST ONE INTERFACE


Inventors

  • EL BEHEIRY, Mohamed
  • MASSON, Jean-Baptiste

Dates

Publication Date
2026-05-13
Application Date
2022-03-16

Claims (16)

  1. Method implemented by computer means for visualizing at least a zone of an object in at least one interface, said method comprising the following steps:
     - obtaining at least one image of said zone, said image comprising at least one channel, said image being a 2-dimensional or 3-dimensional image comprising pixels or voxels, a value being associated to each channel of each pixel or voxel of said image, a representation of said image being displayed in the interface,
     - obtaining at least one annotation from a user, said annotation defining a group of selected pixels or voxels of said image,
     - calculating a transfer function based on said selected pixels or voxels and applying said transfer function to the values of each channel of the image,
     - updating said representation of the image in the interface, in which the colour and the transparency of the pixels or voxels of said representation are dependent on the transfer function,
     wherein said method comprises the following steps:
     - obtaining at least one 2-dimensional or 2D image of said zone, said 2D image comprising pixels and at least one channel, a value being associated to each channel of each pixel of said 2D image, a representation of said 2D image being displayed in a first interface,
     - obtaining at least one 3-dimensional or 3D image of said zone, said 3D image comprising voxels and at least one channel, a value being associated to each channel of each voxel of said 3D image, at least some of the voxels of the 3D image corresponding to some pixels of the 2D image, a representation of said 3D image being displayed in a second interface,
     - obtaining at least one annotation from a user, said annotation defining a group of selected pixels of said 2D image or a group of selected voxels of said 3D image,
     - propagating the selection of said group of pixels or voxels selected in the 2D or 3D image to the 3D or 2D image, respectively, by selecting the voxels or the pixels of said 3D or 2D image that correspond to the selected pixels or voxels of said 2D or 3D image, respectively,
     - calculating a first transfer function based on said selected pixels of said 2D image and applying said first transfer function to the values of each channel of the 2D image,
     - updating the representation of the 2D image in the first interface, in which the colour and the transparency of the pixels of said representation are dependent on the first transfer function,
     - calculating a second transfer function based on said selected voxels of said 3D image and applying said second transfer function to the values of each channel of the 3D image,
     - updating the representation of the 3D image in the second interface, in which the colour and the transparency of the voxels of said representation are dependent on the second transfer function,
     wherein the group of selected pixels or voxels of the corresponding image is updated by obtaining at least one additional annotation from a user through the at least one interface, the additional annotation being transferred from one interface to the other and the transfer function being recalculated at each additional annotation of the user and for each interface, leading to an update of the representations of the images in both interfaces in an interactive process.
  2. Method according to any of the preceding claims, wherein at least one of the transfer functions is calculated according to the following steps (an illustrative sketch of this calculation is given after the claims):
     - selecting a first and a second domain of interest A and B, each domain comprising a group of pixels or voxels based on said selected pixels,
     - creating a first feature tensor and a second feature tensor on the basis of the pixels or voxels of the first and second domains of interest A and B, respectively,
     - defining a statistical test that would differentiate the first domain A from the second domain B through the optimal Maximum Mean Discrepancy (MMD) of the statistics of the features associated to the first domain A and the second domain B,
     - defining, for each pixel or voxel of the image, the colour C of said pixel or voxel with the following equation:
       $$C(v) = \frac{f^*(v) - \min f^*(v)}{\max f^*(v) - \min f^*(v)}$$
       where $f^*(v)$ is the witness function of the value of the pixel or voxel v, defined by
       $$f^*(v) \propto \mu_P - \mu_Q = \frac{1}{m} \sum_{i=1}^{m} k(x_i, v) - \frac{1}{n} \sum_{j=1}^{n} k(y_j, v)$$
       where k is a kernel defining a value representative of the distance between the features associated to a pixel or voxel $x_i$ belonging to domain A or $y_j$ belonging to domain B and the features associated to the pixel or voxel v, with m the number of pixels or voxels in domain A and n the number of pixels or voxels in domain B,
     - defining, for each pixel or voxel of the image, the transparency T of said pixel or voxel with the following equation:
       $$T(v) = \frac{h_A(v) + h_B(v)}{Z_{A,B}}$$
       where $h_A(v)$ is the smoothed density of the features associated to voxel v of the first domain A with the kernel k, $h_B(v)$ is the smoothed density of the features associated to voxel v of the second domain B with the kernel k, and $Z_{A,B}$ is a normalising constant ensuring that the maximal transparency equals c, with c ≤ 1 a predetermined constant defining the maximal transparency factor.
  3. Method according to any of the preceding claims, wherein at least one of the transfer functions is calculated according to the following steps:
     - selecting a first and a second domain of interest A and B, each domain comprising a group of pixels or voxels based on said selected pixels,
     - creating a first feature tensor and a second feature tensor on the basis of the first and second domains of interest A and B, respectively,
     - sampling pixels or voxels in the first and second domains of interest A and B so as to have $n_A^* = n_B^*$ and $2 n_A^* \leq n_{max}$, where $n_A^*$ is the number of sampled pixels or voxels in the first domain A, $n_B^*$ is the number of sampled pixels or voxels in the second domain B, and $n_{max}$ is a predetermined value,
     - defining, for each pixel or voxel of the image, the colour C of said pixel or voxel with
       $$C(v) = \frac{f^*(v) - \min f^*(v)}{\max f^*(v) - \min f^*(v)}$$
       with $f^*(v) = (\beta p_A(v) + 1)\, g(v)$ the normalised product of the shifted probability for said pixel or voxel to belong to domain A by the value of one feature, g(v), of said pixel or voxel v,
     - defining, for each pixel or voxel of the image, the transparency T of said pixel or voxel with the following equation:
       $$T(v) = \frac{h_A(v) + h_B(v)}{Z_{A,B}}$$
       where $h_A(v) = (k * p_A)(v)$ is the smoothed density (convolution of the kernel with the density of features) of the features associated to voxel v of the first domain A with the kernel k, $h_B(v) = (k * p_B)(v)$ is the smoothed density of the features associated to voxel v of the second domain B with the kernel k, and $Z_{A,B}$ is a normalising constant ensuring that the maximal transparency equals c, with c ≤ 1 a predetermined constant defining the maximal transparency factor.
  4. Method according to claim 2 or 3, wherein each feature tensor defines, for each pixel of the corresponding domain of interest, at least one feature value selected from the following list of features (a sketch of some of these features is given after the claims):
     • the value of the pixel or voxel v,
     • $\nabla_l(v)$, the regularised gradient (over scale l) of the pixel or voxel values, wherein regularisation is performed by Gaussian convolution, $\nabla_l(v) = \nabla(G_l * I)(v)$, with $G_l$ the Gaussian of null averaged value and standard deviation l,
     • $S_l(v)$, the entropy of a patch of size l around pixel or voxel v,
     • $d(v) = (G_{l_1} * I)(v) - (G_{l_2} * I)(v)$, the difference of convolved images at pixel or voxel v, where $(l_1, l_2)$ are the two scales associated to the Gaussians and I is the image stack,
     • $\sigma_l(v)$, the standard deviation of the patch of size l centred on pixel or voxel v,
     • $KL_{l,m}(v)$, the Kullback-Leibler distance between the patch of size l and the surrounding patch of size l + m centred on pixel or voxel v,
     • $\tilde{\mu}_l(v)$, the median value of the voxels in the patch of size l centred on pixel or voxel v,
     • $\nabla \log(G_l * I)(v)$, the logarithmic derivative of the convolved image at pixel or voxel v,
     • $d_{p\text{-}UMAP,l,m}(v)$, the low-dimensional Euclidean distance, in the latent space generated by a parametric UMAP, between the patch of size l and the surrounding patch of size l + m centred on pixel or voxel v,
     • $(r, \theta)_{p\text{-}UMAP}(v)$, the polar coordinates of the pixel or voxel v in the latent space generated by a parametric UMAP centred on pixel or voxel v,
     • $S_{p\text{-}UMAP,l}(v)$, the convex hull surface of the domain of size l around pixel v.
  5. Method according to any of the claims 2 to 4, wherein said kernel k is selected from the following list of kernels (transcribed in an illustrative sketch after the claims):
     - $k_G(x, x') = \sigma^2 \exp\left(-\frac{(x - x')^2}{2 l^2}\right)$,
     - $k_{Per}(x, x') = \sigma^2 \exp\left(-\frac{2 \sin^2(\pi |x - x'| / p)}{l^2}\right)$,
     - $k_{lin}(x, x') = \sigma_b^2 + \sigma_v^2 (x - l)(x' - l)$,
     - $k_{cau}(x, x') = \sigma^2 \left(1 + \frac{(x - x')^2}{2 \alpha l^2}\right)^{-\alpha}$,
     - $k_{exp}(x, x') = \exp\left(-\left(\frac{r}{l}\right)^{\gamma}\right)$, with r the distance between x and x',
     where x and x' are features of the corresponding pixels or voxels, and $\{\sigma, l, p, \sigma_b, \sigma_v, \alpha, \gamma\}$ are hyper-parameters of the kernels, these parameters being predefined, or set automatically or by the user.
  6. Method according to any of the preceding claims, wherein the action generated on one of the first and second interfaces is transmitted to the other interface through at least one manager storing data into a memory, said data comprising at least one parameter representative of said action, the first representation and/or second representation being updated on the basis of the stored data and the set of original images.
  7. Method according to the preceding claim, wherein each manager implements at least one or each of the following generic functions (an illustrative sketch is given after the claims):
     - export data from a manager data storage to a format readable outside of an application,
     - import data from outside of the application to the manager data storage,
     - receive data from at least one interface, store said data into the manager data storage and update all other interfaces with a visual readout of this data inclusion,
     - remove data from at least one interface, remove said data from the manager data storage and update all other interfaces with a visual readout of this data removal,
     - update the visual readout of a given interface with the addition of a new visual element,
     - update the visual readout of a given interface with the removal of an existing visual element.
  8. Method according to the preceding claim, wherein each data element stored has a unique identifier associated to it which is used to ensure synchronization between each interface.
  9. Method according to any of the preceding claims, wherein the representation of the 3D image is obtained through volume ray casting methods.
  10. Method according to any of the preceding claims, wherein the first interface is displayed on a computer screen and the second interface is displayed on a computer screen and/or on a display of a virtual reality device.
  11. A computer program, comprising instructions to implement at least a part of the method according to any of the preceding claims when the program is executed by a processor.
  12. Computer device comprising:
     - input means for receiving at least one image of a zone of an object,
     - a memory for storing at least instructions of a computer program according to the preceding claim,
     - a processor accessing the memory for reading the aforesaid instructions and then executing the method according to any of the claims 1 to 10,
     - interface means for displaying the representation of the image obtained by executing said method.
  13. A computer-readable non-transient recording medium on which computer software is recorded to implement the method according to any of the claims 1 to 10, when the computer software is executed by a processor.
  14. A method of generating a 3D model of a patient's anatomical structure, wherein the method comprises:
     - implementing by computer means the method according to any one of claims 1 to 10 on a medical 3D-image of an object, wherein the object is a patient's anatomical structure(s) comprising a zone of medical interest and the medical 3D-image is a magnetic resonance imaging (MRI) image, a Computed Tomography (CT) scan image, a Positron Emission Tomography (PET) scan image or an image from numerically processed ultrasound recordings, and
     - displaying a 3D model of the patient's anatomical structure including the zone of medical interest.
  15. The method according to the preceding claim, wherein a user provides at least one annotation in the medical 3D-image, wherein the at least one annotation selects pixels or voxels in the zone of medical interest to improve visualisation of said zone and/or visualisation of the boundaries of the zone of interest, and/or the at least one annotation selects pixels or voxels outside the zone of medical interest to enable image transformation by cropping or deletion of interfering structures, such as surrounding tissues in the zone of interest, bones, muscles or blood vessels.
  16. The method according to claim 14 or 15, which is either performed on raw 3D image imaging data or performed on segmented 3D image data.
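
For illustration only, the following minimal NumPy sketch implements the transfer-function calculation of claim 2 using the Gaussian kernel of claim 5. The function names (`witness_transfer_function`, `gaussian_kernel`) and the choice of kernel are assumptions made for this sketch, not part of the claims; the normalisation of the transparency to the maximal factor c follows the wording of the claim.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0, l=1.0):
    # k_G(x, x') = sigma^2 exp(-|x - x'|^2 / (2 l^2)), applied row-wise
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return sigma ** 2 * np.exp(-d2 / (2 * l ** 2))

def witness_transfer_function(features, feats_A, feats_B, c=1.0):
    """Colour C(v) and transparency T(v) for every pixel/voxel v (claim 2).

    features : (V, F) feature vectors of all pixels or voxels,
    feats_A  : (m, F) feature tensor of the first domain of interest A,
    feats_B  : (n, F) feature tensor of the second domain of interest B,
    c        : predetermined maximal transparency factor, c <= 1.
    """
    K_A = gaussian_kernel(feats_A, features)   # k(x_i, v), shape (m, V)
    K_B = gaussian_kernel(feats_B, features)   # k(y_j, v), shape (n, V)
    h_A = K_A.mean(axis=0)                     # smoothed density of A's features at v
    h_B = K_B.mean(axis=0)                     # smoothed density of B's features at v
    f = h_A - h_B                              # witness function f*(v) ~ mu_P - mu_Q
    C = (f - f.min()) / (f.max() - f.min())    # colour: min-max normalisation
    T = c * (h_A + h_B) / (h_A + h_B).max()    # transparency, maximum equal to c
    return C, T
```

For a 3D image, `features` would be the flattened per-voxel feature tensor of claim 4, and C and T would be reshaped back onto the voxel grid before rendering.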
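
A sketch of how some of the feature values of claim 4 might be computed with SciPy for a 2D or 3D image stack I. The selection of features, the scale values and the helper name `feature_tensor` are assumptions; the entropy, Kullback-Leibler and parametric-UMAP features of the claim are omitted for brevity.

```python
import numpy as np
from scipy import ndimage

def feature_tensor(I, l=2.0, l1=1.0, l2=4.0):
    """Return a (V, F) array: one row of feature values per pixel/voxel.

    Assumes non-negative image intensities (typical of medical images).
    """
    smooth = ndimage.gaussian_filter(I, sigma=l)            # G_l * I
    feats = [
        I,                                                  # raw pixel/voxel value
        ndimage.gaussian_gradient_magnitude(I, sigma=l),    # regularised gradient over scale l
        ndimage.gaussian_filter(I, sigma=l1)
        - ndimage.gaussian_filter(I, sigma=l2),             # d(v): difference of convolved images
        ndimage.generic_filter(I, np.std, size=int(l)),     # sigma_l(v): patch standard deviation
        ndimage.median_filter(I, size=int(l)),              # median value in the patch of size l
        np.gradient(np.log(smooth + 1e-8), axis=0),         # logarithmic derivative (first axis)
    ]
    return np.stack([f.ravel() for f in feats], axis=1)
```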
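
The kernel family of claim 5, transcribed for scalar feature values x and x'. The default hyper-parameter values are placeholders, since the claim leaves {σ, l, p, σ_b, σ_v, α, γ} predefined, automatic or user-set.

```python
import numpy as np

def k_gaussian(x, xp, sigma=1.0, l=1.0):
    return sigma**2 * np.exp(-(x - xp)**2 / (2 * l**2))

def k_periodic(x, xp, sigma=1.0, l=1.0, p=1.0):
    return sigma**2 * np.exp(-2 * np.sin(np.pi * np.abs(x - xp) / p)**2 / l**2)

def k_linear(x, xp, sigma_b=1.0, sigma_v=1.0, l=0.0):
    return sigma_b**2 + sigma_v**2 * (x - l) * (xp - l)

def k_cauchy(x, xp, sigma=1.0, l=1.0, alpha=1.0):
    return sigma**2 * (1 + (x - xp)**2 / (2 * alpha * l**2))**(-alpha)

def k_gamma_exponential(x, xp, l=1.0, gamma=1.0):
    return np.exp(-(np.abs(x - xp) / l)**gamma)   # r = |x - x'|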
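
One way the manager of claims 6 to 8 could be structured; the class and method names are hypothetical, and a real implementation would also handle the import/export formats the claims mention.

```python
import uuid

class Manager:
    """Stores shared data elements and keeps all interfaces synchronized."""

    def __init__(self):
        self.store = {}        # manager data storage: unique id -> data element
        self.interfaces = []   # registered 2D/3D interfaces

    def receive(self, data, source):
        """Store data coming from one interface and update all the others."""
        uid = str(uuid.uuid4())              # unique identifier (claim 8)
        self.store[uid] = data
        for iface in self.interfaces:
            if iface is not source:
                iface.add_visual(uid, data)  # visual readout of the inclusion
        return uid

    def remove(self, uid):
        """Remove a data element and its visual readout from every interface."""
        self.store.pop(uid, None)
        for iface in self.interfaces:
            iface.remove_visual(uid)

    def export_data(self):
        """Export the storage to a format readable outside the application."""
        return dict(self.store)
```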

Description

The invention relates to a method for visualizing at least a zone of an object, more particularly at least a zone of a patient, in at least one interface.

Medicine takes advantage of medical imaging techniques to generate images of the interior of a patient's body for clinical analysis and medical intervention, as well as for visual representation of organs or tissues. Medical imaging aims to reveal internal structures of the body hidden, for example, by the skin and bones, in order to diagnose and treat diseases or to prepare surgery. Medical images, generated for example by Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET) or ultrasound, can be 2-dimensional (2D) or 3-dimensional (3D) images. A 3D image can be computed from a stack of 2D images or obtained directly. Each image comprises pixels (for a 2D image) or voxels (for a 3D image), a value being associated to each channel of each pixel or voxel of said image. An image may comprise only one channel, in the case of a monochromatic image for example. On the contrary, an image may comprise multiple channels, for example one for each primary colour (red, green, blue), in the case of a colour image. The terms voxel and pixel may be used, by extension or analogy, for any of the 2D or 3D images.

2D medical images are often slice-based representations that allow radiologists to perform diagnoses, measure structures of interest and assess treatment strategies. Surgeons, however, benefit from detailed, to-scale representations of patients as "avatars" or "digital twins" in a natural 3D viewing context such as that provided by virtual and augmented reality immersive visualization technologies.

Volume rendering of 3D medical images is used in several healthcare contexts, but the quality of the rendered image, along with the contrast between anatomical structures of interest, strongly depends on the type of transfer function that is applied to the image. A transfer function applied to an image aims to modify the representation or rendering of said image by modifying the colour (i.e. an emission characteristic) and the transparency (also called opacity, i.e. an absorption characteristic) of the pixels or voxels of said representation. Conventional software allows users to apply pre-defined transfer functions, which attribute optical properties to the pixels or voxels. Such pre-defined transfer functions are not always adapted to the analysis at hand. Manual adjustment of some hyper-parameters of the transfer functions is sometimes allowed by the software. However, manual design of said transfer function can be a tedious and time-consuming task, and does not necessarily lead to the best transfer function for the case concerned. An example is the software "3D Slicer": the video entitled "Volume Rendering in 3D Slicer | Introduction to Digital Preparation Video 9" by the author "The Virtual Paleontologist", URL https://www.youtube.com/watch?v=l8wlaCfYWG4, published online on 24-06-2020, shows the application and modification of a transfer function in a 3D view.

There is also a need to facilitate communication between medical specialists using different interfaces. In particular, radiologists are trained to use a 2D viewing interface, whereas surgeons use a 3D interface that accurately resembles the patient in the operating room. Such interaction could benefit from both kinds of expertise, facilitating disease diagnosis or surgery preparation.
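
Concretely, such a transfer function can be implemented as a lookup table that maps voxel values to colour (emission) and opacity (absorption), composited along each viewing ray during volume ray casting. The sketch below makes those assumptions; its names and the simple front-to-back compositing scheme are illustrative, not taken from the patent.

```python
import numpy as np

def apply_transfer_function(volume, tf_colour, tf_alpha):
    """Map voxel values in [0, 1] through 1D colour/opacity lookup tables.

    tf_colour: (K, 3) RGB table, tf_alpha: (K,) opacity table.
    """
    idx = np.clip((volume * (len(tf_alpha) - 1)).astype(int), 0, len(tf_alpha) - 1)
    return tf_colour[idx], tf_alpha[idx]    # per-voxel emission and absorption

def composite_ray(colours, alphas):
    """Front-to-back compositing of the samples taken along one ray."""
    out, transmittance = np.zeros(3), 1.0
    for colour, alpha in zip(colours, alphas):
        out += transmittance * alpha * colour
        transmittance *= 1.0 - alpha
        if transmittance < 1e-3:            # early ray termination
            break
    return out
```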
To these aims, the invention proposes a method implemented by computer means for visualizing at least a zone of an object in at least one interface, the steps of the method being specified in claim 1. The invention is applicable to medical images as well as to other types of images; it is not limited to visualizing at least a zone of a patient's body.

Contrary to the prior art, in the case of the invention the transfer function is not predefined or modified only by manually fine-tuning the hyper-parameters of said transfer function. Instead, annotations of the image are used to adapt the transfer function and thus the resulting representation of the image. This process is called semi-automatic definition or parametrization of the transfer function. Such ergonomic and dynamic modification of the transfer function is particularly efficient for quickly and optimally adapting the transfer function on the basis of the content of the image.

The interface may be a graphical user interface (GUI). The user may interact with the interface through at least one controller, for example a mouse, a keyboard and/or a virtual reality controller.

Said method may comprise the following steps: obtaining at least one 2-dimensional or 2D image of said zone, said 2D image comprising pixels and at least one channel, a value being associated to each channel of each pixel of said 2-dimensional image, a representation of said 2D image being