
EP-4738273-A1 - AUTOMATIC SYNCHRONISATION OF TWO INDEPENDENT IMAGE PROCESSING APPLICATIONS

EP 4738273 A1

Abstract

A computer-implemented method of image viewing and/or processing within an image viewing and/or processing environment, comprising the following steps:
  - obtaining screen capture content of at least part of an image display of the first processing application, wherein the image display is based on image data at least partially loaded into the first processing application;
  - automatically attempting to retrieve window characteristic information from the screen capture content based on at least one screen location of window characteristic information from the library of screen locations;
  - classifying the screen capture content:
    - as recognizable if window characteristic information can be retrieved;
    - as non-recognizable if the library of screen locations currently does not contain any screen locations and/or if no window characteristic information can be retrieved;
  - in case the screen capture content is classified as recognizable:
    - providing the window characteristic information to the second processing application;
    - synchronizing the window of the second processing application based on the received window characteristic information of the first processing application;
  - in case the screen capture content is classified as non-recognizable:
    - performing a learning process to determine at least one screen location of window characteristic information on the image display based on changes happening over time on the image display and/or based on user input, and storing the at least one screen location of window characteristic information in the library of screen locations.
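The recognize-or-learn control flow summarized in the abstract can be sketched as follows. This is an illustrative sketch only: the `WindowInfo` fields, the dictionary stand-in for screen capture content, and the trivial "learning" scan are assumptions for the example, not part of the application, which does not fix a particular recognition technique (e.g. OCR of on-screen annotations) at this point.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

Location = Tuple[int, int]  # (x, y) screen location of an annotation

@dataclass
class WindowInfo:
    """Hypothetical subset of window characteristic information."""
    series_number: int
    zoom_factor: float

def retrieve_at(capture: Dict[Location, str], loc: Location) -> Optional[WindowInfo]:
    """Stand-in for the recognition algorithm: parse an on-screen
    annotation like 'Se:3 Zoom:1.5' found at a known screen location."""
    text = capture.get(loc)
    if not text or not text.startswith("Se:"):
        return None
    se, zoom = text.split()
    return WindowInfo(int(se[3:]), float(zoom[5:]))

def classify_and_sync(capture: Dict[Location, str],
                      library: List[Location],
                      second_app: dict) -> str:
    """One pass of the method: try known locations, else learn new ones."""
    for loc in library:
        info = retrieve_at(capture, loc)
        if info is not None:                        # classified recognizable
            second_app["series"] = info.series_number
            second_app["zoom"] = info.zoom_factor   # synchronize second window
            return "recognizable"
    # Classified non-recognizable: learning process. Here trivially a scan of
    # the capture for annotation-like text; the application instead derives
    # locations from changes over time and/or user input.
    library.extend(loc for loc, txt in capture.items() if txt.startswith("Se:"))
    return "non-recognizable"
```

On a first pass with an empty library the content is non-recognizable and the annotation location is learned; a second pass over the same capture then succeeds and synchronizes the second application's window state.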

Inventors

  • GOTMAN, SHLOMO

Assignees

  • Koninklijke Philips N.V.

Dates

Publication Date
2026-05-06
Application Date
2024-10-31

Claims (15)

  1. A computer-implemented method of image viewing and/or processing within an image viewing and/or processing environment, the environment comprising at least a first processing application (11) being configured to receive, process and display image data (33, 34) in a window (31, 32), a second processing application (12) being configured to receive, process and display image data in a window, and a window characteristic recognition program (7) having access to a library (9) of screen locations of window characteristic information and comprising or having access to a window characteristic recognition algorithm (8) that is configured to recognize window characteristic information at a particular screen location, wherein the window characteristic information is information suitable to identify the characteristic of a window in which image data is displayed, the method comprising the following steps:
     - obtaining, by the window characteristic recognition program, screen capture content of at least part of an image display of the first processing application, wherein the image display is based on image data at least partially loaded into the first processing application;
     - automatically attempting to retrieve window characteristic information, by the window characteristic recognition program, from the screen capture content based on at least one screen location of window characteristic information from the library of screen locations;
     - classifying the screen capture content, by applying the window characteristic recognition algorithm:
       - as recognizable if window characteristic information can be retrieved;
       - as non-recognizable if the library of screen locations currently does not contain any screen locations and/or if no window characteristic information can be retrieved;
     in case the screen capture content is classified as recognizable:
     - providing the window characteristic information to the second processing application;
     - synchronizing the window of the second processing application based on the received window characteristic information of the first processing application;
     in case the screen capture content is classified as non-recognizable:
     - performing a learning process, by the window characteristic recognition program, to determine at least one screen location of window characteristic information on the image display based on changes happening over time on the image display and/or based on user input, and storing the at least one screen location of window characteristic information in the library of screen locations.
  2. The method according to any of the preceding claims, wherein the step of synchronizing the window of the second processing application comprises the step of: - synchronizing by adjusting a zoom factor and/or panning setting for the image data (33, 34) within the window (31, 32) resulting in a similar field of view for the window of the first processing application (11) and the window of the second processing application (12).
  3. The method according to any of the preceding claims, wherein the step of synchronizing the window of the second processing application comprises the step of: - synchronizing by windowing the window (31, 32), resulting in a similar grey-level mapping window.
  4. The method according to any of the preceding claims, wherein the method further comprises the step of: - displaying the window (31) of the first processing application and the window (32) of the second processing application simultaneously.
  5. The method according to any of the preceding claims, wherein the step of performing the learning process is performed while the first processing application is running.
  6. The method according to any of the preceding claims, wherein the step of performing the learning process further comprises: - tracking changes on the image display that are due to user input and/or registering user input.
  7. The method according to any one of the preceding claims, wherein the step of performing the learning process comprises: - tracking a user input concerning a selection of image data (33, 34); - retrieving window characteristic information from the selected image data; and - searching for the window characteristic information on the screen capture content via the window characteristic recognition algorithm (8) to locate the screen location of the corresponding window characteristic information on the screen capture content.
  8. The method according to any one of the preceding claims, wherein the step of obtaining screen capture content comprises obtaining the screen capture content multiple times and/or continuously and wherein the step of performing the learning process comprises analysing one or multiple sections of the screen capture content that are changing over time and determining that or whether the at least one screen location of window characteristic information is within a changing section of the screen capture content.
  9. The method according to any of the preceding claims, wherein the step of performing the learning process comprises searching for the window characteristic information via the window characteristic recognition algorithm (8) in the one or multiple changing sections of the screen capture content.
  10. The method according to any one of the preceding claims, wherein the step of performing the learning process comprises: - providing a user interface that allows a user an option to select window characteristic information on the image display of the first processing application (11) to input at least one window characteristic information selection; - receiving the window characteristic information selection by the window characteristic recognition program (7); and - determining by the window characteristic recognition program and based on the window characteristic information selection, the at least one screen location of window characteristic information.
  11. The method according to any one of the preceding claims, wherein the window characteristic information comprises at least one or more of the following: - an image number (23), an image location, an image description, a slice number, a slice location (24), a slice description, a slice thickness (25), a series number (22), a series description, an accession number (21), a zoom factor (26), a window centre value (27), a window width value (28), a slice number in a series (29); and/or - wherein the image data comprises medical image data, in particular comprising at least one or more of the following: computed tomography image data, photon counting image data, spectral computed tomography image data, magnetic resonance image data, nuclear imaging data, positron emission tomography data, single photon emission computed tomography imaging data, ultrasound image data, digital pathology data and/or digital X-ray radiogrammetry data.
  12. The method according to any one of the preceding claims, wherein the method comprises the steps of: - storing the at least one screen location of window characteristic information in the library (9) of screen locations during the step of performing the learning process; - applying again the previous steps starting with the step of obtaining screen capture content or starting with the step of attempting to retrieve window characteristic information.
  13. A computer program (6) comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method according to any one of the preceding claims.
  14. An image visualisation and/or processing computing system (100) comprising a processing unit (1) adapted to perform the steps of the method according to any of the claims 1-12.
  15. The image visualization and/or processing computing system according to claim 14, comprising a data storage (4, 5) that comprises a library (9) of screen locations.
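Claims 2 and 3 synchronize the second window by matching the field of view (zoom factor and panning) and the grey-level windowing (window centre and width values, claim 11). A minimal sketch of those two adjustments; the dictionary window state and the linear centre/width mapping below are illustrative assumptions for the example, not taken from the claims:

```python
from typing import Tuple

def sync_field_of_view(target: dict, zoom: float, pan: Tuple[int, int]) -> None:
    """Claim 2: copy the zoom factor and panning setting so that both
    windows show a similar field of view."""
    target["zoom"] = zoom
    target["pan"] = pan

def apply_windowing(pixel: float, centre: float, width: float) -> float:
    """Claim 3: linear grey-level window mapping of a raw pixel value to
    a display range [0, 1], given window centre and window width."""
    lo, hi = centre - width / 2, centre + width / 2
    if pixel <= lo:
        return 0.0
    if pixel >= hi:
        return 1.0
    return (pixel - lo) / (hi - lo)
```

For example, with centre 40 and width 400 (a typical soft-tissue-style setting), a raw value of 40 maps to mid-grey, while values at or below -160 and at or above 240 are clipped to black and white respectively.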

Description

FIELD OF THE INVENTION

The invention relates to a computer-implemented method performed within an image viewing and/or processing environment, a corresponding computer program, and a corresponding image visualisation and/or processing computing system.

BACKGROUND OF THE INVENTION

Clinical diagnostic reading of medical images is typically performed by a user on a visualization system such as a PACS (Picture Archiving and Communication System) or a similar system (for example an advanced visualization workstation). While providing basic common viewing capabilities, general visualization systems, such as PACS, often lack the ability for advanced visualization and processing techniques. In addition, they may not be able to leverage proprietary, modality-vendor-dependent information stored with the images, or proprietary image formats, such as computed tomography (CT) spectral-based images (SBI). Corresponding functionalities and/or further applications may, however, be provided by different, specific programs, e.g. programs provided by different vendors. To provide these additional capabilities, a possible solution is to provide an add-on application, e.g. a processing application, for example in the form of a software application that runs in the PACS environment (e.g., on a PACS workstation), as a client in a client-server architecture, or as a web page. These processing applications may for example provide advanced visualization capabilities and/or computer-aided diagnostic capabilities that go beyond the general visualization system. Accordingly, two different applications may be running in the same environment, but as separate programs. However, the two applications may in many cases not be capable of sharing any information. This is particularly problematic from a technical standpoint in case the two processing applications are from two different vendors.
It would, however, be desirable to enable such sharing of information, in particular information on the characteristics of a window in which image data is displayed by the two different processing applications. For example, two image processing applications from two different vendors may run on the same workstation but not share any window characteristic information, and are thus not synchronized. For instance, the two applications can display different images from the same series, using different zoom/pan, different windowing, etc. This means that if the user wants to compare the images, this needs to be done separately in each application, jumping from one application to the other, which adds a significant burden to the reading workflow. Sharing window characteristic information between processing applications may allow a user to experience an integrated environment where it is easier to perform technical tasks. This could help the user, such as a radiologist, save time and improve workflow efficiency while performing these tasks. One solution could be to provide a non-standard interface, such as an application programming interface (API) or a command Uniform Resource Locator (URL), to provide synchronization of windows between two processing applications. However, since there is no widely accepted standard for such interfaces, i.e. the interface is non-standard, this typically requires integration of two or more processing applications with each other. One processing application may for example be a general processing application provided in a PACS environment, and the other an advanced visualization processing application provided by a different vendor. Integration of such processing applications from different vendors may be technically problematic, or at least take significant time to adapt one or both processing applications. This may be referred to as "tight integration" between applications.
However, this approach is often not possible or technically very difficult to implement. It may take a lot of development time and typically requires adaptation of the integration each time either of the two applications is updated to a new version. Since these interfaces are not standard, this requires integrating the processing application with each PACS vendor separately. In addition, this method is used for one-time invocation of a third-party (non-PACS) application, rather than for real-time synchronization. Another approach could be to universally use one uniform standard for all applications, such as CCOW (the Clinical Context Object Workgroup standard). However, a uniform standard used by all applications has not been established successfully so far and may be difficult or even impossible to establish in the future, for various reasons such as technical ones. It would therefore be desirable to find a simpler and more practicable solution to the above-mentioned problems, or at least an alternative. It is desirable to enable a functional and easier-to-implement solution for synchr