EP-4736117-A2 - METHODS FOR ALIGNMENT OF SEQUENTIALLY STAINED AND CONSECUTIVE BIOLOGICAL IMAGES
Abstract
The disclosure provides methods for aligning a portion of a first image of a tissue with a portion of a second image of the tissue by obtaining one or more local transformations based on the portion of the first image and/or the portion of the second image, wherein each of the one or more local transformations is: (i) for aligning a patch of the first image to a patch of the second image matching the patch of the first image, and (ii) associated with the patch of the first image and/or with the patch of the second image; and determining a transformation for aligning the portion of the first image to the portion of the second image according to the one or more local transformations. Moreover, a corresponding device and software implementing the functionality of the methods are provided.
Inventors
- BEN-DAVID, ODED
- ARBEL, ELAD
- BEN-DOR, AMIR
- AVIEL-RONEN, SARIT
Assignees
- Agilent Technologies, Inc.
Dates
- Publication Date: 2026-05-06
- Application Date: 2024-06-28
Claims (43)
20220031-02-039062-00155

CLAIMS

1. A method for aligning a portion of a first image of a tissue with a portion of a second image of the tissue, comprising: obtaining one or more local transformations based on the portion of the first image and/or the portion of the second image, wherein each of the one or more local transformations is: for aligning a patch of the first image to a patch of the second image matching the patch of the first image, and associated with the patch of the first image and/or with the patch of the second image; and determining a transformation for aligning the portion of the first image to the portion of the second image according to the one or more local transformations.

2. The method according to claim 1, wherein said obtaining the one or more local transformations is performed by selecting the one or more local transformations from a set of local transformations.

3. The method according to claim 2, wherein each transformation from the set of local transformations is stored in association with a location within the first image and/or the second image from which said local transformation has been derived.

4. The method according to claim 3, wherein said selecting the one or more local transformations includes selecting each of the one or more local transformations based on a condition; wherein the condition requires that: a location associated with said local transformation is at a distance from the portion of the first image smaller than a neighborhood threshold; and/or the location associated with said local transformation is at a distance from the portion of the second image smaller than the neighborhood threshold.

5.
The method according to claim 3 or 4, wherein said selecting the one or more local transformations comprises selecting one local transformation whose associated location is closest to the portion of the first image and/or to the portion of the second image; and the transformation for aligning the portion of the first image to the portion of the second image is determined to be said one local transformation.

6. The method according to claim 3 or 4, wherein said selecting the one or more local transformations comprises selecting K local transformations whose associated locations are closest to the portion of the first image and/or to the portion of the second image; K is larger than 1 and smaller than or equal to the number of local transformations in the set of local transformations; and the transformation for aligning the portion of the first image to the portion of the second image is determined to be a function of the K local transformations.

7. The method according to claim 6, wherein said function is a weighted average and the weights are determined according to a distance between the location associated with the respective transformation and the location of the portion of the first image and/or the portion of the second image.

8. The method according to claim 3 or 4, wherein said selecting the one or more local transformations comprises selecting K local transformations individually for each sample from a set of samples included in the portion of the first image; K is larger than 1 and smaller than or equal to the number of local transformations in the set of local transformations; and a transformation for aligning said sample of the portion of the first image to a sample of the portion of the second image is determined to be a function of the K local transformations.

9.
The method according to claim 8, wherein said function is a weighted average and the weights are determined according to a distance between the location associated with the respective transformation and the location of said sample.

10. The method according to claim 8 or 9, wherein transformations for aligning respective samples belonging to the portion of the first image but not included in the set of samples are determined as a function of one or more transformations determined for one or more respective samples in the set of samples.

11. The method according to any one of claims 1 to 10, wherein the portion of the first image and/or the portion of the second image is a field of view ("FOV") of the first image and/or the second image; and the FOV is selected by a user.

12. The method according to claim 11, wherein the FOV is selected by using a graphical user interface ("GUI").

13. The method according to claim 11 or 12, further comprising: upon selection of a new FOV, automatically performing said determining the transformation for aligning the FOV of the first image to the FOV of the second image according to the one or more local transformations, and aligning the FOV of the first image to the FOV of the second image.

14. The method according to claim 13, wherein said determining the transformation is determining of a transformation common to all samples of the new FOV.

15. The method according to claim 13, wherein said determining the transformation is determining of a transformation common to a part of the new FOV that was not part of the preceding FOV.

16. The method according to any one of claims 11 to 15, including displaying the first image and/or the second image, wherein the selection of the new FOV is performed by moving the displayed first image and/or second image, including panning, zooming in/out, and/or moving to pre-selected coordinate(s).

17.
The method according to any one of claims 2 to 16, wherein determining the set of local transformations comprises:
- obtaining a set of pairs of matching key points in the first image and the second image;
- for each pair in the set of pairs of matching key points:
  - determining a local transformation for aligning a first patch located at the key point of said pair in the first image with a second patch located at the key point of said pair in the second image, and
  - including the local transformation in the set of local transformations.

18. The method according to claim 17, wherein the pairs in the set of pairs of matching key points are obtained in the first image at a first resolution and in the second image at the first resolution; and said determining a local transformation comprises: extracting the first patch located at the key point of said pair in the first image at a second resolution, and extracting the second patch located at the key point of said pair in the second image at the second resolution, wherein the second resolution is higher than the first resolution.

19. The method according to claim 18, wherein said determining a local transformation further comprises: determining pairs of patch key points matching in the first patch and in the second patch; and deriving said local transformation based on the pairs of patch key points as a Euclidean transformation.

20.
The method according to any one of claims 17 to 19, wherein said obtaining a set of pairs of matching key points in the first image and the second image comprises: obtaining a superset of pairs of matching key points; selecting, among the pairs of matching key points in the superset, a best matching pair of key points; moving the best matching pair of key points from the superset to said set; and repeating at least once the steps of: selecting, among the pairs of matching key points in the superset, a next best matching pair of key points that fulfills a predefined condition, wherein the predefined condition includes a condition according to which the next best matching pair of key points is to be located farther than a threshold distance from any pair of key points moved to said set; and moving the next best matching pair of key points from the superset to said set.

21. The method according to any one of claims 17 to 19, wherein said obtaining a set of pairs of matching key points in the first image and the second image comprises receiving at least one of the set of pairs of matching key points from a user.

22. The method according to claim 21, wherein the receiving the at least one of the set of pairs of matching key points from the user includes: displaying the first image and the second image so that the first image is pre-aligned with the second image; providing the user a graphical user interface for marking at least one position on the displayed first image and second image; detecting the at least one position within the displayed first image and second image marked using the interface; and based on the detected at least one position, determining the at least one of the set of pairs of matching key points.

23.
The method according to any one of claims 17 to 19, wherein said obtaining a set of pairs of matching key points in the first image and the second image comprises: dividing at least a portion of the first and/or second image into a pattern of image units; and determining the set of pairs of matching key points such that one pair of matching key points is included per image unit.

24. The method according to any one of claims 2 to 23, wherein said determining the set of local transformations comprises: determining pairs of matching portions of the first image and the second image based on a similarity of image intensities between the first image and the second image; deriving, for at least one of the pairs of matching portions, a local transformation for aligning a portion of said at least one of the pairs in the first image to the matching portion of said at least one of the pairs in the second image; and including the local transformation in the set of local transformations.

25. The method according to any one of claims 2 to 24, wherein said determining the set of local transformations comprises: providing a user with a graphical user interface allowing the user to: display the first image and/or the second image at a user-selectable resolution, apply at least one of a desired rotation, translation, and/or scaling to the displayed first image and/or second image, and/or add a local transformation resulting from the at least one of desired rotation, translation, and/or scaling to the set of local transformations.

26. The method according to any one of claims 2 to 25, wherein said determining a transformation for aligning the portion of the first image to the portion of the second image comprises: determining a global transformation for aligning the first image to the second image, and calculating a function of the local transformations in the set of local transformations, wherein the local transformations are represented by matrices.

27.
The method according to claim 26, wherein said function is an average.

28. The method according to any one of claims 17 to 27, wherein, in said determining the set of local transformations, local transformations that include a rotation by more than a threshold angle are not included in the set; and the threshold angle is larger than 0.

29. The method according to any one of claims 1 to 28, comprising, before obtaining said one or more local transformations: scaling the first image to match a ratio between a unit of length and a sample size in at least one direction, the at least one direction being a vertical and/or horizontal direction; and/or transforming the first image, wherein the transformation comprises flipping the first image around a vertical axis and/or flipping the first image around a horizontal axis.

30. The method according to any one of claims 1 to 29, comprising, before obtaining said one or more local transformations: distinguishing, in the first image, a foreground from a background; detecting, within the foreground, a largest connected region; and performing a rough alignment of the first image to the second image at a third resolution lower than a full resolution of the first image by aligning the largest connected region to the second image.

31. The method according to claim 30, wherein the distinguishing the foreground from the background is performed by an entropy-threshold-based method which outputs a bitmap indicating, for each unit of the first image, whether said unit belongs to the background or to the foreground.

32.
The method according to claim 30 or 31, wherein said rough alignment of the largest connected region to the second image comprises: detecting, in the foreground of the first image at said third resolution, key points and feature descriptors associated respectively with the key points; detecting, in the second image at said third resolution, key points and feature descriptors associated respectively with the key points; obtaining rough-resolution pairs of key points by matching the key points and the feature descriptors detected in the first image with the key points and the feature descriptors detected in the second image; selecting a subset of the rough-resolution pairs of key points with the best matching; and deriving a rough transformation for said rough alignment based on the selected subset of the pairs of key points.

33. The method according to claim 32, wherein the rough transformation is a Euclidean transformation.

34. The method according to any one of claims 1 to 33, wherein the first image and the second image are images of the tissue, sequentially scanned and/or differently stained.

35. The method according to any one of claims 1 to 34, further comprising, after applying said transformation for aligning the portion of the first image to the portion of the second image, applying an additional alignment by optical flow to the first image.

36. A computer program stored on a non-transitory medium and comprising code instructions which, when executed on one or more processors, perform the steps of the method according to any one of claims 1 to 35.

37.
An apparatus for aligning a portion of a first image of a tissue with a portion of a second image of the tissue, comprising processing circuitry configured to: obtain one or more local transformations based on the portion of the first image and/or the portion of the second image, wherein each of the one or more local transformations is: for aligning a patch of the first image to a patch of the second image matching the patch of the first image, and associated with the patch of the first image and/or with the patch of the second image; and determine a transformation for aligning the portion of the first image to the portion of the second image according to the one or more local transformations.

38. The apparatus according to claim 37, further comprising: a storage device storing the one or more local transformations; an input module configured to obtain the portion of the first image and/or the portion of the second image; and an output module configured to output images including the portion of the first image and/or the portion of the second image, wherein the processing circuitry is configured to obtain the one or more local transformations from the storage device.

39. The apparatus according to claim 37 or 38, wherein the input module includes at least one key and/or a touch screen and/or a voice input device; and the output module includes a screen and/or the touch screen.

40. The apparatus according to claim 39, wherein the portion of the first image and/or the portion of the second image is a field of view ("FOV") of the first image and/or the second image; and the input module is configured to enable a user to select the FOV.

41. The apparatus according to claim 40, wherein the input module and the output module are at least in part implemented by a graphical user interface ("GUI").

42.
The apparatus according to claim 40 or 41, wherein: the processing circuitry is configured to: upon selection of a new FOV by a user using the input module, automatically perform said determining the transformation for aligning the FOV of the first image to the FOV of the second image according to the one or more local transformations, and obtain an aligned FOV of the first image by aligning the FOV of the first image to the FOV of the second image; and the output module is configured to output the aligned FOV.

43. The apparatus according to any one of claims 40 to 42, wherein the output module is configured to display the first image and/or second image, and the input module is configured to receive an input from the user for said selecting of the new FOV by moving the displayed first image and/or second image, wherein moving comprises panning, zooming in/out, and/or moving to pre-selected coordinate(s).
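The distance-weighted combination of the K nearest stored local transformations (as recited in claims 6 and 7) can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: the function name `fov_transform`, the inverse-distance weighting scheme, and the 2x3 affine-matrix representation of a local transformation are all assumptions made for the example.

```python
# Hypothetical sketch of claims 6-7: determine a transformation for a field of
# view (FOV) as a distance-weighted average of the K stored local transforms
# whose associated locations are closest to the FOV. Not the patent's code.
import numpy as np

def fov_transform(fov_center, locations, transforms, k=3, eps=1e-9):
    """Return a 2x3 affine matrix for `fov_center`.

    locations:  (N, 2) array of (x, y) points at which the local
                transformations were derived (claim 3's stored locations)
    transforms: (N, 2, 3) array of 2x3 affine matrices
    k:          number of nearest local transformations to combine (claim 6)
    """
    locations = np.asarray(locations, dtype=float)
    transforms = np.asarray(transforms, dtype=float)
    # Distances from each stored location to the FOV center.
    d = np.linalg.norm(locations - np.asarray(fov_center, dtype=float), axis=1)
    k = min(k, len(d))
    nearest = np.argsort(d)[:k]          # indices of the K closest locations
    w = 1.0 / (d[nearest] + eps)         # inverse-distance weights (claim 7)
    w /= w.sum()
    # Element-wise weighted average of the matrices; averaging rotation
    # entries element-wise is a simplification that is only reasonable for
    # small relative rotations between neighboring local transforms.
    return np.einsum("i,ijk->jk", w, transforms[nearest])
```

With K = 1 this degenerates to claim 5 (take the single closest local transformation); for example, querying at a stored location returns that location's own matrix, while querying midway between two equally distant locations returns the element-wise mean of their matrices.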
Description
Methods for Alignment of Sequentially Stained and Consecutive Biological Images

Cross-Reference to Related Application

[0001] This patent application claims priority to U.S. Provisional Patent Application No. 63/511,499, filed on June 30, 2023, which is herein incorporated by reference in its entirety.

Field

[0002] The present disclosure relates generally to methods and devices for use in detecting targets in biological tissues aided by imaging and, in particular, to image alignment.

Description of Related Art

[0003] Histological specimens are frequently disposed upon glass slides as a thin slice of patient tissue fixed to the surface of each glass slide. Using a variety of chemical or biochemical processes, one or several colored compounds may be used to stain the tissue to differentiate cellular constituents, which can be further evaluated using microscopy. Brightfield slide scanners are conventionally used to digitally analyze these slides.

[0004] When analyzing tissue samples on a microscope slide, staining the tissue or certain parts of the tissue with a colored or fluorescent dye can aid the analysis. The ability to visualize or differentially identify microscopic structures is frequently enhanced using histological stains. Hematoxylin and eosin (H&E) stains are the most commonly used stains in light microscopy for histological samples.

[0005] In addition to H&E stains, other stains or dyes have been applied to provide more specific staining and a more detailed view of tissue morphology. Immunohistochemistry ("IHC") stains have high specificity; they use a peroxidase substrate or alkaline phosphatase ("AP") substrate, providing a uniform staining pattern that appears to the viewer as a homogeneous color with intracellular resolution of cellular structures, e.g., membrane, cytoplasm, and nucleus.
Formalin-Fixed Paraffin-Embedded ("FFPE") tissue samples, metaphase spreads, or histological smears are typically analyzed by staining on a glass slide, where a particular biomarker, such as a protein or nucleic acid of interest, can be stained with H&E and/or with a colored dye, hereafter "chromogen" or "chromogenic moiety." IHC staining is a common tool in the evaluation of tissue samples for the presence of specific biomarkers. In situ hybridization ("ISH") may be used to detect target nucleic acids in a tissue sample. ISH may employ nucleic acids labeled with a directly detectable moiety, such as a fluorescent moiety, or an indirectly detectable moiety, such as a moiety recognized by an antibody, which can then be utilized to generate a detectable signal.

[0006] Compared to other detection techniques, such as radioactivity, chemoluminescence, or fluorescence, chromogens generally suffer from much lower sensitivity but have the advantage of a permanent, plainly visible color which can be visually observed, such as with bright-field microscopy. However, more substrates with additional properties which may be useful in various applications, including multiplexed assays such as IHC or ISH assays, are needed.

[0007] Additional capabilities for advanced image analysis of histological slides, which may be used, for example, in digital pathology, may improve detection and assessment of specific molecular markers, tissue features, organelles, and the like. Achieving high accuracy has been a challenging issue not only for machine-learning-based algorithms but also for experienced professionals.

[0008] One option for increasing accuracy is to use serial sections of tissue, where a different biological marker is stained for in each of two slides, which are then digitized and aligned, and the staining pattern on one slide may be used to interpret the other slide.
One problem with this approach is that the discrete digitized layers or sections may be spatially different (e.g., in terms of orientation, focus, signal strength, and/or the like) from one another. While image alignment may be broadly correct or accurate for larger or more distinct objects (such as bulk tumor detection), it may not have cell-level or organelle-level precision, since the same cells are not necessarily present on the two slides or images. Similar challenges may arise in the analysis of sequentially stained tissue images. This can create challenges for such biotechnological image analysis, which may not be well suited for identifying single cells, organelles, and/or molecular markers, or the like.

SUMMARY

[0009] According to a first embodiment, a method is provided for aligning a portion of a first image of a tissue with a portion of a second image of the tissue, comprising: (a) obtaining one or more local transformations based on the portion of the first image and/or the portion of the second image, wherein each of the one or more local transformations is: (i) for aligning a patch of the first image to a patch