CN-110992394-B - Object tracking using image segmentation


Abstract

Object tracking using image segmentation is disclosed. A captured image of a sample is obtained. A segmented image is generated based on the captured image. The segmented image indicates segments corresponding to objects of interest. One or more target objects are identified from the objects of interest in the segmented image. An object of interest may be identified that is most similar in location and/or shape to the target object shown in a previous image. Alternatively, an object of interest associated with a connection vector most similar to a connection vector connecting target objects in the previous image may be identified. A motion vector is determined from the target object's position in the previous image to the target object's position in the segmented image. A field of view of a microscope is moved relative to the sample according to the motion vector to capture another image of the sample.
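The location-similarity variant of the workflow summarized above can be sketched as a minimal loop, assuming segmented objects of interest are reduced to (x, y) centroids; the function name and data layout are illustrative only, not part of the disclosed system:

```python
import math

def track_by_location(prev_pos, centroids):
    """Pick the object of interest whose centroid is closest to the
    target's position in the previous image (location similarity),
    then derive the motion vector used to shift the field of view."""
    # Highest location similarity == smallest Euclidean distance.
    new_pos = min(centroids, key=lambda c: math.dist(c, prev_pos))
    motion_vector = (new_pos[0] - prev_pos[0], new_pos[1] - prev_pos[1])
    return new_pos, motion_vector

# The microscope stage would then be shifted by `motion_vector`
# before capturing the next image of the sample.
new_pos, mv = track_by_location((10.0, 12.0), [(50.0, 3.0), (11.0, 13.0)])
```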

Inventors

  • P. Potuzek
  • E. Kolkmaz
  • R. Schoenmarx

Assignees

  • FEI Company

Dates

Publication Date
2026-05-05
Application Date
2019-09-27
Priority Date
2018-10-03

Claims (18)

  1. A method for object tracking in a solid specimen, comprising: obtaining a first image corresponding to the specimen, the first image showing a first cross-sectional surface of the specimen; identifying a first location in the first image corresponding to a target object; physically cutting a thin slice from a block face of the specimen; obtaining a second image corresponding to the specimen, wherein the second image is captured by a microscope, the second image showing a second cross-sectional surface of the specimen; applying an image segmentation technique to the second image to obtain a segmented image, wherein the segmented image indicates (a) a first set of segments corresponding to objects of interest and (b) a second set of segments not corresponding to any object of interest; determining a particular one of the objects of interest shown in the segmented image that is associated with a highest similarity score with respect to the target object shown in the first image as the target object shown in the segmented image; identifying a second location in the segmented image corresponding to the target object; determining a motion vector from the first location in the first image to the second location in the segmented image; and moving a field of view of the microscope relative to the specimen according to the motion vector to capture a third image corresponding to the specimen; wherein the method is performed by at least one device comprising a hardware processor.
  2. The method as recited in claim 1, further comprising: after moving the field of view of the microscope relative to the specimen according to the motion vector, capturing the third image corresponding to the specimen, the third image showing the second cross-sectional surface of the specimen.
  3. The method as recited in claim 1, further comprising: obtaining the third image corresponding to the specimen, the third image showing the second cross-sectional surface of the specimen; wherein the third image is captured by the microscope after the field of view of the microscope has been moved relative to the specimen according to the motion vector; and compiling a set of images tracking the target object, the set of images including the third image but not the second image.
  4. The method as recited in claim 1, further comprising: after moving the field of view of the microscope relative to the specimen according to the motion vector, capturing the third image corresponding to the specimen, the third image showing a third cross-sectional surface of the specimen.
  5. The method as recited in claim 1, further comprising: obtaining the third image corresponding to the specimen, the third image showing a third cross-sectional surface of the specimen; wherein the third image is captured by the microscope after the field of view of the microscope has been moved relative to the specimen according to the motion vector; and compiling a set of images tracking the target object, the set of images including the second image and the third image.
  6. The method of claim 1, wherein: the first image corresponding to the specimen shows the specimen at a first time interval; and the second image corresponding to the specimen shows the specimen at a second time interval subsequent to the first time interval.
  7. The method of claim 1, wherein determining the particular one of the objects of interest shown in the segmented image that is associated with the highest similarity score with respect to the target object shown in the first image as the target object shown in the segmented image comprises: determining that the particular object of interest is closest to the first location in the first image.
  8. The method of claim 1, wherein determining the particular one of the objects of interest shown in the segmented image that is associated with the highest similarity score with respect to the target object shown in the first image as the target object shown in the segmented image comprises: determining that a first shape of the particular object of interest is most similar to a second shape of the target object shown in the first image.
  9. The method of claim 1, wherein the first image is captured by the microscope.
  10. The method of claim 1, wherein the first image is a segmented version of another image captured by the microscope.
  11. The method of claim 1, wherein the image segmentation technique comprises using an Artificial Neural Network (ANN).
  12. A method for object tracking in a solid specimen, comprising: obtaining a first image corresponding to the specimen, the first image showing a first cross-sectional surface of the specimen; identifying a first set of vectors connecting a plurality of target objects shown in the first image; identifying a first location in the first image corresponding to the plurality of target objects; physically cutting a thin slice from a block face of the specimen; obtaining a second image corresponding to the specimen, wherein the second image is captured by a microscope, the second image showing a second cross-sectional surface of the specimen; applying an image segmentation technique to the second image to obtain a segmented image, wherein the segmented image indicates (a) a first set of segments corresponding to objects of interest and (b) a second set of segments not corresponding to any object of interest; identifying subsets of the objects of interest shown in the segmented image; identifying sets of vectors respectively corresponding to the subsets of the objects of interest shown in the segmented image; determining a particular set of vectors, of the sets of vectors, associated with a minimum difference from the first set of vectors; determining a particular subset of the objects of interest, from among the subsets of the objects of interest, connected by the particular set of vectors as the plurality of target objects; identifying a second location in the segmented image corresponding to the plurality of target objects; determining a motion vector from the first location in the first image to the second location in the segmented image; and moving a field of view of the microscope relative to the specimen according to the motion vector to capture a third image corresponding to the specimen; wherein the method is performed by at least one device comprising a hardware processor.
  13. The method of claim 12, wherein: the first image corresponding to the specimen shows the specimen at a first time interval; and the second image corresponding to the specimen shows the specimen at a second time interval subsequent to the first time interval.
  14. The method as recited in claim 13, further comprising: after moving the field of view of the microscope relative to the specimen according to the motion vector, capturing a third image corresponding to the specimen, the third image showing the specimen at a third time interval subsequent to the second time interval.
  15. The method as recited in claim 13, further comprising: obtaining a third image corresponding to the specimen, the third image showing the specimen at a third time interval subsequent to the second time interval; wherein the third image is captured by the microscope after the field of view of the microscope has been moved relative to the specimen according to the motion vector; and compiling a set of images tracking the plurality of target objects, the set of images including the second image and the third image.
  16. A non-transitory computer-readable medium comprising instructions that, when executed by one or more hardware processors, cause performance of the method of any one of claims 1-15.
  17. A system for object tracking in a solid specimen, comprising: at least one device comprising a hardware processor; wherein the system is configured to perform the method of any one of claims 1-15.
  18. A system for object tracking in a solid specimen, comprising one or more means for performing the method of any one of claims 1-15.
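The connection-vector matching of claim 12 can be illustrated with a brute-force sketch: score every ordered subset of candidate objects against the reference group's connection vectors and keep the best. Exhaustive permutation is only feasible for small candidate sets, and all names here are hypothetical:

```python
import itertools
import math

def match_by_connection_vectors(ref_positions, candidates):
    """Among all ordered subsets of candidate object positions with the
    same size as the reference target group, pick the subset whose
    connection vectors (differences between consecutive positions)
    differ least from the reference group's connection vectors."""
    def vectors(points):
        return [(b[0] - a[0], b[1] - a[1]) for a, b in zip(points, points[1:])]

    ref_vecs = vectors(ref_positions)

    def difference(subset):
        return sum(math.dist(u, v) for u, v in zip(ref_vecs, vectors(subset)))

    return min(itertools.permutations(candidates, len(ref_positions)),
               key=difference)

# Two targets connected by the vector (2, 0) in the previous image:
matched = match_by_connection_vectors(
    [(0.0, 0.0), (2.0, 0.0)],
    [(10.0, 10.0), (5.0, 5.0), (7.0, 5.0)],
)
```

The pair (5, 5) and (7, 5) reproduces the reference connection vector exactly, so it is selected as the plurality of target objects.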

Description

Object tracking using image segmentation

Technical Field

The present disclosure relates to specimen image acquisition. In particular, the present disclosure relates to object tracking using image segmentation.

Background

Microscopy is a field of technology that uses microscopes to view objects that are difficult to see with the naked eye. Branches of microscopy include, for example, optical microscopy, charged particle (electron and/or ion) microscopy, and scanning probe microscopy. Charged particle microscopes use an accelerated charged particle beam as the illumination source. Types of electron microscopes include, for example, transmission electron microscopes, scanning transmission electron microscopes, and focused ion beam microscopes.

The components of a Transmission Electron Microscope (TEM) include an electron optical column, a vacuum system, the necessary electronics (power supplies for the lenses that focus and deflect the beam, and a high-voltage generator for the electron source), and control software. The electron optical column contains an electron gun on one end and a viewing device (e.g., a camera) on the other end. An electron beam emitted from the electron gun passes through a thin specimen; the transmitted electrons are collected, focused, and projected onto the viewing device. The entire electron path from gun to camera is under vacuum.

Similar to a TEM, the components of a Scanning Electron Microscope (SEM) include an electron optical column, a vacuum system, the necessary electronics, and control software. The electron gun is located on one end of the electron optical column, and the specimen is located on the other end. The electron beam from the electron gun is focused into a fine spot on the specimen surface and scanned over the specimen in a rectangular raster.
The intensities of the various signals resulting from interactions between the beam electrons and the specimen are measured and stored in computer memory. The stored values are then mapped as variations in brightness on the image display.

Scanning Transmission Electron Microscopy (STEM) is similar to TEM in that an image is formed by electrons passing through a sufficiently thin specimen. However, unlike TEM, STEM focuses the electron beam into a fine spot and scans it over the specimen in a raster illumination system.

Focused Ion Beam (FIB) microscopy is similar to SEM; however, FIB microscopy uses an ion beam rather than an electron beam. Examples of ion beam sources include Liquid Metal Ion Sources (LMIS), such as gallium ion sources.

A microscope is associated with various configurable microscope parameters. Examples of microscope parameters of an SEM include acceleration voltage (the voltage at which electrons are accelerated as they pass through the electron optical column), convergence angle of the electron beam, beam current, spot size (the diameter of the beam spot on the specimen), dwell time, and resolution. Different values of the various microscope parameters result in images of different quality and properties. For example, higher magnification requires a smaller spot size, while higher signal-to-noise ratio and contrast resolution require a greater beam current. However, reducing the spot size also reduces the beam current.

Various methods may be used to obtain a three-dimensional (3D) rendering of a specimen. As one example, block face scanning electron microscopy involves mounting a specimen in the vacuum chamber of a microscope, capturing an image of the block face of the specimen, cutting a thin slice from the block face, raising the specimen so that the new block face returns to the focal plane of the microscope, and capturing another image of the new block face.
The process is repeated until the entire 3D volume has been captured. As another example, serial section scanning electron microscopy involves cutting the specimen into thin sections, mounting the first section in the vacuum chamber of the microscope, capturing an image of its surface, then mounting the next section and capturing another image. The process is repeated until images of all sections have been captured.

The term "cross-sectional surface" as used herein refers to the portion of the specimen captured in each 2D image, e.g., a block face in block face scanning electron microscopy or a section surface in serial section scanning electron microscopy. The 2D images of the cross-sectional surfaces are stacked together to generate a 3D rendering of the specimen. The 3D rendering may be presented at a user interface, printed onto paper, and/or otherwise provided to a user and/or another application. The 3D rendering may be presented as a 3D model and/or animation.
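The block face acquisition loop described above can be sketched as follows; the three callables stand in for microscope and microtome control and are hypothetical, not part of any real instrument API:

```python
def acquire_volume(capture_image, cut_slice, raise_specimen, num_slices):
    """Repeatedly image the block face, cut a thin slice, and raise the
    specimen so the new face returns to the focal plane. The resulting
    stack of 2D images forms the 3D rendering of the specimen."""
    volume = []
    for _ in range(num_slices):
        volume.append(capture_image())  # 2D image of the current block face
        cut_slice()                     # remove a thin slice from the face
        raise_specimen()                # bring the new face back into focus
    return volume

# Toy stand-ins: each "image" is just a label for the current face.
state = {"z": 0}
def capture(): return f"face-{state['z']}"
def cut(): state["z"] += 1
def raise_(): pass

stack = acquire_volume(capture, cut, raise_, 3)
```

Stacking the returned 2D images along the slicing axis yields the 3D volume from which the rendering is generated.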