US-12620075-B2 - Targeted application of deep learning to automated visual inspection equipment
Abstract
In a method for enhancing accuracy and efficiency in automated visual inspection of vessels, a vessel containing a sample is oriented such that a line scan camera has a profile view of an edge of a stopper of the vessel. A plurality of images of the edge of the stopper is captured by the line scan camera while spinning the vessel, where each image of the plurality of images corresponds to a different rotational position of the vessel. A two-dimensional image of the edge of the stopper is generated based on at least the plurality of images, and pixels of the two-dimensional image are processed, by one or more processors executing an inference model that includes a trained neural network, to generate output data indicative of a likelihood that the sample is defective.
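For illustration only, the following minimal Python sketch shows the pipeline the abstract describes: stacking the per-rotation line captures into a two-dimensional image and passing its normalized pixel intensities to a stand-in inference model. All function names, shapes, and parameters here are hypothetical assumptions, not taken from the patent.

```python
# Minimal sketch (not the patented implementation): assemble per-rotation
# line-scan captures into a single 2-D image of the stopper edge, then
# score it with a stand-in inference model. Names are hypothetical.
import numpy as np

def assemble_stopper_image(line_images: list[np.ndarray]) -> np.ndarray:
    """Stack 1-D line captures (one per rotational position) into a 2-D image.

    Each element of `line_images` is a 1-D array of pixel intensities from
    the line scan camera; stacking them column-wise "unrolls" the stopper
    edge over a full revolution of the vessel.
    """
    return np.stack(line_images, axis=1)  # shape: (pixels_per_line, n_positions)

def infer_defect_likelihood(image: np.ndarray, model) -> float:
    """Apply a trained model to normalized pixel intensities.

    `model` stands in for the trained neural network; here it is any
    callable mapping a flattened, normalized image to a score in [0, 1].
    """
    x = image.astype(np.float32) / 255.0  # scale intensities to [0, 1]
    return float(model(x.ravel()))

# Toy usage: 2048-pixel lines captured at 3600 rotational positions.
lines = [np.random.randint(0, 256, 2048, dtype=np.uint8) for _ in range(3600)]
image = assemble_stopper_image(lines)
likelihood = infer_defect_likelihood(image, model=lambda x: x.mean())
print(image.shape, round(likelihood, 3))
```

Stacking along the rotational axis means a single two-dimensional image covers the stopper's full circumference, which is what allows one inference pass per vessel.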
Inventors
- Neelima Chavali
- Brenda A. Torres
- Thomas C. Pearson
- Manuel A. Soto
- Jorge Delgado Torres
- Roberto C. Alvarado Rentas
- Javier O. Tapia
- Sandra Rodriguez-Toledo
- Eric R. Flores-Acosta
- Osvaldo Perez
Assignees
- AMGEN INC.
Dates
- Publication Date
- 2026-05-05
- Application Date
- 2020-11-06
Claims (17)
- 1. A method for enhancing accuracy and efficiency in automated visual inspection of vessels, the method comprising: orienting a vessel containing a liquid sample such that a line scan camera has a side view of an edge of a stopper of the vessel; spinning the vessel; capturing, by the line scan camera and while spinning the vessel, a plurality of images of the edge of the stopper, wherein each image of the plurality of images is a side view image and corresponds to a different rotational position of the vessel; wherein the line scan camera is angled upward relative to the horizontal plane to match or approximate a slope of the stopper; generating, by one or more processors and based on at least the plurality of images, a two-dimensional image of the edge of the stopper; and processing, by one or more processors executing an inference model that includes a trained neural network, pixels of the two-dimensional image to generate output data indicative of a likelihood that the liquid sample is defective, wherein the output data is indicative of whether the liquid sample includes one or more objects of a particular type or types, and wherein the trained neural network is configured to discriminate between gas-filled bubbles and particles in the liquid sample.
- 2. The method of claim 1, further comprising: causing, by one or more processors and based on the output data, the vessel to be selectively conveyed to a designated reject area.
- 3. The method of claim 1, wherein processing the pixels of the two-dimensional image includes applying intensity values associated with different pixels, or other values derived from the intensity values, to different nodes of an input layer of the trained neural network.
- 4. The method of claim 1, wherein the vessel is a syringe, the stopper is a plunger, and the edge of the stopper is an edge of a plunger dome that contacts the liquid sample.
- 5. The method of claim 1, wherein orienting the vessel includes one or both of: conveying the vessel using a motorized rotary table or starwheel; and inverting the vessel such that the stopper is beneath the liquid sample.
- 6. The method of claim 1, wherein the line scan camera is a first line scan camera, the plurality of images is a first plurality of images, the vessel is a first vessel, and the two-dimensional image is a first two-dimensional image, and wherein the method further comprises: while orienting the first vessel, also orienting a second vessel such that a second line scan camera has a side view of an edge of a stopper of the second vessel; while spinning the first vessel, spinning the second vessel; while capturing the first plurality of images, capturing, by the second line scan camera and while spinning the second vessel, a second plurality of images of the edge of the stopper of the second vessel, wherein each image of the second plurality of images is a side view image and corresponds to a different rotational position of the second vessel; and generating a second two-dimensional image based on at least the second plurality of images.
- 7. The method of claim 1, further comprising: prior to processing the pixels of the two-dimensional image, training the neural network using labeled two-dimensional images of stopper edges of vessels.
- 8. The method of claim 7, comprising training the neural network using labeled two-dimensional images of vessels containing liquid samples that include different types, numbers, sizes and positions of objects.
- 9. An automated visual inspection system comprising: a line scan camera; conveying means for orienting a vessel containing a liquid sample such that the line scan camera has a side view of an edge of a stopper of the vessel; spinning means for spinning the vessel; and processing means for causing the line scan camera to capture, while the spinning means spins the vessel, a plurality of images of the edge of the stopper, wherein each image of the plurality of images is a side view image and corresponds to a different rotational position of the vessel, wherein the line scan camera is angled upward relative to the horizontal plane to match or approximate a slope of the stopper, generating, based on at least the plurality of images, a two-dimensional image of the edge of the stopper of the vessel, and processing, by executing an inference model that includes a trained neural network, pixels of the two-dimensional image to generate output data indicative of whether the liquid sample is acceptable, wherein the output data is indicative of whether the liquid sample includes one or more objects of a particular type or types, and wherein the trained neural network is configured to discriminate between gas-filled bubbles and particles in the liquid sample.
- 10. The automated visual inspection system of claim 9, wherein the processing means processes the pixels of the two-dimensional image at least by applying intensity values associated with different pixels, or other values derived from the intensity values, to different nodes of an input layer of the trained neural network.
- 11. The automated visual inspection system of claim 9, wherein the vessel is a syringe, the stopper is a plunger, and the edge of the stopper is an edge of a plunger dome that contacts the liquid sample.
- 12. The automated visual inspection system of claim 9, wherein one or both of: (i) the conveying means includes a motorized rotary table or starwheel, and orients the vessel at least by conveying the vessel using the motorized rotary table or starwheel; and (ii) the conveying means inverts the vessel such that the stopper is beneath the liquid sample.
- 13. The automated visual inspection system of claim 9, wherein: the line scan camera is a first line scan camera, the plurality of images is a first plurality of images, the vessel is a first vessel, the liquid sample is a first liquid sample, the conveying means is a first conveying means, the spinning means is a first spinning means, the two-dimensional image is a first two-dimensional image, and the output data is first output data; the automated visual inspection system further comprises a second line scan camera, a second conveying means, and a second spinning means; the second conveying means is for, while the first conveying means orients the first vessel, orienting a second vessel such that the second line scan camera has a side view of an edge of a stopper of the second vessel; the second spinning means is for spinning the second vessel while the first spinning means spins the first vessel; and the processing means is further for causing the second line scan camera to capture a second plurality of images of the edge of the stopper of the second vessel while the first line scan camera captures the first plurality of images, generating, based on at least the second plurality of images, a second two-dimensional image of the edge of the stopper of the second vessel, and processing, by executing the inference model, pixels of the second two-dimensional image to generate second output data indicative of whether the second liquid sample is acceptable.
- 14. An automated visual inspection system comprising: a line scan camera; sample positioning hardware configured to orient a vessel containing a liquid sample such that the line scan camera has a side view of an edge of a stopper of the vessel, and to spin the vessel while so oriented; and a memory storing instructions that, when executed by one or more processors, cause the one or more processors to cause the line scan camera to capture, while the vessel is spinning, a plurality of images of the edge of the stopper, wherein each image of the plurality of images is a side view image and corresponds to a different rotational position of the vessel, wherein the line scan camera is angled upward relative to the horizontal plane to match or approximate a slope of the stopper, generate, based on at least the plurality of images, a two-dimensional image of the edge of the stopper of the vessel, and process, by executing an inference model that includes a trained neural network, pixels of the two-dimensional image to generate output data indicative of whether the liquid sample is acceptable, wherein the output data is indicative of whether the liquid sample includes one or more objects of a particular type or types, and wherein the trained neural network is configured to discriminate between gas-filled bubbles and particles in the liquid sample.
- 15. The automated visual inspection system of claim 14, wherein the instructions cause the one or more processors to process the pixels of the two-dimensional image at least by applying intensity values associated with different pixels, or other values derived from the intensity values, to different nodes of an input layer of the trained neural network.
- 16. The automated visual inspection system of claim 14, wherein the vessel is a syringe, the stopper is a plunger, and the edge of the stopper is an edge of a plunger dome that contacts the liquid sample.
- 17. The automated visual inspection system of claim 14, wherein one or both of: (i) the sample positioning hardware includes a motorized rotary table or starwheel, and orients the vessel at least by conveying the vessel using the motorized rotary table or starwheel; and (ii) the sample positioning hardware inverts the vessel such that the stopper is beneath the liquid sample.
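Claims 3, 10, and 15 recite applying pixel intensity values, or values derived from them, to different nodes of the trained network's input layer. The sketch below illustrates one plausible reading of that mapping using a tiny feedforward network; the architecture and weights are random placeholders assumed for illustration, not trained parameters from the patent.

```python
# Hedged sketch of the input-layer mapping in claims 3, 10, and 15: each
# pixel's normalized intensity feeds one input-layer node. The tiny network
# is illustrative only; its weights are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def forward(image: np.ndarray, w1, b1, w2, b2) -> float:
    # One input-layer node per pixel, fed the normalized intensity value.
    x = image.astype(np.float32).ravel() / 255.0
    h = np.maximum(w1 @ x + b1, 0.0)             # hidden layer, ReLU
    logit = w2 @ h + b2
    return float(1.0 / (1.0 + np.exp(-logit)))   # defect likelihood in (0, 1)

# Toy 32x32 crop of a stopper-edge image; real inputs would be far larger.
img = rng.integers(0, 256, (32, 32))
w1, b1 = rng.normal(0, 0.05, (16, 32 * 32)), np.zeros(16)
w2, b2 = rng.normal(0, 0.05, 16), 0.0
print(forward(img, w1, b1, w2, b2))
```

The claims also allow "other values derived from the intensity values" at the input nodes; in this sketch that would simply mean replacing the division by 255 with any other derivation step before the first matrix product.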
Description
FIELD OF DISCLOSURE

The present application relates generally to automated visual inspection (AVI) systems for pharmaceutical or other products, and more specifically to techniques for detecting and distinguishing particles and other objects (e.g., bubbles) in vessels filled with samples (e.g., solutions).

BACKGROUND

In certain contexts, such as quality control procedures for manufactured drug products, it is necessary to examine samples (e.g., vessels/containers such as syringes or vials, and/or their contents such as fluid or lyophilized drug products) for defects. The acceptability of a particular sample, under the applicable quality standards, may depend on metrics such as the type and/or size of container defects (e.g., chips or cracks), or the type, number and/or size of undesired particles within a drug product (e.g., fibers), for example. If a sample has unacceptable metrics, it may be rejected and/or discarded. To handle the quantities typically associated with commercial production of pharmaceuticals, the defect inspection task has increasingly become automated.

However, automated detection of particulates in solution presents a special challenge within the pharmaceutical industry. High detection accuracy is generally difficult to achieve, and becomes even more difficult as higher viscosity solutions inhibit particle motion, which can otherwise be indicative of the particle type. For protein-based products with formulations that release gases that promote the formation of bubbles, conventional particle detection techniques can result in a particularly high rate of false rejects. For example, such techniques may have difficulty distinguishing these bubbles (which may cling to the vessel) from heavy particles that tend to settle/rest against a portion of the vessel (e.g., against a plunger of a syringe filled with a solution).

Moreover, the specialized equipment used to assist in automated defect inspection has become very large, very complex, and very expensive. A single piece of commercial line equipment may include numerous different AVI stations that each handle different, specific inspection tasks. As just one example, the Bosch® Automatic Inspection Machine (AIM) 5023 commercial line equipment, which is used for the fill-finish inspection stage of drug-filled syringes, includes 14 separate visual inspection stations, with 16 general inspection tasks and numerous cameras and other sensors. As a whole, such equipment may be designed to detect a broad range of defects, including container integrity defects such as large cracks or container closures, cosmetic container defects such as scratches or stains on the container surface, and defects associated with the drug product itself such as liquid color or the presence of foreign particles. Due to the above-noted challenges associated with particle detection and characterization, however, such equipment can require redundancies between AVI stations. In the case of the Bosch® AIM 5023 line equipment, for example, the relatively poor performance of a “stopper edge” inspection station (for detecting and distinguishing heavy particles resting on the dome of a syringe plunger) necessitates that particle inspection also be performed at another, “stopper top” AVI station with additional cameras, in order to achieve acceptable overall levels of particle inspection accuracy.
This increases the complexity and cost of the equipment, and/or requires that the “stopper top” AVI station be adapted to perform multiple inspection tasks rather than being optimized for a single task (e.g., detecting defects in the stopper itself).

SUMMARY

Embodiments described herein relate to systems and methods in which deep learning is applied to a particular type of AVI station (e.g., within commercial line equipment that may include multiple AVI stations) to synergistically provide substantial improvements to accuracy (e.g., far fewer false rejects and/or false positives). Additionally or alternatively, the described systems and methods may allow advantageous modifications to other AVI stations (e.g., within the same commercial line equipment), such as by allowing other AVI stations to focus exclusively on other tasks, and/or by eliminating other AVI stations entirely. In particular, deep learning is applied to an AVI station that utilizes one or more line scan cameras (e.g., CMOS line scan camera(s)) to detect and distinguish objects (e.g., gas-filled bubbles versus glass and/or other particles) that are resting or otherwise positioned on or near an edge of a stopper of a vessel containing a sample (e.g., a liquid solution drug product). For example, the AVI station may utilize the line scan camera(s) to detect and distinguish objects that are positioned on or near the surface of a syringe plunger dome in contact with a liquid sample within the syringe. The line scan camera(s) may capture multiple line images as the AVI station rotates/spins the vessel at least one revolution (360 degrees).
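As a rough illustration of the acquisition geometry just described, the number of line captures covering at least one full revolution follows directly from the camera's line rate and the vessel's spin speed. The figures below are assumed for illustration; the patent does not disclose specific rates.

```python
# Back-of-envelope sketch (assumed numbers, not from the patent): how many
# line captures cover at least one full 360-degree revolution for a given
# spin speed and camera line rate.
import math

def lines_per_revolution(line_rate_hz: float, spin_rpm: float) -> int:
    """Line captures acquired during one full revolution of the vessel."""
    seconds_per_rev = 60.0 / spin_rpm
    return math.ceil(line_rate_hz * seconds_per_rev)

# Example: a 20 kHz line rate with the vessel spinning at 300 rpm yields
# 4000 lines per revolution, i.e. one line per 0.09 degrees of rotation.
n = lines_per_revolution(line_rate_hz=20_000.0, spin_rpm=300.0)
print(n, 360.0 / n)
```

Under these assumptions, the stacked lines from one revolution form the two-dimensional "unrolled" image of the stopper edge that the inference model then processes.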