KR-102962942-B1 - System and method for 3D scanning of a moving object longer than the field of view

KR 102962942 B1

Abstract

The present invention provides a system and method for using an area scan sensor of a vision system in conjunction with an encoder or other knowledge of motion to capture accurate measurements of an object longer than a single field of view (FOV) of the sensor. This provides a simple method for processing the full range of an object for dimensional measurement purposes by identifying features/edges of an object tracked from image to image. The logic automatically determines whether the object is longer than the FOV and causes a series of image acquisition snapshots to occur while the moving/transported object remains within the FOV until the object is no longer within the FOV. At this point, acquisition is stopped and individual images are combined into segments of the full image. These images can be processed to derive the full dimensions of the object based on input application details.
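Read as pseudocode, the acquisition loop described in the abstract (trigger snapshots while the moving object remains in the FOV, stop when it leaves) might be sketched as follows. This is a hypothetical illustration only: the `sensor`, `encoder`, and `presence_detector` interfaces and the `step_mm` parameter are illustrative assumptions, not the patent's actual implementation.

```python
def acquire_segments(sensor, encoder, presence_detector, step_mm):
    """Grab area-scan snapshots every `step_mm` of conveyor travel
    while the object remains in the field of view (hypothetical API)."""
    segments = []
    # Wait until the presence detector reports an object entering the FOV.
    while not presence_detector.object_present():
        pass
    last_pos = encoder.position_mm()
    segments.append(sensor.snapshot())  # initial snapshot at the trigger plane
    # Keep acquiring while any part of the object is still in the FOV;
    # each snapshot fires after a fixed amount of encoder-measured motion.
    while presence_detector.object_present():
        if encoder.position_mm() - last_pos >= step_mm:
            segments.append(sensor.snapshot())
            last_pos += step_mm
    return segments
```

Choosing `step_mm` smaller than the available ROI length guarantees overlap between consecutive snapshots, which is what lets features be tracked from image to image.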

Inventors

  • Carey, Ben R.
  • Parrott, Andrew
  • Liu, Yugang
  • Chiang, Gilbert

Assignees

  • Cognex Corporation

Dates

Publication Date
2026-05-08
Application Date
2021-02-18
Priority Date
2020-02-18

Claims (20)

  1. A vision system having a 3D camera assembly arranged with an area scan sensor, wherein a vision system processor receives 3D data from images of an object acquired within the field of view (FOV) of the 3D camera assembly, the object is transported through the FOV in a transport direction, the object defines a total length between its opposing edges in the transport direction that is longer than the FOV, and the FOV defines an available region of interest (ROI), the vision system comprising: a dimension measurement processor that measures the total length based on motion tracking information derived from the movement of the object through the FOV, combined with a plurality of 3D images of the object acquired sequentially by the 3D camera assembly with a predetermined amount of transport motion between the plurality of 3D images; and a presence detector associated with the FOV that provides a presence signal as the object is located adjacent to the FOV; wherein, in response to the presence signal, the dimension measurement processor is arranged to determine whether the object appears in more than one image as the object moves in the transport direction; wherein, in response to identified feature information of the object, the dimension measurement processor is arranged to determine whether the object is longer than the FOV as the object moves in the transport direction; wherein the dimension measurement processor is arranged to determine a length LR of the available ROI based on the motion tracking information, the length LR of the available ROI being greater than the predetermined amount of transport motion between the 3D images, and the dimension measurement processor is arranged to determine an overlap of the 3D images so that the opposing edges are included in the overlapped 3D images; and wherein the vision system processor combines the identified feature information of the object from successive image acquisitions by the 3D camera assembly to generate collective feature data, so as to determine the total dimensions of the object in a manner that does not combine separate individual images into a whole image.
  2. The vision system of claim 1, wherein the vision system processor is arranged to acquire a series of image acquisition snapshots while the object remains within the FOV and until the object moves out of the FOV, in response to the total length of the object being longer than the FOV.
  3. The vision system of claim 2, wherein the vision system processor is arranged to derive overall attributes of the object from the collective feature data based on input application data, the overall attributes including at least one of object classification, object dimensions, distortion, and object volume.
  4. The vision system of claim 3, further comprising an object handling process that, based on the overall attributes, performs an operation on the object including at least one of redirecting the object, removing the object, issuing a warning, and correcting distortion of the object.
  5. The vision system of any one of claims 1 to 4, wherein the object is transported by a conveyor or by manual operation.
  6. The vision system of claim 5, wherein the motion tracking information is generated by an encoder operably connected to the conveyor.
  7. The vision system of any one of claims 1 to 4, wherein the presence signal is used by the dimension measurement processor to determine the continuity of the object between each of the plurality of 3D images as the object moves in the transport direction.
  8. The vision system of claim 7, wherein the plurality of 3D images are acquired by the 3D camera assembly with a predetermined overlap between them, and the system further comprises a removal process that uses the motion tracking information to remove the overlap sections from the object dimensions to determine the total length.
  9. The vision system of claim 8, further comprising an image removal process that removes the last of the plurality of 3D images when a previous one of the plurality of 3D images includes the rear edge of the object and the presence signal is asserted.
  10. The vision system of any one of claims 1 to 4, wherein the dimension measurement processor is arranged to use the identified feature information of the object to determine the continuity of the object between each of the plurality of 3D images as the object moves in the transport direction.
  11. The vision system of any one of claims 1 to 4, wherein the dimension measurement system defines a minimum spacing between objects in the plurality of 3D images, and multiple objects below the minimum spacing are treated as a single object with missing 3D image data.
  12. The vision system of any one of claims 1 to 4, wherein the vision system processor is arranged to generate collective feature data for the object including (a) out-of-length limit data, (b) out-of-width limit data, (c) over-height limit data, (d) out-of-volume limit data, (e) liquid volume, (f) classification, (g) the amount of actual imaged data of the object versus the expected imaged data of the object (QTY), (h) positional features of the object, or (i) detection of damage to the object.
  13. A method for measuring the dimensions of an object using a vision system having a 3D camera assembly arranged with an area scan sensor, in which a vision system processor receives 3D data from images of an object acquired within the field of view (FOV) of the 3D camera assembly, the object is transported through the FOV in a transport direction, the object defines a total length between its opposing edges in the transport direction that is longer than the FOV, and the FOV defines an available region of interest (ROI), the method comprising: measuring the total length based on motion tracking information derived from the transport of the object through the FOV, combined with a plurality of 3D images of the object acquired sequentially by the 3D camera assembly with a predetermined amount of transport motion between the plurality of 3D images; generating a presence signal as the object is positioned adjacent to the FOV and, in response to the presence signal, determining whether the object appears in more than one image as the object moves in the transport direction; determining, in response to identified feature information of the object, whether the object is longer than the FOV as the object moves in the transport direction; determining a length LR of the available ROI based on the motion tracking information, the length LR of the available ROI being greater than the predetermined amount of transport motion between the 3D images, including determining an overlap of the 3D images such that the opposing edges are included in the overlapped 3D images; and combining the identified feature information of the object from successive image acquisitions by the 3D camera assembly to generate collective feature data, so as to determine the total dimensions of the object in a manner that does not combine separate individual images into a whole image.
  14. The method of claim 13, further comprising acquiring a series of image acquisition snapshots while the object remains within the FOV and until the object moves out of the FOV, in response to the total length of the object being longer than the FOV.
  15. The method of claim 14, further comprising deriving overall attributes of the object from the collective feature data based on input application data, the overall attributes including at least one of object classification, object dimensions, distortion, and object volume.
  16. The method of claim 15, further comprising performing, based on the overall attributes, an operation on the object including at least one of redirecting the object, removing the object, issuing an alarm, and correcting distortion of the object.
  17. The method of any one of claims 13 to 16, further comprising using the presence signal to determine the continuity of the object between each of the plurality of 3D images as the object moves in the transport direction.
  18. The method of any one of claims 13 to 16, wherein combining the identified feature information comprises generating (a) out-of-length limit data, (b) out-of-width limit data, (c) over-height limit data, (d) out-of-volume limit data, (e) liquid volume, (f) classification, (g) the amount of actual imaged data of the object versus the expected imaged data of the object (QTY), (h) positional features of the object, or (i) detection of damage to the object.
  19. The method of any one of claims 13 to 16, further comprising defining a minimum spacing between objects in the plurality of 3D images, and treating multiple objects below the minimum spacing as a single object with missing 3D image data.
  20. A vision system having a 3D camera assembly arranged with an area scan sensor, wherein a vision system processor receives 3D data from images of an object acquired within the field of view (FOV) of the 3D camera assembly, the object is transported through the FOV in a transport direction, and the object defines a total length in the transport direction that is longer than the FOV, the vision system comprising: a dimension measurement processor that measures the total length based on motion tracking information derived from the movement of the object through the FOV, combined with a plurality of 3D images of the object acquired sequentially by the 3D camera assembly with a predetermined amount of transport motion between the plurality of 3D images; and a presence detector associated with the FOV that provides a presence signal as the object is located adjacent to the FOV; wherein, in response to the presence signal, the dimension measurement processor is arranged to determine whether the object appears in more than one image as the object moves in the transport direction; wherein, in response to identified feature information of the object, the dimension measurement processor is arranged to determine whether the object is longer than the FOV as the object moves in the transport direction; wherein the presence signal is used by the dimension measurement processor to determine the continuity of the object between each of the plurality of 3D images as the object moves in the transport direction; wherein the plurality of 3D images are acquired by the 3D camera assembly with a predetermined overlap between them, and the system further comprises a removal process that uses the motion tracking information to remove the overlap sections from the object dimensions to determine the total length, and an image removal process that removes the last of the plurality of 3D images when a previous one of the plurality of 3D images includes the rear edge of the object and the presence signal is asserted; and wherein the vision system processor combines the identified feature information of the object from successive image acquisitions by the 3D camera assembly to generate collective feature data, so as to determine the total dimensions of the object in a manner that does not combine separate individual images into a whole image.
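As a sketch of the overlap-removal arithmetic referenced in claims 1, 8, and 13: if the conveyor advance between snapshots is known from the encoder, the total length can be recovered from the edge positions in the first and last snapshots without stitching the images together. The function and parameter names below are hypothetical, and the sign convention (ROI axis increasing downstream) is an assumption.

```python
def total_length(front_edge_first, rear_edge_last, n_snapshots, advance_mm):
    """Total object length along the transport direction (hypothetical sketch).

    `front_edge_first` is the front-edge coordinate in the first snapshot and
    `rear_edge_last` the rear-edge coordinate in the last snapshot, both on
    the ROI axis (increasing downstream). `advance_mm` is the encoder-measured
    conveyor travel between snapshots. The (n - 1) * advance term accounts for
    the motion between the first and last snapshot, so the overlap between
    adjacent images is never double-counted.
    """
    return front_edge_first - rear_edge_last + (n_snapshots - 1) * advance_mm
```

For a single snapshot (n = 1) this reduces to the plain edge-to-edge distance; for multiple snapshots each extra image contributes exactly one encoder step, which is the sense in which the overlap sections are "removed" from the dimension.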

Description

System and method for 3D scanning of a moving object longer than the field of view

The present invention relates to machine vision systems for analyzing objects in three-dimensional space, and more specifically to a system and method for analyzing objects transported through an inspection area on a conveyor.

Machine vision systems (also referred to herein as “vision systems”) that perform measurement, inspection, alignment, and/or decoding of objects (e.g., barcodes, also called “ID codes”) are used in a wide range of applications and industries. These systems are based on image sensors that acquire images of a target or object (typically grayscale or color, and one-dimensional, two-dimensional, or three-dimensional) and process the acquired images using onboard or interconnected vision system processors. The processor typically includes both processing hardware and non-transitory computer-readable program instructions that perform one or more vision system processes to produce a desired output based on the processed image information. This image information is typically provided as an array of image pixels, each having a color and/or intensity. One or more vision system cameras may be arranged to acquire two-dimensional (2D) or three-dimensional (3D) images of objects in an imaged scene. A 2D image is typically characterized by pixels having x and y components within an overall N × M image array (often defined by the pixel array of the camera's image sensor). When an image is acquired in 3D, there is a height or z-axis component in addition to the x and y components. 3D image data can be acquired using various mechanisms/techniques, including stereoscopic camera triangulation, LiDAR, time-of-flight sensors, and (e.g.) laser displacement profiling.

Generally, 3D cameras are arranged to capture 3D image information for objects within a field of view (FOV) that constitutes a volume of space fanning outward in the horizontal x and y dimensions as a function of distance from the camera sensor along the orthogonal z dimension. A sensor that acquires an image of the entire volume at once (i.e., in a “snapshot”) is called an “area scan sensor.” Such area scan sensors are distinguished from line scan sensors (e.g., profilers), which capture 3D information piece by piece and rely on motion (e.g., conveyor movement) and measurement of that motion (e.g., via a motion encoder or stepper) to move the object through the inspection area/FOV. The advantage of line scan sensors is that the inspected object can be arbitrarily long, with the object length taken along the direction of conveyor motion. Conversely, area scan sensors, which take image snapshots of the volume, do not require an encoder to capture the 3D scene; however, if the object is longer than the field of view, the entire object cannot be captured in a single snapshot. If only a portion of the object is acquired in a single snapshot, one or more additional snapshots of the remaining length must be acquired as the not-yet-imaged rear part of the object passes into the FOV. With multiple snapshots, the challenge is to register (link together) the multiple 3D images efficiently so that the resulting 3D representation accurately reflects the features of the object.

The following description of the invention refers to the attached drawings:

Figure 1 is an overview of a system for acquiring and processing 3D images of an object longer than the field of view using an area scan sensor, which uses a dimension measurement process (processor) to generate a full image from multiple snapshots as the object moves through the field of view.

Figure 2 is a side view of the conveyor and area scan sensor arrangement of Figure 1, detailing the available FOV for 3D imaging of an object and the trigger plane for image acquisition according to an exemplary embodiment.

Figure 3 is a diagram illustrating the runtime operation of the arrangements of Figures 1 and 2, in which an exemplary object has a length in the transport direction longer than the available ROI and reaches the trigger plane, triggering an initial snapshot.

Figure 4 is a diagram illustrating the runtime operation of Figure 3, in which the exemplary object has partially moved out of the ROI and the encoder registers sufficient motion to trigger a second snapshot.

Figure 5 is a diagram illustrating the runtime operation of the arrangements of Figures 1 and 2, in which an exemplary object with a complex upper surface appears as two separate objects in the FOV, which can result in loss of 3D image data.

Figure 6 is a flowchart illustrating a procedure for operating the dimension measurement process according to an exemplary embodiment, including cases where multiple objects and/or complex objects are imaged.

Figure 7 is a drawing showing a plan view of an imaged conveyor surface having an exemplary overlapping image segment representing