US-12617574-B2 - Container classification system and/or method

US 12617574 B2

Abstract

The method can include: determining image data; identifying a container using an object detector; and classifying a container state. The method can optionally include performing an action based on the container state classification. However, the method can additionally or alternatively include any other suitable elements. The method functions to detect and classify the state of bowls (i.e., complete/incomplete) to facilitate ingredient insertion into (unserved, ‘incomplete’) bowls along an assembly line.

Inventors

  • John Unkovic
  • Rajat Bhageria
  • Norberto A. Goussies
  • Clement Creusot
  • Somudro Gupta
  • Tom Achache
  • Luis Rayas
  • Vinny Senthil

Assignees

  • Chef Robotics, Inc.

Dates

Publication Date
5 May 2026
Application Date
18 Feb. 2025

Claims (20)

  1. A method for pairwise control of a first foodstuff assembly robot and a second foodstuff assembly robot along a conveyor line, the method comprising: based on a first image of a first robot workspace of the first foodstuff assembly robot, determining a plurality of container tracks for a plurality of containers within the first robot workspace; based on the plurality of container tracks, controlling the first foodstuff assembly robot to perform a first foodstuff insertion operation into a first subset of the plurality of containers; subsequently, based on a second image of a second robot workspace of the second foodstuff assembly robot, identifying a remainder of the plurality of containers within the second robot workspace, wherein the first robot workspace and the second robot workspace are non-overlapping, wherein the remainder of the plurality of containers are identified based on a relative spacing between the plurality of containers along the conveyor, wherein the relative spacing comprises an arrangement of the plurality of containers; and controlling the second foodstuff assembly robot to perform a second foodstuff insertion operation into the remainder of the plurality of containers.
  2. The method of claim 1, wherein the remainder of the plurality of containers are identified based on the plurality of container tracks.
  3. The method of claim 2, wherein the remainder of the plurality of containers are identified by a computing system of the second foodstuff assembly robot, wherein the method further comprises: prior to capturing the second image, receiving the plurality of tracks at the computing system of the second foodstuff assembly robot.
  4. The method of claim 3, wherein the first and second foodstuff assembly robots are communicatively coupled via a wireless mesh network.
  5. The method of claim 1, further comprising, at the second foodstuff assembly robot: classifying each container of the plurality of containers based on the foodstuff insertion performed by the first foodstuff assembly robot, wherein the remainder of the plurality of containers is identified based on the respective classification of each container; and tracking the remainder of the plurality of containers within the second robot workspace.
  6. The method of claim 5, wherein each container is classified using an extrinsic reference.
  7. The method of claim 6, wherein the extrinsic reference is a set of indicators along the conveyor line.
  8. The method of claim 5, wherein each container is classified using an intrinsic reference associated with the container, wherein the intrinsic reference is independent of foodstuff within the container.
  9. The method of claim 5, wherein each container is classified using an image comparison.
  10. The method of claim 5, further comprising: at the first foodstuff assembly robot, estimating a speed of the conveyor line, wherein the first subset of the plurality of containers are selected according to a first predetermined insertion strategy based on the speed of the conveyor line.
  11. The method of claim 10, wherein the second foodstuff assembly robot is controlled to perform foodstuff insertion into the remainder of the plurality of containers according to a second predetermined insertion strategy, wherein the second predetermined insertion strategy is different from the first predetermined insertion strategy.
  12. The method of claim 1, wherein the first subset of the plurality of containers are selected according to a first predetermined insertion strategy, wherein a speed of the conveyor line is controlled based on the first predetermined insertion strategy.
  13. The method of claim 1, further comprising: at the second foodstuff assembly robot, calibrating an offset distance between first and second robot workspaces based on a relative spacing between the plurality of containers.
  14. The method of claim 13, wherein the offset distance is determined by Simultaneous Localization and Mapping (SLAM) of container motion along the conveyor relative to the second robot workspace.
  15. The method of claim 13, wherein the offset distance is calibrated responsive to a manual pairing of the first and second foodstuff assembly robots.
  16. The method of claim 13, wherein the offset distance is less than twice the dimension of the first robot workspace along the length of the conveyor line.
  17. The method of claim 1, further comprising: dynamically controlling the conveyor line based on the foodstuff insertion performed at the first foodstuff assembly robot.
  18. The method of claim 1, wherein the first foodstuff assembly robot is controlled in a pairwise arrangement based on a third foodstuff assembly robot.
  19. The method of claim 1, wherein identifying a remainder of the plurality of containers within the second robot workspace comprises dynamically identifying the remainder of the plurality of containers based on the second image.
  20. The method of claim 1, wherein the first foodstuff insertion operation and second foodstuff insertion operation both insert identical foodstuff ingredients.
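
The pairwise handoff of claim 1 can be sketched in code. The following is a minimal, hypothetical simulation (not the patented implementation): the first robot fills a subset of tracked containers, and the second robot identifies the unfilled remainder downstream. For simplicity, this sketch matches containers by shared track IDs and a fixed workspace offset; the claim itself describes identifying the remainder by the relative spacing and arrangement of containers along the conveyor. All names (`Container`, `first_robot_pass`, `identify_remainder`) are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Container:
    track_id: int
    position: float      # position along the conveyor (meters)
    filled: bool = False # whether a foodstuff insertion has occurred

def first_robot_pass(containers, capacity):
    """Robot 1 inserts into as many containers as its cycle window allows,
    then reports which track IDs it served."""
    for c in containers[:capacity]:
        c.filled = True
    return [c.track_id for c in containers if c.filled]

def identify_remainder(containers, filled_ids, offset):
    """Robot 2 re-identifies the unfilled containers in its own workspace,
    shifted downstream by the calibrated inter-workspace offset."""
    return [Container(c.track_id, c.position + offset, c.filled)
            for c in containers if c.track_id not in filled_ids]

# Simulated pass: five containers spaced 0.15 m apart; robot 1 serves three.
line = [Container(i, 0.15 * i) for i in range(5)]
served = first_robot_pass(line, capacity=3)
remainder = identify_remainder(line, served, offset=1.2)
print(served)                            # -> [0, 1, 2]
print([c.track_id for c in remainder])   # -> [3, 4]
```

In a deployed system the two robots would exchange the track list over a network (per claim 3, the second robot receives the tracks before capturing its own image), rather than sharing in-memory objects as here.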

Description

CROSS REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 63/639,454, filed 26 Apr. 2024, and U.S. Provisional Application No. 63/554,034, filed 15 Feb. 2024, each of which is incorporated herein in its entirety by this reference. This application is related to U.S. application Ser. No. 18/075,961, filed 6 Dec. 2022, which is incorporated herein in its entirety by this reference.

TECHNICAL FIELD

This invention relates generally to the robotic automation field, and more specifically to a new and useful classification system and/or method in the robotic automation field.

BRIEF DESCRIPTION OF THE FIGURES

FIG. 1 is a schematic representation of a variant of the system. FIG. 2 is a flowchart diagram representation of a variant of the method. FIG. 3 is a schematic representation of a variant of the system. FIG. 4 is a schematic representation of a variant of the system. FIG. 5 is a flowchart diagram representation of a variant of the method and/or system. FIG. 6 is an example illustration of container classification in a variant of the method. FIGS. 7A-7B are schematic representations of a first and a second variant of the system, respectively. FIG. 8 is a flowchart diagram representation of a variant of the method. FIG. 9 is an illustrative example of imaging data transformation in a variant of the method. FIGS. 10A-10C are examples of clusters in a (reduced order) feature space of image embeddings generated by a similarity model in one or more variants of the method. FIG. 11 is an example schematic illustration of a striped belt in a variant of the system and/or method. FIG. 12 is a flowchart diagram representation of a variant of the method. FIG. 13 is a schematic representation of a variant of the system and/or method.
DESCRIPTION OF THE PREFERRED EMBODIMENTS

The following description of the preferred embodiments of the invention is not intended to limit the invention to these preferred embodiments, but rather to enable any person skilled in the art to make and use this invention.

1. Overview

The method S100, an example of which is shown in FIG. 2, can include: determining image data S110; identifying a container using an object detector S120; and classifying a container state S130. The method can optionally include performing an action based on the container state classification S140. However, the method S100 can additionally or alternatively include any other suitable elements. The method functions to detect and classify the state of bowls (i.e., complete/incomplete) to facilitate ingredient insertion into (unserved, ‘incomplete’) bowls along an assembly line. Examples of the method S100 are shown in FIGS. 12 and 13.

Additionally, container classification can include or be used to facilitate dynamic insertion scheduling and/or real-time cooperation of multiple, independent robots operating along an assembly line, such as in conjunction with the system(s) and/or method elements as described in U.S. application Ser. No. 18/075,961, filed 6 Dec. 2022, which is incorporated herein in its entirety by this reference. Variants of the method can additionally include and/or operate in conjunction with any of the classification and/or labeling method elements as described in U.S. application Ser. No. 18/379,127, filed 11 Oct. 2023, which is incorporated herein in its entirety by this reference. Variants can be used in conjunction with any of the method elements and/or processes as described in U.S. application Ser. No. 18/499,092, filed 31 Oct. 2023, which is incorporated herein in its entirety by this reference.
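
The S110-S140 loop described above can be illustrated with a hypothetical sketch. The patent does not specify particular model architectures, so `detect_containers` and `classify_state` below are stand-in stubs for trained detection and classification models; all function names and return shapes are illustrative assumptions.

```python
def detect_containers(image):
    """Object detector stub (S120): returns bounding boxes for containers.
    A real system would run a trained detector on the image here."""
    return [{"bbox": (10, 20, 50, 60)}, {"bbox": (80, 20, 120, 60)}]

def classify_state(image, detection):
    """State classifier stub (S130): 'incomplete' vs. 'complete'.
    A real classifier might compare the bowl region against reference
    imagery, or use an intrinsic/extrinsic reference (cf. claims 6-9).
    Here the state is faked from the bounding box for demonstration."""
    return "incomplete" if detection["bbox"][0] < 50 else "complete"

def s100_step(image):
    """One iteration of the method: S110 (image in) -> S120 -> S130 -> S140."""
    actions = []
    for det in detect_containers(image):    # S120: identify containers
        state = classify_state(image, det)  # S130: classify container state
        if state == "incomplete":           # S140: act on the classification
            actions.append(("insert_ingredient", det["bbox"]))
    return actions

print(s100_step(image=None))  # -> [('insert_ingredient', (10, 20, 50, 60))]
```

Only containers classified as ‘incomplete’ generate insertion actions, which matches the stated goal of inserting ingredients into unserved bowls while skipping already-complete ones.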
The term “bowl” as utilized herein can additionally or alternatively refer to containers (e.g., food containers), trays (e.g., food trays, microwave trays, etc.), base food items (e.g., tortilla, bread, dough, etc.; such as for burritos, wraps, sandwiches, pizzas, etc.), fruit cups, party trays, bins, and/or any other suitable bowls or other object(s), such as objects in a (conveyor) line assembly context. For instance, the terms “bowl detector” (and/or “bowl detection model”) can likewise reference a container detector, container detection model, and/or any other suitable object detection model(s). Similarly, the term “bowl classifier” (and/or “bowl classification model”) can likewise reference a container classifier, container classification model, and/or any other suitable object classification model(s). However, the term “bowl” can be otherwise suitably referenced herein. Additionally, it is understood that, in some variants, the bowl detection/classification approaches herein may be generalized to any other suitable object detection problems and/or assembly contexts. For instance, variants can additionally or alternatively be used for detection and classification of self-contained foods (i.e., food-based containers/bowls), such as bread-bowls, wraps, burritos, pizzas, and/or any other suitable self-contained food assemblies, and/or in an