
CN-121986357-A - Techniques for machine vision estimation of food container fill and nutritional content

CN121986357A

Abstract

Techniques are disclosed for estimating a fill amount of a food container of a subject. An image may capture at least a portion of the food container and any food in the food content space of the food container. The image may be segmented to provide a first food parameter and a first aspect parameter corresponding to the image. The first aspect parameter may be input into a machine learning model, and a first angle reference corresponding to the image may be determined from the machine learning model based on the first aspect parameter. The first food parameter and the first angle reference may then be input into a machine learning model, and an estimated weight reference may be determined from the machine learning model based on the first food parameter and the first angle reference. The estimated weight reference may correspond to a fill amount of the food container.
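The two-stage pipeline in the abstract (segment the image, derive a food parameter and an aspect parameter, infer a camera angle from the aspect parameter, then infer a fill weight from the food parameter and the angle) can be sketched as follows. This is an illustrative toy implementation, not the patented one: the segmentation thresholds, the angle mapping, and the `full_bowl_grams` capacity are all assumed values, and the simple models stand in for the trained machine learning models the patent describes.

```python
import numpy as np

def segment(image):
    """Toy segmentation: pixels > 0.5 are 'food', pixels in (0, 0.5] are 'container'."""
    food_mask = image > 0.5
    container_mask = (image > 0.0) & ~food_mask
    food_px = int(food_mask.sum())
    container_px = int(container_mask.sum())
    # First food parameter: fraction of bowl-region pixels occupied by food (claim 6).
    food_param = food_px / (food_px + container_px)
    # First aspect parameter: width/height of a bounding box of the bowl region
    # (claims 7-8); approximated here with an axis-aligned box.
    ys, xs = np.nonzero(image > 0.0)
    aspect_param = (xs.max() - xs.min() + 1) / (ys.max() - ys.min() + 1)
    return food_param, aspect_param

def estimate_angle(aspect_param):
    """Stage 1 stand-in: map the aspect ratio to a camera angle in degrees.
    A round bowl viewed top-down (aspect ~1) gives ~90 degrees; flatter,
    more elongated silhouettes imply a more oblique view."""
    return float(np.clip(90.0 / max(aspect_param, 1e-6), 10.0, 90.0))

def estimate_weight(food_param, angle_deg, full_bowl_grams=300.0):
    """Stage 2 stand-in: scale fill fraction by an assumed bowl capacity,
    using the viewing angle as a perspective correction."""
    correction = np.sin(np.radians(angle_deg))  # oblique views see less food area
    return full_bowl_grams * food_param / max(correction, 1e-6)

# Usage on a synthetic 10x10 image: a 6x6 bowl region with a 4x4 food region.
image = np.zeros((10, 10))
image[2:8, 2:8] = 0.3   # container
image[3:7, 3:7] = 0.8   # food
food_param, aspect_param = segment(image)
angle = estimate_angle(aspect_param)
weight = estimate_weight(food_param, angle)
```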

Inventors

  • Susan Venemont
  • Jordi
  • Thompson, Robin A.
  • Stephen G. Matthews

Assignees

  • Hill's Pet Nutrition, Inc. (希尔氏宠物营养品公司)

Dates

Publication Date
2026-05-05
Application Date
2024-10-09
Priority Date
2023-10-10

Claims (20)

  1. A computer-implemented method of estimating a fill amount of a food container of a subject, the method comprising: (a) receiving at least a first image for a first food container, the at least first image capturing at least a portion of the food container and any food in a food content space of the food container; (b) performing a segmentation on the first image, the segmentation providing a first food parameter and a first aspect parameter corresponding to the first image; (c) inputting the first aspect parameter into a machine learning model; (d) determining a first angle reference corresponding to the first image from the machine learning model based at least in part on the first aspect parameter; (e) inputting the first food parameter and the first angle reference into the machine learning model; and (f) determining at least one of an estimated weight reference or an estimated volume reference corresponding to the fill amount of the food container from the machine learning model based at least in part on the first food parameter and the first angle reference, wherein steps (a)-(f) are performed at least in part by one or more processors.
  2. The method of claim 1, further comprising: (g) determining, based at least in part on at least one of the estimated weight reference or the estimated volume reference, at least one of: a minimum estimated weight reference corresponding to at least a substantially full food container, a time period corresponding to when the food container is at least substantially full, a time period corresponding to when the food container is substantially empty, or a time rate of subject feeding.
  3. The method of claim 1 or claim 2, wherein the segmentation comprises at least food container segmentation and food segmentation.
  4. The method of any of the preceding claims, wherein the first image is provided by at least one of a digital camera, a depth camera, an infrared camera, or a machine vision camera.
  5. The method of any of the preceding claims, wherein the segmentation results in a plurality of first pixels corresponding to the food container corresponding to the first image and a plurality of second pixels corresponding to food in a food content space of the food container corresponding to the first image.
  6. The method of claim 5, wherein the first food parameter is a ratio of the number of second pixels to a sum of the number of first pixels and the number of second pixels.
  7. The method of any of the preceding claims, wherein the segmentation produces a geometric description comprising a rotated bounding box width and a rotated bounding box height corresponding to the first image.
  8. The method of claim 7, wherein the first aspect parameter is a ratio of the rotated bounding box width to the rotated bounding box height.
  9. The method of any of the preceding claims, wherein the segmenting uses one or more morphological geodesic active contours (MGAC).
  10. The method of any of the preceding claims, further comprising: determining a nutritional content of the food corresponding to the estimated weight reference and/or the estimated volume reference based at least in part on the estimated weight reference and/or the estimated volume reference.
  11. The method of any of the preceding claims, wherein the subject is at least one of a canine or a feline, and wherein the food container is a pet food bowl.
  12. A computer-implemented method of capturing machine learning training data to estimate a fill amount of a food container of a subject, the method comprising: (a) inputting a first type of food container into at least one processing device; (b) inputting one or more colors of the food container into the at least one processing device; (c) inputting a first type of floor onto which the food container is disposed into the at least one processing device; (d) inputting a first type of food into the at least one processing device; (e) inputting a first predetermined fill amount of the food disposed in a food content space of the food container into the at least one processing device; (f) placing a camera device in a first position relative to the food container in a vertical plane transverse to at least a portion of the food container, the camera device being disposed in the first position at a first angle relative to the food container; (g) capturing a first image of the food container from the first position; (h) repeating elements (f) through (g) to place the camera device in at least one of a second position, a third position, a fourth position, and a fifth position, the camera device arranged to capture at least one of a second image at a second angle, a third image at a third angle, a fourth image at a fourth angle, and a fifth image at a fifth angle; (i) repeating elements (e) through (h) for at least a second predetermined fill amount and a third predetermined fill amount; (j) storing the first image, the second image, the third image, the fourth image, and the fifth image in a machine learning training database; and (k) training at least one machine learning model using at least the first image, the second image, the third image, the fourth image, and the fifth image, wherein at least elements (a) through (e) and (g) through (k) are performed at least in part by one or more processors of the at least one processing device.
  13. The method of claim 12, wherein the first type of food container comprises at least one of a shape of the food container, a size of the food container, or a material of the food container.
  14. The method of claim 12 or 13, wherein the first type of floor comprises at least one of a bare floor, a tile floor, or a carpet floor.
  15. The method of any one of claims 12 to 14, wherein the first type of food comprises at least one of a chunk food, a granule food, a mash food, a wet food, or a dry food.
  16. The method of any of claims 12 to 15, wherein the camera device is at least one of a digital camera or a machine vision camera.
  17. The method of any one of claims 12 to 16, wherein, at element (i), elements (e) to (h) are repeated for at least the second, third, fourth, and fifth predetermined fill amounts.
  18. The method of any of claims 12 to 17, further comprising storing the first image, the second image, the third image, the fourth image, and the fifth image with an indication of the first type of food container, the first type of floor, and the first type of food.
  19. The method of any of claims 12 to 18, wherein the first angle is defined by an angular displacement of the first position relative to a horizontal plane intersecting at least a portion of the food container, the second angle is defined by an angular displacement of the second position relative to the horizontal plane, the third angle is defined by an angular displacement of the third position relative to the horizontal plane, the fourth angle is defined by an angular displacement of the fourth position relative to the horizontal plane, and the fifth angle is defined by an angular displacement of the fifth position relative to the horizontal plane.
  20. The method of claim 19, wherein the first angle is about 90 degrees, the second angle is about 65 degrees, the third angle is about 45 degrees, the fourth angle is about 25 degrees, and the fifth angle is about 10 degrees.
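Claims 7 and 8 define the first aspect parameter as the ratio of a rotated bounding box's width to its height. One way to approximate a rotated bounding box without a computer vision library is PCA over the mask's pixel coordinates: project the coordinates onto the principal axes and measure the extents. This sketch is an assumption for illustration, not the patent's actual method:

```python
import numpy as np

def rotated_bbox_aspect(mask):
    """Approximate the rotated-bounding-box aspect ratio (claim 8) of a binary
    mask via PCA: project pixel coordinates onto the principal axes and take
    the ratio of the larger extent to the smaller one."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)
    pts -= pts.mean(axis=0)
    # Principal axes come from the 2x2 covariance matrix of the coordinates.
    _, vecs = np.linalg.eigh(np.cov(pts.T))
    proj = pts @ vecs
    extents = proj.max(axis=0) - proj.min(axis=0)
    width, height = sorted(extents, reverse=True)
    return width / max(height, 1e-9)

# Usage: a 20-pixel-wide, 10-pixel-tall rectangle has aspect ~2.
mask = np.zeros((30, 30))
mask[10:20, 5:25] = 1
aspect = rotated_bbox_aspect(mask)
```

Because the extents are measured along the principal axes rather than the image axes, the result is unchanged if the bowl silhouette is rotated in the image plane, which is the point of using a rotated box.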
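Claims 12 through 20 describe sweeping container type, floor type, food type, predetermined fill amount, and camera angle when capturing training images. The full sweep can be sketched as a Cartesian product. Only the five angles (about 90, 65, 45, 25, and 10 degrees) come from claim 20; the container, floor, food, and fill values below are illustrative assumptions:

```python
from itertools import product

ANGLES_DEG = [90, 65, 45, 25, 10]          # claim 20
FILL_FRACTIONS = [0.25, 0.50, 0.75, 1.00]  # assumed predetermined fill amounts
CONTAINER_TYPES = ["ceramic_round", "steel_round"]  # assumed (claim 13 attributes)
FLOOR_TYPES = ["bare", "tile", "carpet"]   # claim 14
FOOD_TYPES = ["chunk", "granule", "mash"]  # subset of claim 15

def capture_plan():
    """Yield one record per image to capture and label (elements (e)-(j))."""
    for container, floor, food, fill, angle in product(
            CONTAINER_TYPES, FLOOR_TYPES, FOOD_TYPES, FILL_FRACTIONS, ANGLES_DEG):
        yield {"container": container, "floor": floor, "food": food,
               "fill": fill, "angle_deg": angle}

# 2 containers x 3 floors x 3 foods x 4 fills x 5 angles = 360 images
plan = list(capture_plan())
```

Each record would be stored alongside its captured image, matching claim 18's requirement that images be stored with an indication of the container, floor, and food types.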

Description

Techniques for machine vision estimation of food container fill and nutritional content

Cross Reference to Related Applications

The present application claims the benefit of U.S. provisional patent application No. 63/589,184, filed October 10, 2023, the entire disclosure of which is incorporated herein by reference for all purposes.

Background

A subject (such as an animal) may draw nutrients (such as food and/or water) from various dispensers and/or food containers (such as pet food and water bowls). Pet owners may fill such pet bowls to various levels and/or at various times based on the specific needs and/or habits of the pet. The pet may consume food from the pet food bowl because the pet needs nutrition and/or because the pet owner meters the pet's food, such as for a pet that is on a certain type of diet. For various reasons, pets may be fed and/or may consume the same food or different kinds of food. For example, for health reasons, a pet may be fed specially tailored foods. Or pet owners may believe that, over time, their pets may prefer one food over another. For these reasons, as well as others, pets may be fed wet and/or dry foods, and the pet may be fed food of the chunk, pellet, and/or mashed type.

Disclosure of Invention

One or more computer-implemented techniques/methods (and devices and/or systems implementing these techniques/methods) are disclosed for estimating a fill amount of a food container of a subject. The techniques may include receiving at least a first image for a first food container, the at least first image capturing at least a portion of the food container and any food in a food content space of the food container. Segmentation may be performed on the first image. The segmentation may provide a first food parameter and a first aspect parameter corresponding to the first image. The techniques may include inputting the first aspect parameter into a machine learning model (e.g., and/or algorithm, equation, and/or model).
The techniques may include determining a first angle reference corresponding to the first image from a machine learning model based at least in part on the first aspect parameter. The first food parameter and the first angle reference may be input into a machine learning model. The techniques may include determining, from a machine learning model, an estimated weight reference and/or an estimated volume reference corresponding to a fill amount of a food container based at least in part on the first food parameter and the first angle reference.

In one or more contexts, the techniques may include determining, based at least in part on the estimated weight reference and/or the estimated volume reference, a minimum estimated weight reference and/or a minimum estimated volume reference corresponding to at least a substantially full food container, a time period corresponding to when the food container is at least substantially full, a time period corresponding to when the food container is substantially empty, a time rate of subject feeding, and/or a frequency and/or timing of subject feeding (e.g., different from the time rate). For example, the subject may eat more in the morning and/or at a particular time (such as 8 am, 10 am, etc.), and may eat less in the afternoon and/or at a particular time (such as 2 pm, 5 pm, etc.).

In one or more contexts, the segmentation may include at least food container segmentation and food segmentation. In one or more contexts, the first image may be provided by a digital camera and/or a machine vision camera. In one or more scenarios, the segmentation may generate a plurality of first pixels corresponding to the food container corresponding to the first image and/or a plurality of second pixels corresponding to food in the food content space of the food container corresponding to the first image.
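The feeding metrics described above (a time rate of subject feeding, a time when the container becomes substantially empty) could be derived from a time series of estimated weight references. A minimal sketch follows; the 5% "substantially empty" threshold and the bowl capacity are assumed values, not taken from the patent:

```python
import numpy as np

def feeding_metrics(times_h, weights_g, full_g=300.0, empty_frac=0.05):
    """Derive feeding metrics from timestamped estimated weight references.

    times_h   -- sample times in hours
    weights_g -- estimated weight references in grams at those times
    """
    t = np.asarray(times_h, dtype=float)
    w = np.asarray(weights_g, dtype=float)
    # Grams removed between consecutive samples (refills would show as increases,
    # which we clamp to zero so they do not count as negative feeding).
    eaten = np.maximum(0.0, -np.diff(w))
    rate = eaten.sum() / (t[-1] - t[0])  # time rate of subject feeding, g/h
    # First time the bowl drops to the assumed "substantially empty" threshold.
    below = w <= empty_frac * full_g
    empty_at = float(t[np.argmax(below)]) if np.any(below) else None
    return {"rate_g_per_h": rate, "empty_at_h": empty_at}

# Usage: a bowl observed hourly from 8 am, emptied by 11 am.
metrics = feeding_metrics([8, 9, 10, 11, 12], [300, 200, 100, 10, 10])
```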
In one or more contexts, the first food parameter may be a ratio of the number of second pixels to a sum of the number of first pixels and the number of second pixels. In one or more scenarios, the segmentation produces a rotated bounding box width and a rotated bounding box height corresponding to the first image. The first aspect parameter may be a ratio of the rotated bounding box width to the rotated bounding box height.

In one or more scenarios, the segmentation may use one or more morphological geodesic active contours (MGAC), among other algorithms and/or techniques. For example, image processing (e.g., active contours), machine learning, and/or deep learning algorithms/models (e.g., Convolutional Neural Networks (CNNs), etc.) may be used.

In one or more contexts, the techniques may include determining a nutritional content of food corresponding to the estimated weight reference and/or the estimated volume reference based at least in part on the estimated weight reference and/or the estimated volume reference. In one or more contexts, the subject may be a canine and/or a feline. In one or more contexts, the food conta