
US-12626503-B2 - Food waste detection method and system

US12626503B2

Abstract

A system (1) for detecting food related products (2) before they are thrown away, the system comprising: one or more cameras (11); a display unit (12); a computing device (13) that is communicatively connected to the cameras and the display unit; and a scale (3) that is communicatively connected to the computing device, the scale holding a trash bin (31). The cameras obtain an image or a video of the products when the products are within a field of view of the cameras and before the products are in the trash bin. The scale is configured to weigh the products in the trash bin. The computing device obtains information about the products from the obtained image or video by applying an image recognition algorithm, receives the weight from the scale, and generates and outputs data on the display unit, the data being based on the information about the products and the weight.

Inventors

  • Bart VAN ARNHEM
  • Olaf Egbert VAN DER VEEN

Assignees

  • WASTIQ B.V.

Dates

Publication Date
2026-05-12
Application Date
2019-12-13
Priority Date
2018-12-14

Claims (17)

  1. A system for detecting food related products before being thrown away, the system comprising: one or more cameras; a display unit; a computing device that is communicatively connected to the one or more cameras and the display unit; a scale that is communicatively connected to the computing device, wherein the scale is configured to hold a trash bin, wherein the one or more cameras are configured to obtain an image or a video of the food related products when the food related products are within a field of view of the one or more cameras and before the food related products are in the trash bin, wherein the scale is configured to obtain weight information of the food related products when the food related products are in the trash bin, and wherein the computing device is configured to: receive the weight information from the scale and to obtain therefrom a weight of the food related products; obtain information about the food related products from the obtained image or video by applying an image recognition algorithm configured to detect multiple ingredients in the image or the video and configured to assign to individual pixels of the image or video a respective ingredient label corresponding to one of the ingredients detected in the image or video, and by computing a ratio between the multiple ingredients detected in the image or video based on the ingredient labels assigned to the individual pixels; assign weight estimates to the respective ingredients detected in the image or video by registering the weight across the multiple ingredients detected in the image or video pursuant to said ratio; and generate and output data on the display unit, wherein the data is based on the information about the food related products, on the weight estimates, and on the weight information.
  2. The system according to claim 1, wherein the computing device is communicatively connected to a remote server, and wherein the computing device is configured to: transmit the obtained image or video to the remote server for applying the image recognition algorithm; and receive the information about the food related products from the remote server.
  3. The system according to claim 2, wherein the computing device is further configured to store one or more of the information about the food related products, the weight information, the output data, and a time stamp in a data storage of the remote server.
  4. The system according to claim 1, wherein the computing device is configured to present one or more questions on the display unit about one or more objects in the obtained image or video in case the image recognition algorithm is unable to identify one or more of the food related products from the image or the video, and wherein the display unit comprises a user interface for receiving user input in response to the one or more questions, the response for use by the image recognition algorithm to improve detection of the one or more objects.
  5. The system according to claim 1, wherein the one or more cameras are configured to automatically obtain the image or the video when the food related products are within the field of view at a substantially fixed position for a dynamic minimal amount of time necessary for successful detection.
  6. The system according to claim 1, wherein the one or more cameras comprises a stereoscopic imaging camera for obtaining 3D volumetric information about the food related products from the image or the video, and wherein the computing device is further configured to improve the weight estimates assigned to the respective ingredients detected in the image or video based on the 3D volumetric information obtained by the stereoscopic imaging camera.
  7. The system according to claim 6, wherein the image recognition algorithm is configured to obtain volumetric information from the 3D volumetric information, wherein the computing device is configured to obtain a weight estimation of the food related products based on the volumetric information, wherein the stereoscopic camera replaces the scale, and wherein the weight estimation is used instead of the weight information.
  8. The system according to claim 1, wherein the one or more cameras comprises a hyperspectral imaging camera for obtaining substance information about the food related products from the image or the video.
  9. The system according to claim 1, wherein the field of view is located in an area around a line of sight from the one or more cameras in a substantially downwards direction.
  10. The system according to claim 9, further comprising a housing, wherein the housing comprises the display unit, wherein the housing accommodates the one or more cameras, wherein the housing comprises an outer surface side that is placed at an angle from a horizontal plane, and wherein the cameras are located within the housing at the outer surface side resulting in the line of sight being vertically angled at the angle, the line of sight being perpendicular to the outer surface side, wherein the angle is in a range of 15 to 45 degrees.
  11. The system according to claim 10, wherein the housing further comprises the computing device.
  12. The system according to claim 10, wherein the housing further comprises a visual indicator configured to indicate to the user a height at which the food related products are to be held with respect to the one or more cameras for obtaining the image or video before the user disposes the food related product in the trash bin.
  13. The system according to claim 10, wherein the housing and the scale are connected by a vertically aligned support structure for fixing the housing at a vertical distance from the scale.
  14. A housing comprising a display unit, the housing further comprising one or more cameras and a computing device, wherein the housing is configured for use in the system according to claim 10.
  15. A method for detecting food related products before being thrown away, the method comprising: obtaining an image or a video of the food related products using one or more cameras when the food related products are within a field of view of the one or more cameras and before the food related products are thrown in a trash bin; obtaining weight information of the food related products using a scale when the food related products are in the trash bin, wherein the scale is configured to hold the trash bin; receiving, at a computing device, the weight information from the scale and obtaining therefrom a weight of the food related products; obtaining, by the computing device, information about the food related products from the obtained image or video by applying an image recognition algorithm configured to detect multiple ingredients in the image or the video and configured to assign to individual pixels of the image or video a respective ingredient label corresponding to one of the ingredients detected in the image or video, and by computing a ratio between the multiple ingredients detected in the image or video based on the ingredient labels assigned to the individual pixels; assigning, by the computing device, weight estimates to the respective ingredients detected in the image or video by registering the weight across the multiple ingredients detected in the image or video pursuant to said ratio; generating and outputting data by the computing device on the display unit, wherein the data is based on the information about the food related products, on the weight estimates, and on the weight information.
  16. The method according to claim 15, further comprising: transmitting the obtained image or video from the computing device to a remote server for applying the image recognition algorithm; and receiving the information about the food related products from the remote server in the computing device.
  17. The method according to claim 15, further comprising: presenting one or more questions on the display unit about one or more objects in the obtained image or video in case the image recognition algorithm is unable to identify one or more of the food related products from the image or the video; and receiving user input from a user interface of the display unit in response to the one or more questions, the response for use by the image recognition algorithm to improve detection of the one or more objects.
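Claims 1 and 15 describe the same core computation: per-pixel ingredient labels from a segmentation step yield a ratio between ingredients, and the scale's measured weight is registered across the ingredients pursuant to that ratio. The following is a minimal sketch of that apportioning step only; the segmentation mask, the `apportion_weight` helper, and the toy labels are illustrative assumptions, not taken from the patent, and the mask would in practice come from an image recognition model.

```python
from collections import Counter

def apportion_weight(pixel_labels, total_weight_g, background_label=None):
    """Distribute a measured total weight across detected ingredients
    in proportion to their pixel counts in a segmentation mask."""
    # Count pixels per ingredient label, ignoring background pixels.
    counts = Counter(label
                     for row in pixel_labels
                     for label in row
                     if label != background_label)
    total_pixels = sum(counts.values())
    if total_pixels == 0:
        return {}
    # Ratio between ingredients drives the per-ingredient weight estimate.
    return {label: total_weight_g * n / total_pixels
            for label, n in counts.items()}

# Toy 2x4 "mask": 4 rice pixels, 2 carrot pixels, 2 background pixels.
mask = [["rice", "rice", "carrot", None],
        ["rice", "rice", "carrot", None]]
estimates = apportion_weight(mask, total_weight_g=300.0)
# rice gets 4/6 of 300 g, carrot gets 2/6 of 300 g
```

The background label is excluded from the ratio so that plate or bin pixels do not dilute the ingredient weights, consistent with the claims' focus on detected ingredients only.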

Description

CROSS REFERENCE TO RELATED APPLICATIONS

This application claims priority from Dutch application number 2022213, filed on 14 Dec. 2018, the contents of which are hereby incorporated by reference in their entirety. This application is a national stage entry of international application no. PCT/EP2019/085143, which was published under PCT Article 21(2) in English, the contents of which are hereby incorporated by reference in their entirety.

TECHNICAL FIELD

The present invention relates to a system and a method for detecting food related products, and to a display unit for use in the system.

BACKGROUND ART

Venues that work with food are often faced with food waste by having to throw away food that has passed its expiration date or is left over after consumption or preparation. An example of such a venue is a restaurant, where food waste may be generated by customers leaving food on their plates, in the kitchen by having leftovers after preparing dinners, or in the inventory by having food pass its expiry date. There is a need to reduce food waste. Insight into the food waste may be used by a restaurant, for example, to optimize planning, proportioning and inventory management, resulting in a more efficient purchase of food and acting in a more environmentally friendly manner. Other examples of venues that may benefit from insight into food waste are caterers, the catering industry, hospitals, healthcare institutions, and generally any venue involved in food preparation.

SUMMARY

According to an aspect of the invention, a system is proposed for detecting food related products before being thrown away. The system can comprise one or more cameras. The system can further comprise a display unit. The system can further comprise a computing device that is communicatively connected to the one or more cameras and the display unit. The system can further comprise a scale that is communicatively connected to the computing device. The scale can be configured to hold a trash bin.
The scale can be separable from the trash bin, e.g. by simply placing any trash bin on the scale. The scale can be integrated in the trash bin. The trash bin may be a recycle bin.

The one or more cameras can be configured to obtain an image or a video of the food related products when the food related products are within a field of view of the one or more cameras and before the food related products are in the trash bin. Advantageously, this enables food left-overs to be detected before being intermixed with other food waste in the trash bin. The scale can be configured to obtain weight information of the food related products when the food related products are in the trash bin.

The computing device can be configured to obtain information about the food related products from the obtained image or video by applying an image recognition algorithm. This image recognition algorithm can run locally on the computing device or remotely on a remote server to which the computing device may be communicatively connected. The computing device can be configured to receive the weight information from the scale. The computing device can be configured to generate and output data on the display unit, wherein the data is based on the information about the food related products and the weight information.

The food related products are typically food leftovers but can also include other objects that are to be thrown away such as plastics, paper, napkins, cardboard and (disposable) cutlery. The food related products may include a bin, plate or other tray item on which the disposables are placed, which may be detected together with the disposables and input to the image recognition algorithm to improve the detection of the food left-overs or other disposables.

In an embodiment the computing device can be communicatively connected to a remote server.
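The claims state that the computing device "receives the weight information from the scale and obtains therefrom a weight of the food related products", but the document does not spell out how that weight is derived when earlier waste is already in the bin. One plausible reading, sketched here purely as an assumption (the function name and noise threshold are illustrative), is a before/after delta between two scale readings:

```python
def product_weight(scale_before_g, scale_after_g, min_delta_g=1.0):
    """Derive the weight of a newly discarded product from two scale
    readings: one taken before and one after the product lands in the
    bin that sits on the scale."""
    delta = scale_after_g - scale_before_g
    # Only a positive increase above the noise floor counts as a
    # disposal; small fluctuations or bin removal are ignored.
    return delta if delta >= min_delta_g else None

# Bin plus earlier waste read 1250 g; after a disposal the scale
# reads 1430 g, so 180 g is attributed to the new product.
weight = product_weight(1250.0, 1430.0)
```

Pairing each camera capture (taken before the product enters the bin) with the next positive scale delta would then link the recognized ingredients to their measured weight.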
The computing device can be configured to transmit the obtained image or video to the remote server for applying the image recognition algorithm. The computing device can be configured to receive the information about the food related products from the remote server. The remote server can be implemented as a cloud computing server or cloud computing service.

In an embodiment the computing device can be further configured to store one or more of the information about the food related products, the weight information, the output data, and a time stamp in a data storage of the remote server. This enables food waste to be analyzed or mapped over time. This also enables recommendations to be generated regarding minimizing the food waste as detected over time.

In an embodiment the computing device can be configured to present one or more questions on the display unit about one or more objects in the obtained image or video in case the image recognition algorithm is unable to identify one or more of the food related products from the image or the video. The display unit can comprise a user interface, preferably in the form of a touch screen interface,