US-20260127543-A1 - DATA REDUCTION IN A BAR CODE READING ROBOT SHELF MONITORING SYSTEM

US20260127543A1

Abstract

A method for inventory monitoring and an autonomous robot for inventory monitoring are provided. The method includes detecting one or more shelf labels from one or more images, where the one or more images are captured by one or more cameras on an autonomous robot. The method further includes obtaining one or more bounding boxes based on the one or more shelf labels detected from the one or more images, where each bounding box encloses one or more facings of a same product, and associating the one or more bounding boxes with the one or more shelf labels detected from the one or more images.
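Claims 5 and 6 below describe obtaining the bounding boxes via image template matching. As an illustrative sketch only (not the patented implementation), a brute-force normalized cross-correlation search over a grayscale image might look like the following; `match_template` and its threshold are hypothetical names, and a production system would more likely use an optimized routine such as OpenCV's `matchTemplate`:

```python
import numpy as np

def match_template(image, template, threshold=0.9):
    """Slide `template` over `image` and return bounding boxes
    (y0, x0, y1, x1) wherever normalized correlation >= threshold.
    Sketch only: O(H*W*h*w), grayscale float arrays assumed."""
    th, tw = template.shape
    tnorm = template - template.mean()   # zero-mean template
    tstd = tnorm.std()
    boxes = []
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            win = image[y:y + th, x:x + tw]
            wnorm = win - win.mean()
            denom = tstd * wnorm.std() * th * tw
            if denom == 0:               # flat window: no texture to match
                continue
            score = (tnorm * wnorm).sum() / denom
            if score >= threshold:
                boxes.append((y, x, y + th, x + tw))
    return boxes
```

Each returned box would then enclose one matched facing, mirroring the claim's "matched section of the one or more images."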

Inventors

  • Sarjoun Skaff
  • Stephen Vincent Williams

Assignees

  • Shanghai Hanshi Information Technology Co., Ltd.

Dates

Publication Date
2026-05-07
Application Date
2025-12-19

Claims (20)

  1. An inventory monitoring method, comprising: detecting one or more shelf labels from one or more images, wherein the one or more images are captured by one or more cameras on an autonomous robot; obtaining one or more bounding boxes based on the one or more shelf labels detected from the one or more images, wherein each bounding box encloses one or more facings of a same product; and associating the one or more bounding boxes with the one or more shelf labels detected from the one or more images.
  2. The inventory monitoring method according to claim 1, wherein associating the one or more bounding boxes with the one or more shelf labels detected from the one or more images comprises: associating the one or more facings of the same product in each bounding box with one shelf label detected from the one or more images.
  3. The inventory monitoring method according to claim 1, wherein detecting the one or more shelf labels from the one or more images comprises: creating one or more low resolution images derived from the one or more images; detecting positions of the one or more shelf labels from the one or more low resolution images; creating one or more high resolution images that are a subset of the one or more images and have a higher resolution than the one or more low resolution images; and detecting content of the one or more shelf labels from the one or more high resolution images.
  4. The inventory monitoring method according to claim 1, wherein detecting the one or more shelf labels from the one or more images comprises: capturing, by the one or more cameras on the autonomous robot, consecutive images as the autonomous robot moves along an aisle; vertically stitching a set of vertical images captured at a same location along the aisle to obtain a vertically stitched image; horizontally stitching the vertically stitched image with a new consecutively obtained vertically stitched image; repeating the capturing, vertically stitching, and horizontally stitching steps to obtain an image panorama as the autonomous robot continues moving along the aisle; and detecting the one or more shelf labels from the image panorama.
  5. The inventory monitoring method according to claim 1, wherein obtaining the one or more bounding boxes based on the one or more shelf labels detected from the one or more images comprises: obtaining, using image template matching, the one or more bounding boxes based on the one or more shelf labels detected from the one or more images.
  6. The inventory monitoring method according to claim 5, wherein obtaining, using image template matching, the one or more bounding boxes based on the one or more shelf labels detected from the one or more images comprises: obtaining an image template based on the one or more shelf labels detected from the one or more images; comparing the one or more images with the image template; and in response to a positive match, obtaining the one or more bounding boxes based on a matched section of the one or more images.
  7. The inventory monitoring method according to claim 1, wherein associating the one or more bounding boxes with the one or more shelf labels detected from the one or more images comprises: iteratively associating the one or more shelf labels with corresponding facing groups by mapping a leftmost shelf label with a leftmost facing group and a rightmost shelf label with a rightmost facing group, and sequentially working inward until each facing group on a shelf is associated with a shelf label, wherein each facing group comprises one or more self-similar facings.
  8. The inventory monitoring method according to claim 1, further comprising: performing an inventory count of the same product based on a total number of radio frequency identification (RFID) tags associated with the same product detected by an RFID reader mounted on the autonomous robot.
  9. The inventory monitoring method according to claim 1, further comprising: comparing an image location of the one or more shelf labels with a depth map to recover a corresponding three-dimensional (3D) location of the one or more shelf labels.
  10. The inventory monitoring method according to claim 1, wherein a bounding box is defined based on one of the following: a horizontal space on a shelf occupied by the one or more facings of the same product, and a vertical space spanning a distance between a current shelf and a shelf above the current shelf or a top of the same product sensed by a depth sensor; a width and a height of the same product; or one or more training classifiers.
  11. An autonomous robot for inventory monitoring, comprising: a movable base, configured to move along a shelf lined aisle holding inventory; and one or more cameras, attached to the movable base and configured to capture one or more images from which one or more shelf labels are detected, wherein one or more bounding boxes are obtained based on the one or more shelf labels detected from the one or more images, each bounding box encloses one or more facings of a same product, and the one or more bounding boxes are associated with the one or more shelf labels detected from the one or more images.
  12. The autonomous robot for inventory monitoring according to claim 11, further comprising: a data analysis system, configured to receive content of the one or more shelf labels and compressed images of the one or more images captured by the one or more cameras.
  13. The autonomous robot for inventory monitoring according to claim 11, wherein the one or more facings in each bounding box of the same product are associated with one shelf label detected from the one or more images.
  14. The autonomous robot for inventory monitoring according to claim 11, wherein the one or more shelf labels are detected from the one or more images by: creating one or more low resolution images derived from the one or more images; detecting positions of the one or more shelf labels from the one or more low resolution images; creating one or more high resolution images that are a subset of the one or more images and have a higher resolution than the one or more low resolution images; and detecting content of the one or more shelf labels from the one or more high resolution images.
  15. The autonomous robot for inventory monitoring according to claim 11, wherein the one or more shelf labels are detected from the one or more images by: capturing, by the one or more cameras on the autonomous robot, consecutive images as the autonomous robot moves along an aisle; vertically stitching a set of vertical images captured at a same location along the aisle to obtain a vertically stitched image; horizontally stitching the vertically stitched image with a new consecutively obtained vertically stitched image; repeating the capturing, vertically stitching, and horizontally stitching steps to obtain an image panorama as the autonomous robot continues moving along the aisle; and detecting the one or more shelf labels from the image panorama.
  16. The autonomous robot for inventory monitoring according to claim 11, wherein the one or more bounding boxes are obtained, using image template matching, based on the one or more shelf labels detected from the one or more images.
  17. The autonomous robot for inventory monitoring according to claim 16, wherein an image template is obtained based on the one or more shelf labels detected from the one or more images, the one or more images are compared with the image template, and in response to a positive match, the one or more bounding boxes are obtained based on a matched section of the one or more images.
  18. The autonomous robot for inventory monitoring according to claim 11, wherein the one or more shelf labels are iteratively associated with corresponding facing groups by mapping a leftmost shelf label with a leftmost facing group and a rightmost shelf label with a rightmost facing group, and sequentially working inward until each facing group on a shelf is associated with a shelf label, wherein each facing group comprises one or more self-similar facings.
  19. The autonomous robot for inventory monitoring according to claim 11, wherein an inventory count of the same product is performed based on a total number of radio frequency identification (RFID) tags associated with the same product detected by an RFID reader mounted on the autonomous robot.
  20. The autonomous robot for inventory monitoring according to claim 11, wherein a bounding box is defined based on one of the following: a horizontal space on a shelf occupied by the one or more facings of the same product, and a vertical space spanning a distance between a current shelf and a shelf above the current shelf or a top of the same product sensed by a depth sensor mounted on the autonomous robot; a width and a height of the same product; or one or more training classifiers.
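Claims 7 and 18 recite an iterative association that pairs the outermost shelf labels with the outermost facing groups and works inward. A minimal sketch of that pairing logic, assuming equal counts of labels and facing groups and hypothetical dict-based inputs keyed by horizontal position, could be:

```python
def associate_labels_to_facings(labels, facing_groups):
    """Pair shelf labels with facing groups by mapping the leftmost label
    to the leftmost group and the rightmost label to the rightmost group,
    then working inward (sketch of claims 7/18; assumes equal counts)."""
    labels = sorted(labels, key=lambda lab: lab["x"])
    groups = sorted(facing_groups, key=lambda grp: grp["x"])
    if len(labels) != len(groups):
        raise ValueError("label/facing-group counts differ; fallback needed")
    pairs = {}
    left, right = 0, len(labels) - 1
    while left <= right:
        pairs[groups[left]["id"]] = labels[left]["id"]    # outermost left
        pairs[groups[right]["id"]] = labels[right]["id"]  # outermost right
        left += 1
        right -= 1
    return pairs
```

Real shelves can have mismatched counts (missing labels, unlabeled facings), which is why the sketch raises rather than guessing; the patent does not prescribe a specific fallback.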

Description

RELATED APPLICATION

This application is a continuation application of U.S. application Ser. No. 16/044,178, filed on Jul. 24, 2018, which claims the benefit of U.S. Provisional Application Ser. No. 62/536,793, filed Jul. 25, 2017, the entire contents of the applications above being incorporated herein by reference for all purposes.

TECHNICAL FIELD

The present disclosure relates generally to a multiple camera sensor suite capable of accurately monitoring retail or warehouse products. In certain embodiments, the multiple camera sensor suite can be mounted on an autonomous robot and include onboard processing to provide near real time product tracking.

BACKGROUND

Retail stores or warehouses can have thousands of distinct products that are often sold, removed, added, or repositioned. Even with frequent restocking schedules, products assumed to be in stock may be out of stock, decreasing both sales and customer satisfaction. Point of sales data can be used to roughly estimate product availability, but it does not help with identifying misplaced, stolen, or damaged products, all of which can reduce product availability. Manually monitoring product inventory and tracking product position, however, is expensive and time consuming. One solution for tracking product inventory relies on planograms (lists or diagrams that show how and where specific products should be placed on shelves or displays) in combination with machine vision technology. Given a planogram, machine vision can be used to assist in shelf space compliance. For example, large numbers of fixed position cameras can be used throughout a store to monitor aisles, with large gaps in shelf space being checkable against the planogram or shelf labels and flagged as "out of stock" if necessary. Alternatively, a smaller number of movable cameras can be used to scan a store aisle.
Even with such systems, human intervention is generally required to build an initial planogram that includes detailed information relative to a bounding box, such as product identification, placement, and count. Substantial human intervention can also be required to update the planogram, as well as to search for misplaced product inventory.

SUMMARY

A low cost, accurate, and scalable camera system for product or other inventory monitoring can include a movable base. Multiple cameras supported by the movable base are directable toward shelves or other systems for holding products or inventory. A processing module is connected to the multiple cameras and able to construct, from the camera-derived images, an updateable map of product or inventory position. In some embodiments, the described camera system for inventory monitoring can be used for detecting shelf labels; optionally comparing shelf labels to a depth map; defining a product bounding box; associating the bounding box with a shelf label to build a training data set; and using the training data set to train a product classifier. In other embodiments, a system for building a product library can include an image capture unit operated to provide images of items. The system also includes a shelf label detector (which can be a high resolution zoomable camera), optionally a depth map creation unit (which can be provided by laser scanning, time-of-flight range sensing, or stereo imaging), and a processing module to optionally compare detected shelf labels to a depth map, define a product bounding box, and associate the bounding box with a shelf label to build a training data set or learn image descriptors. Both the image capture unit and the processing module can be mounted on an autonomous robot. Because it represents reality on the shelf, an inventory map such as disclosed herein can be known as a "realogram," to distinguish it from conventional planograms.
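The two-resolution shelf label detection recited in claims 3 and 14 (find label positions on a cheap low resolution copy, then read label content from full-resolution crops) can be sketched as below. Here `detect_positions` and `read_label` are hypothetical stand-ins for a real label detector and OCR engine, which the patent does not specify:

```python
import numpy as np

def detect_labels_two_stage(image, detect_positions, read_label, scale=4):
    """Two-stage label reading sketch: run position detection on a
    downsampled copy, then map each box back to the full-resolution
    image and read its content. `image` is a 2D grayscale array;
    boxes are (y0, x0, y1, x1) in low-resolution coordinates."""
    # Stage 1: coarse position detection on a low resolution copy
    # (naive strided subsampling stands in for proper downscaling).
    low_res = image[::scale, ::scale]
    boxes_low = detect_positions(low_res)
    # Stage 2: scale boxes back up and read content from sharp crops.
    results = []
    for (y0, x0, y1, x1) in boxes_low:
        box_full = (y0 * scale, x0 * scale, y1 * scale, x1 * scale)
        crop = image[box_full[0]:box_full[2], box_full[1]:box_full[3]]
        results.append((box_full, read_label(crop)))
    return results
```

The data-reduction benefit is that only small high resolution crops, rather than whole panoramas, need to be processed (or transmitted) for content reading.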
Realograms can be locally stored with a data storage module connected to the processing module. A communication module can be connected to the processing module to transfer realogram data to remote locations, including servers or other supported camera systems, and optionally to receive inventory information, including planograms, to aid in realogram construction. In addition to realogram mapping, this system can be used to detect out-of-stock products; estimate depleted products; estimate the amount of products, including in stacked piles; estimate product heights, lengths, and widths; build 3D models of products; determine product positions and orientations; determine whether one or more products are in a disorganized on-shelf presentation that requires corrective action such as facing or zoning operations; estimate the freshness of products such as produce; estimate the quality of products, including packaging integrity; locate products, including at home locations, secondary locations, top stock, bottom stock, and in the backroom; detect a misplaced product event (also known as a plug); identify misplaced products; estimate or count the number of product facings; and compare the number of product facings to the planogram.