CN-116709962-B - Sundry cleaning robot system
Abstract
The robot navigates the environment using a camera and maps the type, size, and position of each object. The system classifies each object and associates it with a particular container. For each object type that has a corresponding container, the robot selects a particular object of that type to pick up, performs path planning, and navigates to that object to gather it. An actuated pusher arm moves other objects out of the way and maneuvers the target object onto the front bucket for handling.
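The workflow in the abstract amounts to a per-category control loop. The following is a minimal, hypothetical Python sketch of that loop; the `DetectedObject` fields and every method on the assumed `robot` object are illustrative only, since the patent does not define a software API.

```python
# Hypothetical sketch of the tidy-up loop described in the abstract.
# All class and method names are illustrative assumptions, not part of the patent.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    category: str        # e.g. "toy", "clothing"
    size: tuple          # (width, depth, height) in metres
    position: tuple      # (x, y) in the global area map

def tidy_environment(robot, containers_by_category):
    """Map the room, then clear objects one category at a time."""
    objects = robot.map_environment()            # camera-based survey of type/size/position
    for category, container in containers_by_category.items():
        targets = [o for o in objects if o.category == category]
        while targets:
            target = robot.select_next(targets)              # choose one object in this category
            path = robot.plan_path(robot.pose, target.position)
            robot.follow_path(path)                          # drive to a point adjacent to it
            robot.manipulator.sweep_obstacles()              # pusher arms move other objects away
            robot.manipulator.scoop_onto_bucket(target)      # manoeuvre the target onto the bucket
            targets.remove(target)
        robot.deposit_bucket_into(container)                 # empty the bucket into the matching container
```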
Inventors
- J.D. Hamilton
- K. F. Wolf
- J. A. Bannister Sutton
- B. J. Frizel
Assignees
- Clutterbot Inc. (克拉特博特股份有限公司)
Dates
- Publication Date: 2026-05-05
- Application Date: 2021-11-30
- Priority Date: 2020-11-30
Claims (20)
- 1. A method of operating a robot, comprising: associating each of a plurality of object categories of objects in an environment with a corresponding container located in the environment; activating the robot at a base station; navigating the robot around the environment using a camera to map the type, size, and location of the objects; for each object category: selecting one or more objects in the category to pick up; performing path planning from a current position of the robot to the one or more objects to be picked up; navigating to a point adjacent to the one or more objects to be picked up; actuating a manipulator, coupled so as to open and close across a front of a bucket at a front end of the robot, to remove an obstacle and maneuver the one or more objects onto the bucket, wherein the manipulator comprises a first manipulator member coupled to a first manipulator pivot point adjacent a first front side of the bucket and a second manipulator member coupled to a second manipulator pivot point adjacent a second front side of the bucket, and wherein the first manipulator pivot point allows the first manipulator member to open and close across the first front side of the bucket and the second manipulator pivot point allows the second manipulator member to open and close across the second front side of the bucket; tilting or raising the bucket and actuating one or both of the manipulator members to hold the objects in the bucket; navigating the robot to a point adjacent to the container corresponding to the category; aligning a rear end of the robot with a side of the corresponding container; and raising the bucket above the robot and toward the rear end of the robot along a path that arcs over the chassis of the robot from the front end of the robot to the rear end of the robot, to store the held objects in the corresponding container. (Illustrative sketches of this pick-up sequence and of one possible path planner follow the claims.)
- 2. The method according to claim 1, further comprising: operating the robot to group the objects in the environment into clusters, wherein each cluster comprises only objects from one of the categories.
- 3. The method according to claim 1, further comprising: operating at least one first arm to actuate the manipulator of the robot to remove an obstacle and maneuver the one or more objects onto the bucket; and operating at least one second arm to tilt or raise the bucket.
- 4. The method of claim 1, wherein each first arm is paired with a corresponding second arm, and further comprising: operating each pair of first and second arms from a common starting pivot point.
- 5. The method of claim 1, wherein actuating the manipulator of the robot to remove an obstacle comprises actuating the manipulator to form a wedge in front of the bucket.
- 6. The method of claim 5, wherein actuating the manipulator to retain the object in the bucket comprises actuating the manipulator to form a barrier in front of the bucket.
- 7. The method according to claim 1, further comprising: operating a neural network to determine the type, size, and location of the objects from images from the camera.
- 8. The method according to claim 1, further comprising: generating scale-invariant keypoints within a collation area of the environment based on inputs from left and right cameras; detecting positions of the objects in the collation area based on the inputs from the left and right cameras, thereby defining starting positions; classifying the objects into the categories; generating re-identification fingerprints of the objects, wherein the re-identification fingerprints are used for determining visual similarity between the objects; positioning the robot within the collation area, based on input from at least one of the left camera, the right camera, a light detection and ranging (LIDAR) sensor, and an inertial measurement unit (IMU) sensor, to determine a robot position; mapping the collation area to create a global area map, the global area map including the scale-invariant keypoints, the objects, and the starting positions; and re-identifying an object based on at least one of its starting position, its category, and its re-identification fingerprint. (An illustrative sketch of this mapping and re-identification pipeline follows the claims.)
- 9. The method of claim 8, further comprising: assigning a persistent unique identifier to the object; receiving a camera frame from an augmented reality robot interface installed as an application on a mobile device; updating the global area map with the starting positions and the scale-invariant keypoints using a camera-frame-to-global-area-map transformation based on the camera frame; and generating an indicator for the object, wherein the indicator includes one or more of a next target, a target order, a hazard, oversized, fragile, messy, and a blocked travel path.
- 10. The method according to claim 9, further comprising: transmitting the global area map and object details to the mobile device, wherein the object details include at least one of a visual snapshot of the object, the category, the starting position, the persistent unique identifier, and the indicator; displaying the updated global area map, the objects, the starting positions, the scale-invariant keypoints, and the object details on the mobile device using the augmented reality robot interface; receiving input from the augmented reality robot interface, wherein the input indicates an object property overlay, including changing the object category, a next-step go or no-go, and modifying user indicators; transmitting the object property overlay from the mobile device to the robot; and updating the global area map, the indicator, and the object details based on the object property overlay. (An illustrative sketch of this interface exchange follows the claims.)
- 11. A robotic system, comprising: a robot; a base station; a plurality of containers, each container associated with one or more object categories; a mobile application; and logic for: navigating the robot around an environment comprising a plurality of objects to map the type, size, and position of the objects; for each of the categories: selecting one or more objects in the category to pick up; performing path planning to the objects to be picked up; navigating to a point adjacent to each of said objects to be picked up; actuating a manipulator, coupled so as to open and close across a front of a bucket at a front end of the robot, to remove an obstacle and maneuver the one or more objects onto the bucket, wherein the manipulator comprises a first manipulator member coupled to a first manipulator pivot point adjacent a first front side of the bucket and a second manipulator member coupled to a second manipulator pivot point adjacent a second front side of the bucket, and wherein the first manipulator pivot point allows the first manipulator member to open and close across the first front side of the bucket and the second manipulator pivot point allows the second manipulator member to open and close across the second front side of the bucket; tilting or raising the bucket and actuating one or both of the manipulator members to hold the objects to be picked up in the bucket; navigating the robot to a point adjacent to the container corresponding to the category; aligning a rear end of the robot with a side of the corresponding container; and raising the bucket above the robot and toward the rear end of the robot along a path that arcs over the chassis of the robot from the front end of the robot to the rear end of the robot, to store the held objects in the corresponding container.
- 12. The robotic system of claim 11, further comprising logic for operating the robot to group the objects in the environment into clusters, wherein each cluster includes only objects from one of the categories.
- 13. The robotic system of claim 11, wherein the robot comprises at least one first arm and at least one second arm, the system further comprising: logic for operating the at least one first arm to actuate the manipulator of the robot to remove an obstacle and push the one or more objects onto the bucket, and for operating the at least one second arm to tilt or raise the bucket.
- 14. The robotic system of claim 11, wherein each first arm is paired with a corresponding second arm, and each pair of first and second arms has a common starting pivot point.
- 15. The robotic system of claim 11, further comprising logic for actuating the manipulator of the robot to form a wedge in front of the bucket.
- 16. The robotic system of claim 15, further comprising logic for actuating the manipulator to form a closed barrier in front of the bucket.
- 17. The robotic system of claim 11, further comprising: a neural network configured to determine the type, size, and location of the objects from an image from a camera.
- 18. The robotic system of claim 11, further comprising means for: generating scale-invariant keypoints within a collation area of the environment based on inputs from left and right cameras; detecting positions of the objects in the collation area based on the inputs from the left and right cameras, thereby defining starting positions; classifying the objects into the categories; generating re-identification fingerprints of the objects, wherein the re-identification fingerprints are used for determining visual similarity between the objects; positioning the robot within the collation area to determine a robot position; and mapping the collation area to create a global area map, the global area map including the scale-invariant keypoints, the objects, and the starting positions.
- 19. The robotic system of claim 18, further comprising means for re-identifying an object based on at least one of its starting position, its category, and its re-identification fingerprint.
- 20. The robotic system of claim 19, further comprising means for classifying the objects as one or more of hazardous, oversized, fragile, and messy.
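Claims 1, 5, 6, and 11 together describe a pick-up and deposit sequence: the two manipulator members form a wedge to push obstacles aside, close into a barrier to retain the object, and the bucket is then raised along an arc over the chassis to empty into a container behind the robot. Below is a minimal, hypothetical state sequence; the actuator commands and method names are assumptions, as the claims do not specify control-level details.

```python
from enum import Enum, auto

class ManipulatorPose(Enum):
    OPEN = auto()       # members swung outward, front of the bucket clear
    WEDGE = auto()      # members angled to push obstacles aside (claim 5)
    BARRIER = auto()    # members closed across the bucket front (claim 6)

def pick_up_and_deposit(robot, target, container):
    """Hypothetical pick-up/deposit sequence corresponding to claims 1 and 11."""
    robot.manipulator.set_pose(ManipulatorPose.WEDGE)    # clear obstacles near the target
    robot.drive_forward(0.3)                             # short push so the object slides onto the bucket
    robot.manipulator.set_pose(ManipulatorPose.BARRIER)  # retain the object in the bucket
    robot.bucket.tilt_back()                             # tilt/raise so the object cannot roll out

    robot.navigate_adjacent_to(container)                # drive to the container for this category
    robot.align_rear_with(container)                     # rear end faces the container side

    # Raise the bucket along an arc over the chassis, from the front end to the
    # rear end, so the contents drop into the container behind the robot.
    robot.bucket.arc_over_chassis_and_dump()
    robot.manipulator.set_pose(ManipulatorPose.OPEN)
    robot.bucket.return_to_front()
```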
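The claims require path planning from the robot's current position to a point adjacent to each target, but do not name an algorithm. Grid-based A* is one common choice and is sketched here purely as an assumed example, not as the patent's method.

```python
import heapq

def astar(grid, start, goal):
    """A* over a 2-D occupancy grid (True = blocked). An assumed example planner."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, None)]   # (priority, cost so far, cell, parent)
    came_from, best_cost = {}, {start: 0}

    while open_set:
        _, cost, cell, parent = heapq.heappop(open_set)
        if cell in came_from:
            continue
        came_from[cell] = parent
        if cell == goal:                      # reconstruct the path back to the start
            path = [cell]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                new_cost = cost + 1
                if new_cost < best_cost.get((nr, nc), float("inf")):
                    best_cost[(nr, nc)] = new_cost
                    heapq.heappush(open_set, (new_cost + h((nr, nc)), new_cost, (nr, nc), cell))
    return None                               # no path to the target
```

For example, `astar(grid, (0, 0), (3, 4))` returns a list of grid cells from the robot's cell to the target cell, which the robot could follow to reach a point adjacent to the object.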
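Claims 8 and 18 outline a mapping pipeline: scale-invariant keypoints from left and right cameras, per-object starting positions and categories, a re-identification fingerprint for visual similarity, and a global area map that holds all of it. The sketch below uses OpenCV's SIFT detector as one example of a scale-invariant keypoint generator and a colour histogram as a stand-in re-identification fingerprint; both choices, and the map's dictionary layout, are assumptions for illustration only.

```python
import cv2
import numpy as np

def build_area_map(left_img, right_img, detector=None):
    """Hypothetical mapping step for claims 8/18: scale-invariant keypoints plus
    per-object records in a 'global area map' dictionary. Images are BGR arrays."""
    detector = detector or cv2.SIFT_create()       # one possible scale-invariant detector
    kp_l, desc_l = detector.detectAndCompute(left_img, None)
    kp_r, desc_r = detector.detectAndCompute(right_img, None)
    return {
        "keypoints": {"left": desc_l, "right": desc_r},
        "objects": [],                             # filled in by the object detector
    }

def reid_fingerprint(object_crop):
    """Toy appearance fingerprint: a normalised colour histogram. The claims only
    require 'visual similarity'; this is an assumed stand-in, not the patent's method."""
    hist = cv2.calcHist([object_crop], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def is_same_object(fp_a, fp_b, threshold=0.8):
    """Re-identify by histogram correlation; the threshold is an assumed tuning value."""
    score = cv2.compareHist(fp_a.astype(np.float32),
                            fp_b.astype(np.float32),
                            cv2.HISTCMP_CORREL)
    return score >= threshold
```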
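Claims 9 and 10 describe the exchange with the augmented reality robot interface: the robot sends object details (snapshot, category, starting position, persistent identifier, indicators), and the mobile device returns an object property overlay with user edits. The payload shapes below are hypothetical; the claims list only the kinds of information carried, not field names or formats.

```python
import json

# Hypothetical object details sent to the mobile device (claim 9/10). Field names are assumed.
object_details = {
    "persistent_id": "obj-0042",           # persistent unique identifier
    "category": "toy",
    "start_position": [1.2, 0.4],          # metres in the global area map frame
    "snapshot": "snapshots/obj-0042.jpg",  # visual snapshot of the object
    "indicators": ["next_target", "fragile"],
}

# A hypothetical object property overlay coming back from the mobile device (claim 10):
# the user changes the category and marks the object no-go for the next pass.
property_overlay = {
    "persistent_id": "obj-0042",
    "category": "clothing",
    "go_no_go": "no_go",
    "indicators": ["fragile"],
}

def apply_overlay(global_area_map, overlay):
    """Update the matching object record in the global area map with the user's edits."""
    for obj in global_area_map["objects"]:
        if obj["persistent_id"] == overlay["persistent_id"]:
            obj.update({k: v for k, v in overlay.items() if k != "persistent_id"})
            return obj
    return None

print(json.dumps(property_overlay, indent=2))   # e.g. what the app would transmit to the robot
```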
Description
Sundry Cleaning Robot System

Cross Reference to Related Applications

The present application claims the priority and benefit under 35 U.S.C. § 119(e) of U.S. Application Ser. No. 63/119,533, entitled "Debris Removal Robot System," filed November 30, 2020, the entire contents of which are incorporated herein by reference. The present application also claims the priority and benefit under 35 U.S.C. § 119(e) of U.S. Application Ser. No. 63/253,867, entitled "Augmented Reality Robot Interface," filed October 8, 2021, the entire contents of which are incorporated herein by reference.

Background

Objects underfoot are not only unpleasant but also a safety hazard: thousands of people fall and are injured in their homes each year. Piles of loose items on the floor can be dangerous, yet many people do not have enough time to deal with clutter in their homes. An automatic cleaning or tidying robot may be an effective solution. While a fully automated tidying robot with basic search capabilities may be sufficient to pick up objects on the floor of any room, a user may wish to control the robot's behavior more finely, along with the classification and destination of particular objects unique to their home. The robot may also need to communicate clearly with the user when tasks are blocked or when it encounters conditions not anticipated in its programming. However, most users are not experts in robotics or artificial intelligence. Thus, there is a need for a way for a user to interact with a tidying robot in an intuitive and powerful manner without relying on in-depth programming knowledge. Such an interaction process may allow a user to train the robot based on their particular tidying requirements, instruct the robot to perform actions outside of a preset routine, and receive instructions when the robot needs assistance to continue or complete an assigned task.

Drawings

To easily identify the discussion of any particular element or act, the most significant digit(s) in a reference number refer to the figure in which that element was first introduced.

Fig. 1 illustrates a robotic system 100 according to one embodiment.
Fig. 2A shows a top view of a robot 200 with a bucket in a downward position and an actuator arm (manipulator) in an open configuration, according to one embodiment.
Fig. 2B shows a perspective view of the robot 200 with the bucket in a downward position and the manipulator in an open configuration, according to one embodiment.
Fig. 2C shows a front view of the robot 200 with the bucket in a downward position and the manipulator in an open configuration, according to one embodiment.
Fig. 2D shows a side view of the robot 200 with the bucket in a downward position and the manipulator in an open configuration, according to one embodiment.
Fig. 2E shows a top view of the robot 200 with the bucket in an up (raised) position and the manipulator in an open configuration, according to one embodiment.
Fig. 2F shows a perspective view of the robot 200 with the bucket in an up (raised) position and the manipulator in an open configuration, according to one embodiment.
Fig. 2G shows a front view of the robot 200 with the bucket in an up (raised) position and the manipulator in an open configuration, according to one embodiment.
Fig. 2H shows a side view of the robot 200 with the bucket in an up (raised) position and the manipulator in an open configuration, according to one embodiment.
Fig. 2I shows a top view of the robot 200 with the bucket in a downward position and the manipulator in a closed configuration, according to one embodiment.
Fig. 2J shows a perspective view of the robot 200 with the bucket in a downward position and the manipulator in a closed configuration, according to one embodiment.
Fig. 2K shows a front view of the robot 200 with the bucket in a downward position and the manipulator in a closed configuration, according to one embodiment.
Fig. 2L shows a side view of the robot 200 with the bucket in a downward position and the manipulator in a closed configuration, according to one embodiment.
Fig. 3 illustrates an aspect of the subject matter in accordance with one embodiment.
Fig. 4A shows a front view of a robot 400 in another embodiment.
Fig. 4B shows a perspective view of the robot 400.
Fig. 4C shows a side view of the robot 400.
Fig. 4D shows the robot 400 with a manipulator in a raised orientation.
Fig. 5 shows a robot 500 according to yet another embodiment.
Figs. 6A to 6D show a robot 600 according to yet another embodiment.
Figs. 7A to 7D illustrate aspects of a robot 700 according to one embodiment.
Fig. 8A illustrates a lowered bucket position and a lowered grip position 800a of the robot 700 according to one embodiment.
Fig. 8B illustrates a lowered bucket position and a raised grip position 800b of the robot 700 according to one embodiment.