CN-116277072-B - Task object processing method and system, camera equipment and mobile robot

CN116277072B

Abstract

Embodiments of this application provide a task object processing method, a task object processing system, an image pickup device, and a mobile robot, applied in the field of robot control technology. The method is applied to an image pickup device and comprises: collecting image data of a target scene as first image data; detecting whether the first image data contains a target task object, the target task object being a task object of a mobile robot deployed in the target scene; when the target task object is detected in the first image data, determining position information of the target task object in the target scene; and sending a task execution request carrying the position information to the mobile robot, so that the mobile robot moves to the target task object according to the position information and executes the target task. With this scheme, task objects appearing in the target scene can be processed in a timely manner.

Inventors

  • LOU YANYANG

Assignees

  • Hangzhou Ezviz Software Co., Ltd. (杭州萤石软件有限公司)

Dates

Publication Date
2026-05-05
Application Date
2023-05-04

Claims (18)

  1. A task object processing method, applied to an image pickup apparatus deployed within a target scene, the method comprising: collecting image data of the target scene as first image data; detecting whether the first image data contains a target task object, wherein the target task object is a task object of a mobile robot deployed in the target scene; determining a first position of the target task object in the first image data when the target task object is detected from the first image data; determining whether the first position is located in a target area corresponding to the mobile robot, wherein the manner of determining the target area corresponding to the mobile robot comprises: acquiring image data collected by the image pickup apparatus for the target scene as second image data, performing semantic segmentation on the second image data to obtain semantic categories of different image areas in the second image data, and determining, from the image areas of the second image data, an image area whose semantic category is a specified semantic category as the target area corresponding to the mobile robot, wherein the specified semantic category is a preset semantic category of a task execution area of the mobile robot; if the first position is located in the target area, determining a second position to which the first position maps in a world coordinate system based on a first mapping relation between a pixel coordinate system corresponding to the image pickup apparatus and the world coordinate system in which the mobile robot is located, the second position serving as position information of the target task object in the target scene; and sending a task execution request carrying the position information to the mobile robot, so that the mobile robot moves to the target task object according to the position information and executes the target task.
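The two checks at the heart of claim 1 — testing whether the detected first position falls inside the target area, and mapping it into the robot's world coordinate system via the first mapping relation — could be sketched as follows. This is a minimal illustration, not the patented implementation: the homography matrix `H`, the image size, and the mask layout are all hypothetical stand-ins, and the mapping is modeled as a planar homography (a common choice for a fixed camera observing a ground plane).

```python
import numpy as np

def pixel_to_world(pixel_xy, H):
    """Map a pixel position to world coordinates using a pre-established
    3x3 ground-plane homography H (playing the role of the 'first mapping
    relation' between the camera's pixel coordinate system and the world
    coordinate system in which the robot moves)."""
    u, v = pixel_xy
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]  # perspective divide

def position_in_target_area(pixel_xy, area_mask):
    """Check whether the first position falls inside the target area,
    with the area represented as a boolean mask over the image."""
    u, v = int(round(pixel_xy[0])), int(round(pixel_xy[1]))
    h, w = area_mask.shape
    return 0 <= v < h and 0 <= u < w and bool(area_mask[v, u])

# Hypothetical homography: 1 pixel = 1 cm, with a fixed world offset.
H = np.array([[0.01, 0.0, -1.0],
              [0.0, 0.01, -2.0],
              [0.0, 0.0, 1.0]])
mask = np.zeros((480, 640), dtype=bool)
mask[100:400, 100:600] = True          # the "target area" region

pos = (320.0, 240.0)                   # detected first position
inside = position_in_target_area(pos, mask)
world_xy = pixel_to_world(pos, H)      # second position, sent to the robot
```

Only when `inside` is true would the task execution request carrying `world_xy` be sent, matching the gating in the claim.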
  2. The method according to claim 1, wherein after determining, from the image areas of the second image data, an image area whose semantic category is the specified semantic category as the target area corresponding to the mobile robot, the method further comprises: during task execution area inspection by the mobile robot, detecting the mobile robot and/or a task execution end of the mobile robot in real time with the image pickup apparatus, so as to determine the image area of the mobile robot and/or the task execution end of the mobile robot in third image data acquired by the image pickup apparatus as a detection area; and adjusting the target area corresponding to the mobile robot based on the detection area.
  3. The method of claim 2, wherein adjusting the target area corresponding to the mobile robot based on the detection area comprises: merging the detection area into the target area corresponding to the mobile robot to obtain an adjusted target area; and/or merging the region obtained by expanding the detection area into the target area corresponding to the mobile robot to obtain an adjusted target area.
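The adjustment in claim 3 — expanding the detection area and merging it into the target area — could look like the sketch below, with both areas held as boolean masks. The square dilation built from shifted copies is an illustrative stand-in for a proper morphological operator, and the mask sizes are arbitrary.

```python
import numpy as np

def dilate(mask, r=1):
    """Expand a boolean region mask by r pixels (simple square dilation
    assembled from shifted copies; note np.roll wraps at image borders,
    which a real morphology routine would not)."""
    out = mask.copy()
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def adjust_target_area(target_area, detection_area, expand=1):
    """Merge the expanded detection area into the target area,
    yielding the adjusted target area as in claim 3."""
    return target_area | dilate(detection_area, expand)

target = np.zeros((10, 10), dtype=bool)
target[0:3, 0:3] = True                # original target area
det = np.zeros((10, 10), dtype=bool)
det[6, 6] = True                       # robot detected here
adjusted = adjust_target_area(target, det, expand=1)
```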
  4. The method of claim 1, wherein the imaging parameters of the image pickup apparatus are adjustable, the imaging parameters comprising at least one of a rotation parameter, a movement parameter, and a zoom parameter; the pixel coordinate system corresponding to the image pickup apparatus is the pixel coordinate system of fourth image data, the fourth image data being image data acquired when the image pickup apparatus is at a preset imaging parameter; and determining, based on the first mapping relation between the pixel coordinate system corresponding to the image pickup apparatus and the world coordinate system in which the mobile robot is located, the second position to which the first position maps in the world coordinate system comprises: performing feature point matching on the first image data and the fourth image data to obtain feature points successfully matched between the first image data and the fourth image data as target feature points; determining a third position of the target feature points in the first image data and a fourth position in the fourth image data; determining a second mapping relation between the pixel coordinate system of the first image data and the pixel coordinate system of the fourth image data based on the third position and the fourth position; determining, according to the second mapping relation, the position to which the first position maps in the pixel coordinate system of the fourth image data as a fifth position; and determining, according to the pre-established first mapping relation between the pixel coordinate system of the fourth image data and the world coordinate system in which the mobile robot is located, the position to which the fifth position maps in the world coordinate system as the second position to which the first position maps in the world coordinate system.
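Recovering a mapping between two pixel coordinate systems from matched feature points, as claim 4 describes for the "second mapping relation", is commonly done by fitting a homography to the correspondences. The direct linear transform below is a minimal, assumption-laden sketch of that step (the patent does not specify the fitting method); in practice a library routine such as OpenCV's `cv2.findHomography` with outlier rejection would be used on real feature matches.

```python
import numpy as np

def fit_homography(src_pts, dst_pts):
    """Estimate the 3x3 homography mapping src_pts -> dst_pts from at
    least four matched feature points, via the direct linear transform:
    stack two linear constraints per correspondence and take the SVD
    null-space vector as the flattened homography."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Map a point through a homography (with perspective divide)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Synthetic matches: third positions (first image) vs fourth positions
# (fourth image), generated here from a known transform for checking.
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
dst = [(3.0, 4.0), (5.0, 4.0), (3.0, 6.0), (5.0, 6.0)]
H2 = fit_homography(src, dst)          # the "second mapping relation"
fifth = apply_h(H2, (2.0, 3.0))        # a first position mapped across
```

Chaining `H2` with the pre-established first mapping relation of the fourth image data then yields the world-coordinate second position, as in the claim.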
  5. The method according to claim 1, wherein establishing the first mapping relation between the pixel coordinate system corresponding to the image pickup apparatus and the world coordinate system in which the mobile robot is located comprises: acquiring a plurality of pieces of fifth image data collected by the image pickup apparatus for the target scene, wherein each piece of fifth image data contains the mobile robot and/or a task execution end of the mobile robot; for each piece of fifth image data, determining the position of the mobile robot and/or the task execution end in that fifth image data as a sixth position corresponding to that fifth image data, and sending a position acquisition request carrying the acquisition moment of that fifth image data to the mobile robot, so as to determine the position of the mobile robot and/or the task execution end in the world coordinate system at the acquisition moment as a seventh position corresponding to that fifth image data; and determining the first mapping relation between the pixel coordinate system corresponding to the image pickup apparatus and the world coordinate system based on the sixth position and the seventh position corresponding to each piece of fifth image data.
  6. The method of claim 1, wherein the mobile robot has a plurality of task processing modes, each task processing mode being for processing task objects of at least one object type; before sending the task execution request carrying the position information to the mobile robot, the method further comprises: determining the object type of the target task object as a target object type; and sending the task execution request carrying the position information to the mobile robot comprises: sending a task execution request carrying the position information and the target object type to the mobile robot, so that the mobile robot processes the task object at the position information according to the task processing mode corresponding to the target object type.
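The type-to-mode dispatch of claim 6 amounts to a lookup table on the robot side, keyed by the object type carried in the request. The sketch below uses entirely hypothetical type and mode names (the patent names neither); only the structure — request carries position plus type, robot selects a mode from the type — comes from the claim.

```python
# Hypothetical mapping from object types to task processing modes.
TASK_MODES = {
    "liquid_spill": "mop",
    "dry_debris": "vacuum",
    "large_object": "notify_user",
}

def build_task_request(position, object_type):
    """Camera side: build a task execution request carrying both the
    position information and the target object type."""
    return {"position": position, "object_type": object_type}

def select_mode(request):
    """Robot side: pick the task processing mode corresponding to the
    object type carried in the request (with a fallback mode assumed
    here for unknown types)."""
    return TASK_MODES.get(request["object_type"], "default_clean")

req = build_task_request((2.2, 0.4), "liquid_spill")
mode = select_mode(req)
```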
  7. The method of claim 1, wherein there are a plurality of mobile robots, and sending a task execution request for the target task object to the mobile robot comprises: selecting, from the plurality of mobile robots, a mobile robot capable of processing the target task object as a target mobile robot, wherein a mobile robot capable of processing the target task object comprises a mobile robot whose processable object types include the object type of the target task object and/or a mobile robot whose task area in the target scene contains the position of the target task object; and controlling the target mobile robot to process the target task object.
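Claim 7's selection among several robots filters on two conditions: the robot's processable object types include the target type, and its task area contains the target position. A minimal sketch, with task areas simplified to axis-aligned boxes (an assumption — the patent allows arbitrary task areas) and hypothetical robot names:

```python
from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    supported_types: set
    area: tuple  # task area as (xmin, ymin, xmax, ymax), a simplification

def can_handle(robot, object_type, position):
    """Both selection conditions from claim 7: type is processable and
    the task area contains the position."""
    x, y = position
    xmin, ymin, xmax, ymax = robot.area
    return (object_type in robot.supported_types
            and xmin <= x <= xmax and ymin <= y <= ymax)

def select_target_robot(robots, object_type, position):
    """Return the first deployed robot able to process the target task
    object, or None if no robot qualifies."""
    for r in robots:
        if can_handle(r, object_type, position):
            return r
    return None

fleet = [Robot("A", {"dry_debris"}, (0, 0, 5, 5)),
         Robot("B", {"liquid_spill"}, (0, 0, 10, 10))]
```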
  8. The method according to claim 1, further comprising: when no task object of the mobile robot is detected from the first image data, or when the mobile robot has finished processing the target task object, returning to the step of collecting image data of the target scene as the first image data.
  9. A task object processing method, applied to a mobile robot, the method comprising: receiving a task execution request carrying position information sent by an image pickup apparatus deployed in a target scene, wherein the mobile robot is deployed in the target scene, the position information is position information, in the target scene, of a target task object, the target task object is a task object of the mobile robot detected in first image data, and the first image data is image data of the target scene collected by the image pickup apparatus; wherein the position information is obtained by the image pickup apparatus by: determining a first position of the target task object in the first image data when the target task object is detected from the first image data; determining whether the first position is located in a target area corresponding to the mobile robot, the manner of determining the target area corresponding to the mobile robot comprising acquiring image data collected by the image pickup apparatus for the target scene as second image data, performing semantic segmentation on the second image data to obtain semantic categories of different image areas in the second image data, and determining, from the image areas of the second image data, an image area whose semantic category is a specified semantic category as the target area corresponding to the mobile robot, the specified semantic category being a preset semantic category of a task execution area of the mobile robot; and, if the first position is located in the target area, determining a second position to which the first position maps in a world coordinate system based on a first mapping relation between a pixel coordinate system corresponding to the image pickup apparatus and the world coordinate system in which the mobile robot is located, the second position serving as the position information of the target task object in the target scene; and moving to the target task object according to the position information to execute a target task.
  10. The method of claim 9, wherein moving to the target task object according to the position information to execute the target task comprises: performing path planning based on the position information to obtain a path to be moved; and moving according to the path to be moved, and processing the task object at the position information after the movement is finished.
  11. The method of claim 10, wherein the mobile robot has a plurality of task processing modes, each task processing mode being configured to process task objects of at least one object type; the task execution request further carries a target object type, the target object type being the object type of the target task object determined by the image pickup apparatus; and processing the task object at the position information after the movement is finished comprises: after the movement is finished, processing the task object at the position information according to the task processing mode corresponding to the target object type.
  12. The method according to claim 10, wherein performing path planning based on the position information to obtain a path to be moved comprises: if the task execution request is received when the mobile robot is in a standby state, determining a moving path from the current position of the mobile robot to the position information as the path to be moved; and if the task execution request is received when the mobile robot is in a task state, planning a moving path of the mobile robot based on the task state and the position information, and taking that moving path as the path to be moved.
  13. The method of claim 12, wherein the task state comprises at least one of a global task state and a burst task state, the global task state being a state in which the mobile robot performs global task processing, and the burst task state being a state in which the mobile robot executes a task sent by the image pickup apparatus; and planning a moving path of the mobile robot based on the task state and the position information, if the task execution request is received when the mobile robot is in a task state, comprises: if the task execution request is received when the mobile robot is in the global task state, determining whether the position information is in an already-executed task area; if so, immediately determining a moving path from the current position of the mobile robot to the position information as the path to be moved, or determining, after the global task execution is finished, a moving path from the position of the global task to the position information as the path to be moved; otherwise, taking the moving path of the global task as the path to be moved; and if the task execution request is received when the mobile robot is in the burst task state, determining, in the case that the mobile robot is still moving to the task object corresponding to the burst task state, a moving path from the current position of the mobile robot through the position of the task object corresponding to the burst task state to the position information as the path to be moved, and determining, while the mobile robot is processing the task object corresponding to the burst task state, a moving path from the current position of the mobile robot to the position information as the path to be moved.
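The state-dependent planning of claims 12-13 can be condensed into a small decision function: standby robots go directly, robots in the global task state interrupt only inside already-executed areas, and robots in the burst task state append the new target after the current burst object. This is a simplified reading with paths modeled as waypoint lists; the state names and the `None` convention ("keep the current plan") are assumptions of this sketch.

```python
def plan_on_request(state, current_pos, target_pos,
                    in_executed_area=False, burst_pos=None):
    """Decide the path to be moved on receiving a task execution request,
    depending on the robot's current state (simplified from claims 12-13).
    Returns a waypoint list, or None to keep the current plan."""
    if state == "standby":
        return [current_pos, target_pos]      # move directly to the target
    if state == "global_task":
        if in_executed_area:
            return [current_pos, target_pos]  # interrupt and go immediately
        return None                           # finish the global task first
    if state == "burst_task":
        # Still en route to the current burst object: visit it first,
        # then continue to the newly requested position.
        return [current_pos, burst_pos, target_pos]
    raise ValueError(f"unknown state: {state}")
```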
  14. The method according to claim 10, wherein performing path planning based on the position information to obtain a path to be moved comprises: writing the position information into a position queue; and planning a moving path of the mobile robot based on at least one position contained in the position queue, and taking that moving path as the path to be moved.
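Claim 14's position queue could be realized as below: incoming positions are appended, and planning drains the queue into a path. The greedy nearest-neighbour ordering is one simple way to turn queued positions into a path, chosen here for illustration; the patent does not prescribe an ordering.

```python
from collections import deque
import math

position_queue = deque()

def enqueue_position(pos):
    """Write incoming position information into the position queue."""
    position_queue.append(pos)

def plan_path(start):
    """Plan a moving path over all queued positions by repeatedly
    visiting the nearest remaining position, draining the queue."""
    remaining = list(position_queue)
    position_queue.clear()
    path, cur = [start], start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(cur, p))
        remaining.remove(nxt)
        path.append(nxt)
        cur = nxt
    return path

enqueue_position((5.0, 0.0))
enqueue_position((1.0, 0.0))
enqueue_position((3.0, 0.0))
path = plan_path((0.0, 0.0))  # nearest positions are visited first
```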
  15. A task object processing system, comprising an image pickup apparatus and a mobile robot; wherein the image pickup apparatus is deployed in a target scene and is configured to: collect image data of the target scene as first image data; detect whether the first image data contains a target task object, the target task object being a task object of the mobile robot deployed in the target scene; determine a first position of the target task object in the first image data when the target task object is detected from the first image data; determine whether the first position is located in a target area corresponding to the mobile robot, wherein the target area corresponding to the mobile robot is the area, in the first image data, corresponding to a task area of the mobile robot in the target scene, and the manner of determining the target area corresponding to the mobile robot comprises acquiring image data collected by the image pickup apparatus for the target scene as second image data, performing semantic segmentation on the second image data to obtain semantic categories of different image areas in the second image data, and determining, from the image areas of the second image data, an image area whose semantic category is a specified semantic category as the target area corresponding to the mobile robot, the specified semantic category being a preset semantic category of a task execution area of the mobile robot; if the first position is located in the target area, determine a second position to which the first position maps in a world coordinate system based on a first mapping relation between a pixel coordinate system corresponding to the image pickup apparatus and the world coordinate system in which the mobile robot is located, the second position serving as position information of the target task object in the target scene; and send a task execution request carrying the position information to the mobile robot; and the mobile robot is configured to receive the task execution request sent by the image pickup apparatus and move to the target task object according to the position information to execute a target task.
  16. An image pickup apparatus, comprising a camera, a processor, and a machine-readable storage medium; wherein the camera is configured to collect image data of a target scene; and the machine-readable storage medium stores machine-executable instructions executable by the processor to cause the processor to implement the method of any one of claims 1-8.
  17. A mobile robot, comprising a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor to cause the processor to implement the method of any one of claims 9-14.
  18. A computer-readable storage medium, wherein the computer-readable storage medium has stored therein a computer program which, when executed by a processor, implements the method of any one of claims 1-8 or 9-14.

Description

Task object processing method and system, camera equipment and mobile robot

Technical Field

The present application relates to the field of robot control technologies, and in particular to a task object processing method, a task object processing system, an image pickup apparatus, and a mobile robot.

Background

A mobile robot is a movable robot for performing a task, such as a sweeping robot or a household robot. In the related art, the time and place at which a mobile robot executes tasks are mostly specified by a user; for example, the user sets the sweeping robot to clean the whole area or a specified area at 8 o'clock every day, so that the sweeping robot cleans the whole area or the specified area at 8 o'clock every day. However, outside the user-specified time period, when garbage appears in the area, the sweeping robot can only wait for the next cleaning cycle to clean it. Therefore, the mobile robot in the related art cannot respond in time when a sudden task appears in the target scene.

Disclosure of Invention

The aim of the embodiments of the present application is to provide a task object processing method, a task object processing system, an image pickup apparatus, and a mobile robot, so as to timely process task objects existing in a target scene.
The specific technical scheme is as follows. In a first aspect, an embodiment of the present application provides a task object processing method applied to an image pickup apparatus deployed in a target scene, the method comprising: collecting image data of the target scene as first image data; detecting whether the first image data contains a target task object, wherein the target task object is a task object of a mobile robot deployed in the target scene; and, when the target task object is detected from the first image data, determining position information of the target task object in the target scene and sending a task execution request carrying the position information to the mobile robot, so that the mobile robot moves to the target task object according to the position information and executes a target task. Optionally, determining the position information of the target task object in the target scene includes: determining a first position of the target task object in the first image data; and determining a second position to which the first position maps in the world coordinate system as the position information of the target task object in the target scene, based on a pre-established first mapping relation between a pixel coordinate system corresponding to the image pickup apparatus and the world coordinate system in which the mobile robot is located.
Optionally, after determining the first position of the target task object in the first image data, the method further includes: determining whether the first position is located in a target area corresponding to the mobile robot, wherein the target area corresponding to the mobile robot is the area, in the first image data, corresponding to a task area of the mobile robot in the target scene; and, if the first position is located in the target area, executing the step of determining the second position to which the first position maps in the world coordinate system based on the pre-established first mapping relation between the pixel coordinate system corresponding to the image pickup apparatus and the world coordinate system in which the mobile robot is located. Optionally, determining the target area corresponding to the mobile robot includes: acquiring image data collected by the image pickup apparatus for the target scene as second image data; performing semantic segmentation on the second image data to obtain semantic categories of different image areas in the second image data; and determining, from the image areas of the second image data, an image area whose semantic category is a specified semantic category as the target area corresponding to the mobile robot, wherein the specified semantic category is a preset semantic category of a task execution area of the mobile robot.
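Deriving the target area from a semantic segmentation result, as described above, reduces to selecting the pixels labeled with the specified semantic category. A minimal sketch, where the segmentation map and the class id (`FLOOR = 1`) are hypothetical stand-ins for the output of an actual segmentation model:

```python
import numpy as np

FLOOR = 1  # hypothetical label id for the robot's task execution category

def target_area_from_segmentation(seg_map, specified_class=FLOOR):
    """Derive the target area from a per-pixel semantic segmentation map:
    keep the image region whose semantic category is the specified
    category (e.g. floor, the robot's task execution area)."""
    return seg_map == specified_class

# Toy segmentation: top rows labeled 2 ('furniture'), bottom rows floor.
seg = np.full((4, 6), 2)
seg[2:, :] = FLOOR
area = target_area_from_segmentation(seg)  # boolean target-area mask
```

Detected first positions would then be tested against `area` before being mapped to world coordinates and dispatched to the robot.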
Optionally, after determining, from the image areas of the second image data, an image area whose semantic category is the specified semantic category as the target area corresponding to the mobile robot, the method further includes: during task execution area inspection by the mobile robot, detecting the mobile robot and/or a task execution end of the mobile robot in real time with the image pickup apparatus, so as to determine the image area of the mobile robot and/or the task execution end of the mobile robot in third image data acquired by the image pickup apparatus as a detection area; and adjusting the target area corresponding to the mobile robot based on the detection area. Optionally, adjusting the target area corresponding to the mobile robot based on the detection area includes: the detection area is integ