US-12619228-B2 - Visual light-based direction to robotic system

US 12619228 B2

Abstract

According to one embodiment, a method, computer system, and computer program product for light-based navigation of a robotic device are provided. The embodiment may include detecting a light beam. The embodiment may include identifying a source location and an endpoint location of the light beam. The endpoint location comprises a location where the light beam intersects a surface. The embodiment may include receiving a voice command to proceed to the endpoint location. The embodiment may include instructing a mobile robotic device to proceed directly to the endpoint location. In response to the mobile robotic device reaching the endpoint location, the embodiment may include instructing the mobile robotic device to perform an activity therein.

Inventors

  • Shailendra Moyal
  • Akash U. Dhoot
  • Shilpa Bhagwatprasad Mittal
  • Sarbajit K Rakshit

Assignees

  • INTERNATIONAL BUSINESS MACHINES CORPORATION

Dates

Publication Date
2026-05-05
Application Date
2022-05-24

Claims (11)

  1. A computer-implemented method, the method comprising: detecting a light beam; identifying a source location and an endpoint location of the light beam, wherein the endpoint location comprises a location where the light beam intersects a surface; identifying at least one other light beam, wherein a source location of the at least one other light beam is separate from the source location of the light beam; identifying an activity area based on a convergence of the at least one other light beam and the light beam, wherein the activity area comprises a location where the endpoint location of the light beam converges with an endpoint location of the at least one other light beam; receiving a voice command to proceed to the activity area; instructing a mobile robotic device to proceed directly to the activity area, wherein a rate of speed at which the mobile robotic device proceeds directly to the activity area is based on a luminance value of the detected light beam; instructing the mobile robotic device to identify a boundary of the activity area based on image analysis of a diameter of the light beam at the endpoint location of the light beam; in response to the mobile robotic device entering the activity area, instructing the mobile robotic device to perform an activity within the boundary of the activity area; creating a three-dimensional (3D) map of the activity area; and displaying the 3D map via a screen of a computing device of a user.
  2. The method of claim 1, further comprising: identifying a movement path of the light beam; receiving a voice command to follow the movement path; and instructing the mobile robotic device to follow the movement path and perform the activity while following the movement path, wherein a speed at which the mobile robotic device follows the movement path of the light beam is based on the luminance value of the detected light beam.
  3. The method of claim 1, wherein a type of the activity is specified based on a color of the detected light beam.
  4. The method of claim 1, wherein the mobile robotic device is selected from a group consisting of an autonomous flying platform, an autonomous marine platform, and a land surface moving robot.
  5. A computer system, the computer system comprising: one or more processors, one or more computer-readable memories, one or more computer-readable tangible storage medium, and program instructions stored on at least one of the one or more tangible storage medium for execution by at least one of the one or more processors via at least one of the one or more memories, wherein the computer system is capable of performing a method comprising: detecting a light beam; identifying a source location and an endpoint location of the light beam, wherein the endpoint location comprises a location where the light beam intersects a surface; identifying at least one other light beam, wherein a source location of the at least one other light beam is separate from the source location of the light beam; identifying an activity area based on a convergence of the at least one other light beam and the light beam, wherein the activity area comprises a location where the endpoint location of the light beam converges with an endpoint location of the at least one other light beam; receiving a voice command to proceed to the activity area; instructing a mobile robotic device to proceed directly to the activity area, wherein a rate of speed at which the mobile robotic device proceeds directly to the activity area is based on a luminance value of the detected light beam; instructing the mobile robotic device to identify a boundary of the activity area based on image analysis of a diameter of the light beam at the endpoint location of the light beam; in response to the mobile robotic device entering the activity area, instructing the mobile robotic device to perform an activity within the boundary of the activity area; creating a three-dimensional (3D) map of the activity area; and displaying the 3D map via a screen of a computing device of a user.
  6. The computer system of claim 5, further comprising: identifying a movement path of the light beam; receiving a voice command to follow the movement path; and instructing the mobile robotic device to follow the movement path and perform the activity while following the movement path, wherein a speed at which the mobile robotic device follows the movement path of the light beam is based on the luminance value of the detected light beam.
  7. The computer system of claim 5, wherein a type of the activity is specified based on a color of the detected light beam.
  8. The computer system of claim 5, wherein the mobile robotic device is selected from a group consisting of an autonomous flying platform, an autonomous marine platform, and a land surface moving robot.
  9. A computer program product, the computer program product comprising: one or more computer-readable tangible storage medium and program instructions stored on at least one of the one or more tangible storage medium, the program instructions executable by a processor capable of performing a method, the method comprising: detecting a light beam; identifying a source location and an endpoint location of the light beam, wherein the endpoint location comprises a location where the light beam intersects a surface; identifying at least one other light beam, wherein a source location of the at least one other light beam is separate from the source location of the light beam; identifying an activity area based on a convergence of the at least one other light beam and the light beam, wherein the activity area comprises a location where the endpoint location of the light beam converges with an endpoint location of the at least one other light beam; receiving a voice command to proceed to the activity area; instructing a mobile robotic device to proceed directly to the activity area, wherein a rate of speed at which the mobile robotic device proceeds directly to the activity area is based on a luminance value of the detected light beam; instructing the mobile robotic device to identify a boundary of the activity area based on image analysis of a diameter of the light beam at the endpoint location of the light beam; in response to the mobile robotic device entering the activity area, instructing the mobile robotic device to perform an activity within the boundary of the activity area; creating a three-dimensional (3D) map of the activity area; and displaying the 3D map via a screen of a computing device of a user.
  10. The computer program product of claim 9, further comprising: identifying a movement path of the light beam; receiving a voice command to follow the movement path; and instructing the mobile robotic device to follow the movement path and perform the activity while following the movement path, wherein a speed at which the mobile robotic device follows the movement path of the light beam is based on the luminance value of the detected light beam.
  11. The computer program product of claim 9, wherein a type of the activity is specified based on a color of the detected light beam.
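The patent contains no source code, but the logic of claim 1 can be illustrated in a short sketch. The sketch below is only a minimal interpretation, assuming 2D coordinates, a normalized luminance value, and a convergence tolerance; the `LightBeam` class, the color-to-activity mapping, and all function names are hypothetical and not drawn from the patent text.

```python
from dataclasses import dataclass
import math

# Hypothetical data model for a detected beam; field names are
# illustrative assumptions, not terms from the patent.
@dataclass
class LightBeam:
    source: tuple         # (x, y) source location
    endpoint: tuple       # (x, y) where the beam intersects a surface
    luminance: float      # normalized luminance in [0.0, 1.0]
    color: str            # e.g. "red", "green"
    spot_diameter: float  # beam diameter at the endpoint, in meters

# Claim 3: activity type keyed by beam color (this mapping is assumed).
ACTIVITY_BY_COLOR = {"red": "vacuum", "green": "mop", "blue": "inspect"}

def activity_area(beams, tolerance=0.25):
    """Claim 1: the activity area is where endpoints of beams from
    separate sources converge. The tolerance radius is an assumption."""
    if len(beams) < 2:
        return None
    cx = sum(b.endpoint[0] for b in beams) / len(beams)
    cy = sum(b.endpoint[1] for b in beams) / len(beams)
    if all(math.dist(b.endpoint, (cx, cy)) <= tolerance for b in beams):
        # Claim 1 derives the area boundary from the beam spot diameter;
        # here we simply take the largest spot radius as the boundary.
        radius = max(b.spot_diameter for b in beams) / 2
        return (cx, cy), radius
    return None

def travel_speed(beam, max_speed=1.0):
    """Claims 1-2: travel speed is based on the beam's luminance value.
    Linear scaling is an assumption; the patent does not specify one."""
    return max_speed * max(0.0, min(1.0, beam.luminance))
```

For example, two beams from separate sources whose endpoints land within the tolerance of each other yield an activity area centered between them, and a half-luminance beam yields half of the maximum travel speed.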

Description

BACKGROUND

The present invention relates generally to the field of computing, and more particularly to robotics. Robotics is an interdisciplinary branch of computer science and engineering which involves design, construction, operation, and application of robots, as well as computer systems for their control, sensory feedback, and information processing. Robotics seeks to design machines (i.e., robots) that can autonomously, or semi-autonomously, perform physical tasks on behalf of a human. These machines may substitute for humans by replicating human actions and may be used in many situations for many purposes. Typically, robots perform tasks which are either highly repetitive or too dangerous for a human to carry out safely. For instance, robots may be used in environments where humans are likely to be harmed or cannot survive. While robots may be constructed in many forms, these machines utilize a variety of sensors, actuators, and data processing techniques to interact with the physical world. A robot may be guided by an external input or control device, may have guidance control embedded within, or may utilize a combination of external and internal inputs for guidance.

SUMMARY

According to one embodiment, a method, computer system, and computer program product for light-based navigation of a robotic device are provided. The embodiment may include detecting a light beam. The embodiment may include identifying a source location and an endpoint location of the light beam. The endpoint location comprises a location where the light beam intersects a surface. The embodiment may include receiving a voice command to proceed to the endpoint location. The embodiment may include instructing a mobile robotic device to proceed directly to the endpoint location. In response to the mobile robotic device reaching the endpoint location, the embodiment may include instructing the mobile robotic device to perform an activity therein.
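The single-beam flow described in the summary (detect a beam, find its endpoint, receive a voice command, drive to the endpoint, act on arrival) can be sketched as a short control routine. This is an illustrative sketch only: the callables `detect_beam` and `hear_command`, the robot interface, and the return values are all assumptions, not part of the patent's disclosure.

```python
# Illustrative single-beam navigation flow from the summary.
# detect_beam() is assumed to return (source, endpoint) or None;
# hear_command() is assumed to return a recognized voice command string;
# robot is assumed to expose move_to() and perform_activity().

def navigate_to_endpoint(detect_beam, hear_command, robot):
    beam = detect_beam()
    if beam is None:
        return "no-beam"                  # nothing to navigate toward
    source, endpoint = beam               # endpoint: where beam meets a surface
    if hear_command() != "proceed":
        return "idle"                     # act only on the voice command
    robot.move_to(endpoint)               # proceed directly to the endpoint
    robot.perform_activity(endpoint)      # perform the activity on arrival
    return "done"
```

A caller would wire in real perception and speech-recognition components for the two callables; the sketch keeps them abstract so the control flow stays visible.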
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

These and other objects, features and advantages of the present invention will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings. The various features of the drawings are not to scale as the illustrations are for clarity in facilitating one skilled in the art in understanding the invention in conjunction with the detailed description. In the drawings:

FIG. 1 illustrates an exemplary networked computer environment according to at least one embodiment.

FIG. 2 illustrates an operational flowchart for directing navigation of a robotic device via a light source in a light-based navigation process according to at least one embodiment.

FIG. 3 is a functional block diagram of internal and external components of computers and servers depicted in FIG. 1 according to at least one embodiment.

FIG. 4 depicts a cloud computing environment according to an embodiment of the present invention.

FIG. 5 depicts abstraction model layers according to an embodiment of the present invention.

DETAILED DESCRIPTION

Detailed embodiments of the claimed structures and methods are disclosed herein; however, it can be understood that the disclosed embodiments are merely illustrative of the claimed structures and methods that may be embodied in various forms. This invention may, however, be embodied in many different forms and should not be construed as limited to the exemplary embodiments set forth herein. In the description, details of well-known features and techniques may be omitted to avoid unnecessarily obscuring the presented embodiments. It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a component surface” includes reference to one or more of such surfaces unless the context clearly dictates otherwise.
Embodiments of the present invention relate generally to the field of computing, and more particularly to robotics. The following described exemplary embodiments provide a system, method, and program product to, among other things, instruct one or more robotic devices to traverse a path, or proceed to a goal location, identified by a beam of light. Therefore, the present embodiment has the capacity to improve the technical field of robotics by dynamically identifying a direction of movement (e.g., a path) and/or a destination location utilizing a beam of light and instructing a robotic device to move according to the identified direction of movement and/or to the identified destination location. As previously described, robotics is an interdisciplinary branch of computer science and engineering which involves design, construction, operation, and application of robots, as well as computer systems for their control, sensory feedback, and information processing. Robotics seeks to design machines (i.e., robots) that can autonomously, or semi-autonomously, perform physical tasks on behalf of a human.