US-12617098-B2 - Method, apparatus, and system for automating operation of a machine
Abstract
A robotic system for operating a machine includes an arm, an actuator coupled to the arm and configured to indicate a change in an interactive medium of the machine when the actuator physically interfaces with the interactive medium, a visual source configured to provide an image of the machine in a field of view, and a processor coupled to the arm and the visual source. Images obtained from the field of view can be used to extract information about the machine, and that information can be used to operate the arm to actuate the interactive medium of the machine and perform a physical task using the machine.
Inventors
- Ondrej KRAJICEK
- Aleš Pernikář
- Václav Notvotný
- Jiří Kyzlink
- Jakub Pavlák
- Barbora Koudarová
- Oto Dušek
- Lukáš Valek
Assignees
- Y SOFT CORPORATION, A.S.
Dates
- Publication Date
- 20260505
- Application Date
- 20221221
Claims (20)
- 1 . A robotic system for operating a machine, the machine comprising an interface having an interactive medium for use in performing a physical task using the machine, the robotic system comprising: a robotic arm comprising a proximal end and a distal end; an actuator coupled to the robotic arm at the distal end, the actuator being configured to physically interact with the interactive medium; a visual source configured to provide an image of the machine in a field of view; a processor coupled to the robotic arm and the visual source; and a memory coupled to the processor and configured to store computer executable programs comprising instructions that upon execution by the processor: extract, from the image, information indicating a metric for identifying the interface within the field of view, wherein the metric comprises at least one of a location, an orientation, a dimension, and a boundary of the interface in the field of view; identify the interface within the field of view using the information extracted from the image; operate the robotic arm to interact with the identified interface via the actuator and receive, from the robotic arm, a signal indicating detection of a change in the interactive medium by the actuator; extract information indicating a characteristic of the interactive medium based on the signal from the actuator indicating the detection of the change in the interactive medium, wherein the characteristic of the interactive medium includes position of at least one element of the interactive medium; and operate the robotic arm to actuate the at least one element of the interactive medium to perform the physical task using the machine.
- 2 . The robotic system of claim 1 , wherein the instructions, upon execution, analyze the information indicating the metric for identifying the interface within the field of view to determine if additional information for identifying the interface is required and, in the event the additional information is required, obtain at least one additional image from the visual source.
- 3 . The robotic system of claim 2 , wherein the visual source comprises a camera configured to rotate around the machine in the field of view to obtain the image of the machine in a field of view.
- 4 . The robotic system of claim 3 , wherein the system is configured to adjust the visual source to obtain the at least one additional image.
- 5 . The robotic system of claim 1 , wherein the actuator comprises a pressure sensor disposed at a distal tip of the actuator that generates the signal indicating detection of the change in the interactive medium by the actuator upon sensing at least one of a resistance change or a displacement in the interactive medium.
- 6 . The robotic system of claim 1 , wherein the visual source is configured to provide a plurality of images and wherein the instructions, upon execution, analyze each image provided by the visual source and score each image based on at least one of an amount of the information indicating the metric for identifying the interface within the field of view or a probability of accurate identification of the interface within the field of view and exclude images having a score less than a predetermined score from being used for identifying the interface within the field of view.
- 7 . The robotic system of claim 6 , wherein the instructions, upon execution, score the plurality of images based on previously validated data.
- 8 . The robotic system of claim 7 , wherein the previously validated data comprises at least one of: data obtained from a glyph dictionary, data provided by a human operator of the machine, data about the machine obtained from a remote entity via a communications network, data obtained from an original manual of the machine, and data obtained from an online resource for the machine.
- 9 . The robotic system of claim 8 , wherein the glyph dictionary comprises a collection of images of common interactive medium elements.
- 10 . The robotic system of claim 8 , wherein the instructions, upon execution, update the glyph dictionary to include the metric for identifying the interface within the field of view obtained from the information extracted from at least one image.
- 11 . The robotic system of claim 1 , wherein the memory is configured to store a list of physical tasks for performing using the interactive medium of the machine.
- 12 . The robotic system of claim 11 , wherein the robotic system is configured to obtain the list of physical tasks from at least one of a human operator, an original manual of the machine, and an online resource for the machine.
- 13 . The robotic system of claim 12 , wherein the memory is configured to store a ranking for each physical task in the list of physical tasks and wherein the instructions, upon execution, operate the robotic arm to perform a physical task having a higher ranking before a physical task having a lower ranking.
- 14 . The robotic system of claim 1 , wherein the instructions, upon execution, extract, from the image, the information indicating the metric for identifying the interface within the field of view using an image processing scheme that employs a deep learning framework to identify the interface.
- 15 . The robotic system of claim 14 , wherein the deep learning framework comprises at least one of supervised deep learning, unsupervised deep learning, or reinforced deep learning.
- 16 . The robotic system of claim 1 , further comprising an optical sensor configured to measure a light intensity within the field of view.
- 17 . The robotic system of claim 16 , wherein the instructions, upon execution, use the light intensity measured by the optical sensor to extract the metric for identifying the interface within the field of view.
- 18 . The robotic system of claim 16 , wherein the visual source comprises a camera configured to obtain the image of the machine in a field of view and wherein the system is configured to adjust the light intensity prior to obtaining the image.
- 19 . The robotic system of claim 1 , wherein the visual source is coupled to the robotic system via a communications network.
- 20 . The system of claim 1 , wherein the processor is configured to execute the instructions in response to a voice command.
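Claims 6 and 7 describe scoring each candidate image and excluding images whose score falls below a predetermined score. The patent does not specify a scoring function, so the following is a minimal sketch only, assuming a weighted combination of the two criteria named in claim 6 (amount of extracted metric information and probability of accurate identification); all field names and weights are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class ScoredImage:
    """A candidate image with criteria derived from extracted interface metrics."""
    image_id: str
    info_amount: float    # amount of metric information extracted (0..1)
    accuracy_prob: float  # probability of accurate interface identification (0..1)

def score_image(img: ScoredImage, w_info: float = 0.5, w_prob: float = 0.5) -> float:
    """Combine the two criteria from claim 6 into a single score."""
    return w_info * img.info_amount + w_prob * img.accuracy_prob

def filter_images(images: list, threshold: float) -> list:
    """Exclude images scoring below the predetermined threshold (claim 6)."""
    return [img for img in images if score_image(img) >= threshold]
```

Claim 7's scoring against previously validated data could be incorporated by calibrating the weights or the threshold from that data.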
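Claims 8 through 10 describe a glyph dictionary holding common interactive-medium elements, updated with metrics extracted from new images. A minimal sketch of such a structure; keying entries by element name and the metric fields shown are assumptions, not details from the patent:

```python
class GlyphDictionary:
    """A collection of common interactive-medium elements (claim 9),
    each entry holding one or more identification metrics
    (e.g., location, orientation, dimension, boundary)."""

    def __init__(self):
        self._entries = {}  # element name -> list of metric dicts

    def add(self, name: str, metric: dict) -> None:
        """Record a metric for the named element."""
        self._entries.setdefault(name, []).append(metric)

    def lookup(self, name: str) -> list:
        """Return all known metrics for the named element."""
        return self._entries.get(name, [])

    def update_from_image(self, name: str, extracted_metric: dict) -> None:
        """Claim 10: fold a metric extracted from a new image back in."""
        self.add(name, extracted_metric)
```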
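Claim 13 describes storing a ranking for each physical task and performing higher-ranked tasks before lower-ranked ones. That ordering can be sketched with a priority queue; the `(rank, name)` representation and "larger rank runs first" convention are assumptions for illustration:

```python
import heapq

def run_tasks_by_rank(tasks: list) -> list:
    """Return task names in execution order, higher rank first (claim 13).

    `tasks` is a list of (rank, name) pairs; a larger rank means
    higher priority. Ranks are negated to use heapq as a max-heap.
    """
    heap = [(-rank, name) for rank, name in tasks]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)
    return order
```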
Description
RELATED APPLICATIONS

This application claims the benefit of and priority to U.S. Provisional Application No. 63/292,316, filed on Dec. 21, 2021, the entire teachings of which are incorporated herein by reference.

FIELD

This application generally relates to methods, systems, and apparatus for automating the operation of a machine, and more specifically to automating the operation of a machine using a robotic system.

BACKGROUND

Automating the operation of a machine that is normally manually operated can be a difficult and complex process, as it requires automation systems to interact with interfaces designed for use by a human operator.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1A is a block diagram of a robotic system, according to some embodiments disclosed herein.
FIG. 1B is a schematic of a system for testing an embedded system of a device, according to some embodiments disclosed herein.
FIG. 1C is a block diagram of an example of an Observable User Interface of a device under test, according to some embodiments disclosed herein.
FIG. 1D is a block diagram of an example of an Observable User Interface of a device under test, according to some embodiments disclosed herein.
FIG. 1E is a block diagram of an example state of the Observable User Interface of a device under test, according to some embodiments disclosed herein.
FIG. 1F is a block diagram of an example state of the Observable User Interface of a device under test, according to some embodiments disclosed herein.
FIG. 1G is a flow diagram of a method of constructing a descriptor, according to some embodiments disclosed herein.
FIG. 1H is an example step of a method of processing an image, according to some embodiments disclosed herein.
FIG. 1I is an example of procedures for processing an image, according to some embodiments disclosed herein.
FIG. 1J is an example of procedures for processing an image, according to some embodiments disclosed herein.
FIG. 1K is an example graph of comparative results of evaluation time, according to some embodiments disclosed herein.
FIG. 2 is a block diagram of an electronic circuitry, according to some embodiments disclosed herein.
FIG. 3 is a flow diagram of the procedures for operating the robotic system, according to some embodiments disclosed herein.
FIG. 4 is a flow diagram of the procedures for calibrating a robotic system, according to some embodiments disclosed herein.
FIG. 5 is a flow diagram of the procedures for preparing a dictionary of the elements of the interactive medium, their associated tasks, and/or associated physical outcomes, according to some embodiments disclosed herein.
FIG. 6 is a flow diagram of the procedures for identifying the preferred physical task for performing using the machine, according to some embodiments disclosed herein.
FIG. 7 is a flow diagram of the procedures for executing the preferred physical task for performing using the machine in a cyclic manner, according to some embodiments disclosed herein.
FIG. 8 is a block diagram of an example of a robotic system, according to some embodiments disclosed herein.
FIG. 9 is a block diagram of an example of a robotic system, according to one embodiment disclosed herein.
FIG. 10 is a block diagram of an example of a robotic system, according to one embodiment disclosed herein.
FIG. 11 is a block diagram of an example of a robotic system, according to one embodiment disclosed herein.

SUMMARY

A robotic system for operating a machine having an interface with an interactive medium for use in performing a physical task using the machine can include an arm, an actuator coupled to the arm for indicating a change in the interactive medium when the actuator interfaces with the interactive medium, a visual source for providing an image of the machine in a field of view, a processor coupled to the arm and the visual source, and a memory coupled to the processor and configured to store computer executable programs.
The computer executable programs can include instructions that, upon execution by the processor, extract, from the image, information indicating a metric for identifying the interface within the field of view, extract information indicating a characteristic of the interactive medium based on signals from the actuator indicating the change in the interactive medium, and operate the arm to actuate the interactive medium to perform the physical task using the machine. The arm can be a robotic arm having a proximal end and a distal end. The actuator can be coupled to the arm. For example, the actuator can be coupled to the distal end of the arm. The actuator can include a pressure sensor coupled to its distal end that indicates the change by sensing a change in resistance when the distal end of the actuator interfaces with the interactive medium. Although described in connection with a pressure sensor, any suitable sensor available in the art can be used with the embodiments disclosed herein (e.g., an optical sensor). The pressure sensor and/or the optical s
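The change detection described in the summary, in which the pressure sensor at the actuator's distal end signals a change by sensing a change in resistance, can be sketched as follows. The baseline-plus-threshold formulation and the threshold value are assumptions for illustration, not details taken from the patent:

```python
def detect_change(baseline: float, reading: float,
                  resistance_delta: float = 0.1) -> bool:
    """Report a change in the interactive medium when the sensed
    resistance departs from its baseline by more than a threshold.

    `resistance_delta` is an illustrative value; a real system would
    calibrate it for the sensor and the machine's interface.
    """
    return abs(reading - baseline) > resistance_delta
```

In the claimed system, a signal from such a detector would prompt the processor to extract the characteristic of the interactive medium (e.g., the position of the element being probed).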