CN-122019176-A - Robot vision edge computing system and computing method thereof
Abstract
A robot vision edge computing system and a computing method thereof. The system comprises a perception layer, a reconfigurable computing platform, and a robot decision layer. The reconfigurable computing platform comprises an input adaptation and buffer module, a dynamic scheduler, an algorithm container loading and executing unit, and a time slicing scheduler. The input adaptation and buffer module receives the multi-source sensing data streams of the perception layer and performs hardware-level timestamp synchronization and format unification. The dynamic scheduler receives mode instructions from the robot decision layer and generates hardware control signals accordingly, so as to control the switching paths of the multi-source sensing data streams and allocate computing resources within the platform. The algorithm container loading and executing unit loads and runs algorithm containers. The time slicing scheduler, whose activation is controlled by the dynamic scheduler, periodically switches, at the hardware level and at a fixed time slice, the algorithm containers running in the algorithm container loading and executing unit. The invention further comprises a computing method for the robot vision edge computing system, and can realize deterministic scheduling and efficient multiplexing of computing resources.
Inventors
- LI JINBO
- YANG CHAO
- LIU MIN
Assignees
- 长沙万为机器人有限公司
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2026-02-03
Claims (10)
- 1. A robot vision edge computing system, characterized by comprising a perception layer, a reconfigurable computing platform, and a robot decision layer which are sequentially connected, wherein the reconfigurable computing platform comprises: an input adaptation and buffer module for receiving the multi-source sensing data stream of the perception layer and performing hardware-level timestamp synchronization and format unification; a dynamic scheduler for receiving a mode instruction from the robot decision layer, generating a hardware control signal according to the mode instruction, controlling a switching path of the multi-source sensing data stream, and allocating computing resources within the platform; an algorithm container loading and executing unit for loading and running algorithm containers; and a time slicing scheduler, the activation of which is controlled by the dynamic scheduler, for periodically switching, at a fixed time slice at the hardware level, the algorithm container running in the algorithm container loading and executing unit; wherein the dynamic scheduler is configured to, in response to a first mode instruction from the robot decision layer, route the multi-source sensing data stream of the input adaptation and buffer module exclusively to the algorithm container designated in the algorithm container loading and executing unit and deactivate the time slicing scheduler, and, in response to a second mode instruction from the robot decision layer, distribute the multi-source sensing data stream of the input adaptation and buffer module among a plurality of algorithm containers in the algorithm container loading and executing unit and activate the time slicing scheduler to control time-sharing execution of the plurality of algorithm containers.
- 2. The robot vision edge computing system of claim 1, wherein the time slicing scheduler comprises a programmable timer for generating a periodic hardware interrupt signal, and a fast context save/restore circuit for, in response to the hardware interrupt signal, saving the execution context of the current algorithm container at the hardware level and loading the execution context of the next algorithm container through a direct memory access channel.
- 3. The robot vision edge computing system of claim 1, wherein the algorithm container loading and executing unit further comprises the following cooperating sub-modules: a container image repository for storing a plurality of algorithm container image files; a unified quantization engine module for performing standardized quantization operations on deep learning models and outputting optimized model files to the container image repository; a container loading engine module for loading a designated algorithm container image file from the container image repository according to a loading instruction and deploying it into an isolated memory area partitioned by a hardware memory management unit; and a running sandbox, created by the container loading engine module when loading the algorithm container image file, which provides the algorithm container with a runtime environment forcibly isolated by the hardware memory management unit and the dynamic scheduler.
- 4. The robot vision edge computing system of claim 3, wherein the reconfigurable computing platform further comprises a cross-container shared context management unit coupled to the algorithm container loading and executing unit for providing a memory region shared among multiple algorithm containers, such that algorithm containers executing within different time slices can exchange intermediate perception data through the shared memory region.
- 5. The robot vision edge computing system of claim 4, wherein the reconfigurable computing platform further comprises an output fusion and synchronization module configured to collect the processing results from each running sandbox and the fusion information in the cross-container shared context management unit, to timestamp-align and fuse the output results of the algorithm containers, and to generate structured perception information to be sent to the robot decision layer.
- 6. The robot vision edge computing system of claim 4 or 5, wherein, under the second mode instruction, the time slicing scheduler cooperates with the cross-container shared context management unit such that, after a first algorithm container executing in a first time slice writes a result to the shared memory region, a second algorithm container executing in a second time slice can directly read that result, forming a virtual collaborative perception pipeline.
- 7. The robot vision edge computing system of any one of claims 3 to 5, wherein the reconfigurable computing platform further realizes system reliability assurance through the cooperation of the following: the algorithm container loading and executing unit is configured to provide fault isolation through its hardware-isolated running sandboxes; the container loading engine module is configured to restart a designated algorithm container in response to an external reset signal; and the dynamic scheduler is configured to trigger a system switch from the second mode back to the first mode in response to an external mode switch signal.
- 8. A computing method based on the robot vision edge computing system according to any one of claims 1 to 7, characterized by comprising a mode switching method: receiving a mode instruction from the robot decision layer; parsing the mode instruction through the dynamic scheduler and generating a hardware control signal; if the instruction is the first mode instruction, executing a first configuration, namely deactivating the time slicing scheduler, routing the input multi-source sensing data stream to the algorithm container designated in the algorithm container loading and executing unit, and concentrating the allocation of computing resources on that algorithm container; and if the instruction is the second mode instruction, executing a second configuration, namely activating the time slicing scheduler, cyclically distributing the input multi-source sensing data stream among a plurality of algorithm containers in the algorithm container loading and executing unit, and controlling, by the time slicing scheduler, the cyclic execution of the plurality of algorithm containers according to time slices.
- 9. The computing method of the robot vision edge computing system according to claim 8, further comprising, in the second configuration, a cooperative method in which: the time slicing scheduler generates hardware interrupts at a fixed period, triggering hardware-level context switches and scheduling a plurality of algorithm containers in a round-robin manner; in any Nth time slice, where N ≥ 1, the scheduled algorithm container writes its output perception data into the shared memory region managed by the cross-container shared context management unit; in the subsequent (N+1)th time slice, another scheduled algorithm container directly reads the perception data from the shared memory region for processing; and data relay and a collaborative perception pipeline among the plurality of algorithm containers are realized through the shared memory region and the periodic time slice scheduling.
- 10. The computing method of the robot vision edge computing system according to claim 8 or 9, further comprising the steps of: S1, performing standardized quantization on a trained deep learning model by means of the unified quantization engine module to generate an optimized model file, and packaging the optimized model file with the corresponding algorithm executable program and dependency libraries to form an algorithm container image file conforming to the platform specification; S2, uploading the algorithm container image file and storing it in the container image repository; S3, in response to a loading instruction, reading the designated algorithm container image file from the container image repository through the container loading engine module, deploying it into a running sandbox, and allocating an isolated, resource-controlled runtime environment within the running sandbox; and S4, running the algorithm container in the running sandbox, wherein, when the system operates in the second mode, the time slicing scheduler performs time-sharing scheduling by means of hardware interrupts during execution of the algorithm container, and the algorithm container exchanges data with other algorithm containers through the shared memory region managed by the cross-container shared context management unit to realize collaborative perception.
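The mode-switching dispatch of claims 1 and 8 can be illustrated with a minimal software sketch. The patent performs this dispatch with hardware control signals; here the "signals" are plain attributes, and all class, constant, and method names are hypothetical illustrations rather than the patented design.

```python
from dataclasses import dataclass, field

MODE_EXCLUSIVE = 1   # first mode instruction: single container, full resources
MODE_TIMESLICED = 2  # second mode instruction: round-robin over containers

@dataclass
class DynamicScheduler:
    """Software sketch of the dynamic scheduler's mode dispatch (claim 8).

    `timeslicer_active` and `data_routes` stand in for the hardware
    control signals that activate the time slicing scheduler and set
    the switching paths of the sensing data stream.
    """
    timeslicer_active: bool = False
    data_routes: list = field(default_factory=list)

    def on_mode_instruction(self, mode: int, containers: list):
        if mode == MODE_EXCLUSIVE:
            # First configuration: deactivate the time slicing scheduler
            # and route the whole sensing stream to one designated container.
            self.timeslicer_active = False
            self.data_routes = [containers[0]]
        elif mode == MODE_TIMESLICED:
            # Second configuration: activate the time slicing scheduler and
            # distribute the sensing stream among all loaded containers.
            self.timeslicer_active = True
            self.data_routes = list(containers)
        else:
            raise ValueError(f"unknown mode instruction: {mode}")
        return self.timeslicer_active, self.data_routes
```

The essential design point the sketch captures is that one instruction atomically reconfigures both the data path and the scheduling policy, so the two modes can never be partially applied.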
Description
Robot vision edge computing system and computing method thereof

Technical Field

The invention relates to the technical field of artificial intelligence, and in particular to a robot vision edge computing system and a computing method thereof.

Background

The robot vision system is a core component of intelligent equipment such as mobile robots and automated guided vehicles, and is responsible for extracting, understanding, and outputting environmental information from images or video streams. As application scenarios grow more complex, the requirements robots place on their vision systems become dynamic, heterogeneous, and diversified, which creates a contradiction: on the one hand, during high-speed movement and obstacle-avoidance navigation, a single vision algorithm (such as visual odometry or obstacle detection) must monopolize the computing resources to guarantee deterministic low-latency real-time performance (e.g., latency < 100 ms); on the other hand, during stationary guard duty and fine-grained operation, multiple vision algorithms (such as face recognition, behavior analysis, and meter reading) must run simultaneously to maximize computing resource utilization and scene understanding capability.

At present, two types of computing schemes support robot vision, but neither can balance real-time performance, flexibility, efficiency, and cooperation. The first is the dedicated embedded vision system. Such systems adopt a fixed or semi-fixed hardware-algorithm integration scheme, deploying a specific algorithm (e.g., a specific object recognition algorithm) directly on an embedded processor (e.g., ARM + FPGA). Its advantage is high real-time performance.
However, its inherent defects severely restrict the intelligent development of robots. (1) Rigid functionality and poor extensibility: the algorithm functions are fixed at the factory and cannot later be dynamically added, removed, or upgraded for new tasks; adding a new function usually requires replacing or adding hardware modules, which is costly and makes the system bulky. (2) Low resource utilization: most of the hardware's computing power sits idle while a single task executes, and when multiple tasks must be combined, the architecture either cannot support them or can do so only by stacking multiple sets of hardware, causing enormous waste of physical resources and energy. (3) High thresholds for algorithm deployment and optimization: deploying a deep learning model (such as a CNN-based detector) depends heavily on manual optimization, and a unified automated tool chain is lacking. For example, it is difficult to conveniently quantize an FP32 model into an INT8 model to increase efficiency, so developers are often forced to use computationally intensive, energy-inefficient models, limiting the upper performance bound of the system.

The second category is the general-purpose computing platform based on software containerization. With the popularity of containerization technologies (e.g., Docker), some approaches deploy and manage multiple vision algorithms through software containers on a robot host computer (e.g., an industrial personal computer). This approach solves algorithm environment isolation and dynamic deployment ("algorithm plug and play") and improves development flexibility.
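The FP32-to-INT8 quantization mentioned above is a standard technique that the description's "unified quantization engine module" would automate. A minimal sketch of symmetric per-tensor INT8 quantization, shown as generic illustration rather than the patent's specific tool chain:

```python
def quantize_int8(weights):
    """Symmetric per-tensor INT8 quantization of a list of FP32 weights.

    Returns (int8_values, scale) such that weight ≈ int8_value * scale.
    The scale maps the largest absolute weight onto the INT8 range [-127, 127];
    this is the generic technique, not the patented circuit.
    """
    max_abs = max(abs(w) for w in weights) or 1.0  # avoid divide-by-zero
    scale = max_abs / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate FP32 values from quantized weights."""
    return [v * scale for v in q]
```

Running a model on INT8 values roughly quarters weight memory versus FP32 and enables integer arithmetic units, which is the efficiency gain the description refers to.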
However, its fundamental defect is that it cannot meet the robot's strict requirements for determinism and real-time performance. (1) High task-switching latency: container scheduling and context switching in the software layer are managed by a general-purpose operating system (such as Linux), whose overhead is large, typically tens to hundreds of milliseconds, so algorithm switching responds slowly. (2) Non-deterministic resource scheduling: the scheduling policy of a general-purpose operating system cannot guarantee the precise execution timing of vision tasks; multiple tasks easily interfere with one another, and real-time performance cannot be guaranteed. (3) Inter-container communication is typically based on networks or files, with high latency, making low-latency, cross-container sharing of perception data difficult. For example, if one container detects a target at time t1 and another container needs to track it at time t2, then, lacking an efficient sharing mechanism, the tracking container often must re-detect the target, causing computational redundancy and temporal discontinuity, and failing to form continuous, consistent scene perception. The invention solves the core problem that existing robot vision computing systems find it difficult to reconcile functionality, real-time performance, flexibility, and efficiency.
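The detect-then-track example above is exactly what the time-sliced shared-memory pipeline of claims 2, 6, and 9 avoids. A software model of that round-robin scheduling with a shared context region follows; all names are hypothetical, the `tick()` call stands in for the periodic hardware interrupt, and the real invention performs the context switch in hardware rather than Python.

```python
from collections import deque

class SharedContext:
    """Sketch of the cross-container shared context management unit:
    a memory region through which containers running in different
    time slices exchange intermediate perception data (claims 6, 9)."""
    def __init__(self):
        self.region = {}

class TimeSliceScheduler:
    """Software model of the time slicing scheduler: each tick() stands
    in for one periodic hardware interrupt, after which the next
    container in round-robin order runs in its time slice (claim 2)."""
    def __init__(self, containers, shared):
        self.queue = deque(containers)
        self.shared = shared
        self.log = []               # order in which containers ran

    def tick(self):
        container = self.queue[0]
        self.queue.rotate(-1)       # round-robin to the next container
        container(self.shared)      # run within the current time slice
        self.log.append(container.__name__)

# Hypothetical containers forming a detect -> track pipeline.
def detector(ctx):
    ctx.region["target"] = (12, 34)   # writes result in time slice N

def tracker(ctx):
    # Reads the detector's result in time slice N+1: no re-detection,
    # so the computational redundancy described above is avoided.
    ctx.region["track"] = ctx.region.get("target")
```

Because the hand-off happens through a shared memory region rather than a network or file channel, the second container sees the first container's result in the very next time slice, which is the "virtual collaborative perception pipeline" of claim 6.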