
CN-122019194-A - Heterogeneous resource management method of underwater acoustic signal processor containerized platform

CN122019194A

Abstract

The invention discloses a heterogeneous resource management method for a containerized underwater acoustic signal processor platform, belonging to the technical field of containerized underwater acoustic signal processor platforms. The method comprises: constructing a unified capability-characterization resource pool that quantifies the hardware architecture of each heterogeneous computing unit and its performance on underwater acoustic reference algorithms; parsing an underwater acoustic task to generate an acoustic processing logic diagram containing processing nodes and their dependency relations; matching and mapping each node, according to its algorithm type and real-time constraints, against the performance data and load state of the resource pool to generate a task-resource mapping scheme; instantiating processing containers according to that scheme, connecting the containers along the data flow through a software-defined network, and configuring their storage to form a distributed computing pipeline; and monitoring performance at run time to trigger dynamic rescheduling and reconstruction. The invention achieves fine-grained scheduling of heterogeneous resources, intelligent deployment and adaptive optimization of the task pipeline, and improves the efficiency, real-time guarantees and resource utilization of underwater acoustic signal processing.

Inventors

  • Xu Jialin
  • Su Shuai
  • Chen Shuaike
  • Liu Shuyang

Assignees

  • China Ship Development and Design Center (中国舰船研究设计中心)

Dates

Publication Date
2026-05-12
Application Date
2026-04-14

Claims (7)

  1. A heterogeneous resource management method for an underwater acoustic signal processor containerized platform, characterized by comprising the following steps: S1, constructing a unified capability-characterization resource pool for all heterogeneous computing units managed by the containerized platform, labeling each unit's basic hardware architecture, supported instruction set types, the types and quantities of dedicated acceleration devices carried, and its performance data under a preset benchmark subset of underwater acoustic signal processing; S2, receiving an underwater acoustic signal processing task and parsing its task description file to obtain an acoustic processing logic diagram, wherein the diagram comprises a plurality of processing nodes together with the data flow directions and processing-stage dependency relationships among them, each processing node corresponding to a core underwater acoustic signal processing algorithm; S3, performing matching mapping between the algorithm type, real-time constraints and data throughput requirements of each processing node in the diagram and the real-time load state and performance data of each heterogeneous computing unit in the capability-characterization resource pool, and scheduling and allocating a matched unit for each node to generate a task-resource mapping scheme; S4, instantiating corresponding processing containers on the allocated heterogeneous computing units according to the task-resource mapping scheme, connecting the containers according to the data flow of the logic diagram through software-defined networking, and configuring corresponding storage access paths for them to form a physically distributed computing pipeline; and S5, continuously monitoring the resource utilization and data processing delay of each processing container during pipeline operation and, when a performance bottleneck is detected, triggering dynamic rescheduling and reconstruction of the pipeline according to the current state of the capability-characterization resource pool.
  2. The heterogeneous resource management method of the underwater acoustic signal processor containerized platform of claim 1, wherein constructing the unified capability-characterization resource pool in step S1 specifically comprises: S11, collecting, through device probes deployed on each computing node of the platform, the basic hardware architecture data of each heterogeneous computing unit contained in the node, the data comprising the CPU instruction set architecture, operating system type, memory capacity, and the types, video memory or on-board memory capacities and driver version information of dedicated acceleration devices such as GPUs (graphics processing units), FPGAs (field-programmable gate arrays) or NPUs (neural processing units); S12, when a heterogeneous computing unit is accessed for the first time or its hardware configuration changes, driving the unit to run a standardized underwater acoustic signal processing benchmark package comprising beamforming, pulse compression, matched filtering and constant false alarm rate detection algorithms, and recording each algorithm's average processing delay, peak throughput and processing efficiency per unit of energy consumed on that unit to generate performance data; and S13, storing the basic hardware architecture data and the performance data in association to form a capability image of each heterogeneous computing unit, and aggregating all capability images to construct the capability-characterization resource pool.
  3. The heterogeneous resource management method of the underwater acoustic signal processor containerized platform of claim 1, wherein step S2 specifically comprises: S21, receiving an underwater acoustic signal processing task submitted in the form of a structured description file, the file comprising the task processing flow, the algorithms involved and the data interfaces; S22, parsing the structured description file, identifying and extracting the atomic operations that form the task's core computing flow, and mapping and packaging each atomic operation into an independent processing node; S23, establishing directed data flows and processing-stage dependency relationships between the processing nodes according to the execution order and data transfer relationships defined in the description file, generating a preliminary topology; S24, for each processing node, matching and labeling, from a preset underwater acoustic algorithm library, the specific type of core algorithm corresponding to its packaged atomic operation; S25, labeling real-time constraints and data throughput requirements on the processing nodes and data flows according to the performance declarations in the description file or a preset algorithm-type/constraint rule base; and S26, generating, from the labeled processing nodes, the dependency relationships between them, and the algorithm types and constraints, an acoustic processing logic diagram that can be parsed by the subsequent scheduling step.
  4. The heterogeneous resource management method of the underwater acoustic signal processor containerized platform of claim 2, wherein the matching mapping in step S3 specifically comprises: S31, for each processing node in the acoustic processing logic diagram, determining the required core computing operations according to its core-algorithm type, and querying a preset mapping rule base for the corresponding preferred hardware type and instruction set; S32, according to the node's real-time constraints, selecting from the capability-characterization resource pool the heterogeneous computing units whose current load rate is below a first threshold and which satisfy the preferred hardware type requirement, to form a first candidate set; S33, for each heterogeneous computing unit in the first candidate set, looking up the benchmark performance index corresponding to the node's algorithm in its performance data, and predicting the node's expected completion time in combination with the unit's current waiting task queue length; and S34, selecting the allocation combination that minimizes the completion time of the entire acoustic processing logic diagram, mapping adjacent processing nodes with data dependencies onto the same computing node or onto computing nodes whose mutual network delay is below a second threshold, and generating the task-resource mapping scheme.
  5. The heterogeneous resource management method of the underwater acoustic signal processor containerized platform of claim 1, wherein instantiating the processing containers, connecting them and configuring the storage access paths according to the task-resource mapping scheme in step S4 comprises: S41, obtaining, from the mapping scheme generated in S3, the heterogeneous computing unit identifier, the required container image identifier and the startup configuration parameters allocated to each processing node; S42, pulling the corresponding container images on each heterogeneous computing unit designated by the mapping scheme, instantiating the processing containers, and injecting into each container the algorithm identifier, data input/output endpoint information and resource quota parameters of its processing node; S43, establishing virtual network links between the processing containers through a software-defined network controller according to the data flows defined in the acoustic processing logic diagram, and configuring the link bandwidth, transport protocol and quality-of-service policy so that data transmission satisfies the real-time constraints; S44, mounting and configuring a corresponding storage access path for each processing container, including a read-only path to the raw underwater acoustic data for data input node containers, a read-write path for temporary data for intermediate processing node containers, and a persistent storage path for result data for final output node containers; and S45, after the instantiation, network connection and storage configuration of all processing containers are complete, starting the containers in the topological order of the acoustic processing logic diagram to form a physically distributed, logically coherent computing pipeline, and reporting the pipeline's ready state.
  6. The heterogeneous resource management method of the underwater acoustic signal processor containerized platform of claim 1, wherein triggering the dynamic rescheduling and reconstruction of the pipeline in step S5 specifically comprises: S51, when a processing container's data processing delay is monitored to continuously exceed a preset delay threshold, judging that container to be a performance bottleneck container; S52, analyzing the resource usage of the bottleneck container, and judging it to be a computing resource bottleneck if the CPU or dedicated accelerator utilization of its computing unit remains above a third threshold; S53, in the case of a computing resource bottleneck, searching the capability-characterization resource pool for a standby computing unit that matches the bottleneck container's algorithm and whose current load rate is below a fourth threshold; S54, creating a new processing container instance on the standby unit and migrating the necessary context state from the original bottleneck container to the new container through a data synchronization mechanism; and S55, switching the data stream from the original bottleneck container to the new container, and gradually shutting down the original container to complete the pipeline reconstruction.
  7. The heterogeneous resource management method of the underwater acoustic signal processor containerized platform of claim 4, further comprising preemptive scheduling and resource reservation for underwater acoustic processing tasks with deadline constraints: before the task-resource mapping scheme is generated in S3, identifying the set of processing nodes marked as the critical real-time path in the acoustic processing logic diagram, the end-to-end processing delay of which must meet a preset deadline; for the processing nodes on the critical real-time path, pre-locking in the capability-characterization resource pool the heterogeneous computing units and network bandwidth resources that satisfy their performance and timing constraints, forming a resource reservation area; and, when a non-critical task request collides with the reserved resources, having the scheduler preferentially guarantee the allocation of the reservation area and apply interruptible preemption or queuing to the colliding non-critical task.
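As a reading aid (not part of the claims), the capability-characterization resource pool of claims 1 and 2 can be sketched as follows. All class and field names, and the particular performance metrics, are illustrative assumptions rather than structures defined in the patent:

```python
# Minimal sketch of the capability-characterization resource pool (S11-S13):
# each heterogeneous computing unit gets a "capability image" that associates
# its hardware data with per-algorithm benchmark results.
from dataclasses import dataclass, field

@dataclass
class BenchmarkResult:
    avg_latency_ms: float        # average processing delay (S12)
    peak_throughput_mbps: float  # peak throughput (S12)
    ops_per_joule: float         # processing efficiency per unit energy (S12)

@dataclass
class CapabilityImage:
    unit_id: str
    architecture: str            # e.g. "x86_64", "arm64"
    accelerators: dict           # e.g. {"GPU": 2, "FPGA": 1}
    benchmarks: dict = field(default_factory=dict)  # algorithm -> BenchmarkResult

class ResourcePool:
    def __init__(self):
        self.images = {}

    def register(self, image: CapabilityImage):
        # S11: hardware data collected by a device probe is registered here.
        self.images[image.unit_id] = image

    def benchmark(self, unit_id, algorithm, result: BenchmarkResult):
        # S12: run on first access or hardware change; record per-algorithm data.
        self.images[unit_id].benchmarks[algorithm] = result

    def units_supporting(self, algorithm):
        # Query used later by the scheduler (claim 4).
        return [img for img in self.images.values() if algorithm in img.benchmarks]
```

A unit is registered once and re-benchmarked only when its configuration changes, so the pool stays a cheap lookup structure for the scheduler.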
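The acoustic processing logic diagram of claim 3 is a directed acyclic graph, and the startup order of claim 5 (S45) is its topological order. A minimal sketch, with assumed node and field names:

```python
# Sketch of the acoustic processing logic diagram (claim 3) plus the
# topological start order used when launching containers (S45).
from collections import deque

class LogicDiagram:
    def __init__(self):
        self.nodes = {}  # node_id -> {"algorithm": ..., "deadline_ms": ...}
        self.edges = {}  # node_id -> downstream node_ids (data flow direction)

    def add_node(self, node_id, algorithm, deadline_ms=None):
        # S24/S25: each node carries its algorithm type and constraints.
        self.nodes[node_id] = {"algorithm": algorithm, "deadline_ms": deadline_ms}
        self.edges.setdefault(node_id, [])

    def add_data_flow(self, src, dst):
        # S23: directed data flow / processing-stage dependency.
        self.edges[src].append(dst)

    def topological_order(self):
        # Kahn's algorithm; containers are started in this order.
        indegree = {n: 0 for n in self.nodes}
        for outs in self.edges.values():
            for d in outs:
                indegree[d] += 1
        queue = deque(n for n, deg in indegree.items() if deg == 0)
        order = []
        while queue:
            n = queue.popleft()
            order.append(n)
            for d in self.edges[n]:
                indegree[d] -= 1
                if indegree[d] == 0:
                    queue.append(d)
        if len(order) != len(self.nodes):
            raise ValueError("logic diagram contains a cycle")
        return order
```

For a beamforming → pulse compression → CFAR pipeline, the topological order is simply the pipeline order, so upstream containers are ready before their consumers start.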
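The matching mapping of claim 4 (S31-S34) can be illustrated as a filter-then-score selection. The cost model below (benchmark latency scaled by data volume plus a queue estimate) and all dictionary keys are simplifying assumptions, not the patent's definition:

```python
# Sketch of the matching-mapping step (claim 4): build a candidate set of
# units below the load threshold (S32), then pick the one with the smallest
# predicted completion time (S33-S34, simplified to a single node).

def predict_completion_ms(unit, algorithm, data_size_mb):
    # Assumed model: per-MB benchmark latency for this algorithm, plus an
    # estimate for the unit's current waiting task queue.
    per_mb_ms = unit["benchmarks"][algorithm]
    queue_ms = unit["queue_length"] * unit["avg_task_ms"]
    return queue_ms + per_mb_ms * data_size_mb

def map_node(units, algorithm, data_size_mb, load_threshold=0.8):
    # S32: first candidate set = units supporting the algorithm whose
    # current load rate is below the (first) threshold.
    candidates = [u for u in units
                  if algorithm in u["benchmarks"] and u["load"] < load_threshold]
    if not candidates:
        return None
    # S34 (simplified): minimize predicted completion time.
    return min(candidates,
               key=lambda u: predict_completion_ms(u, algorithm, data_size_mb))
```

The full S34 additionally co-locates data-dependent neighbors (same node, or network delay below a second threshold), which this single-node sketch omits.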
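The bottleneck trigger and standby selection of claim 6 (S51, S53) can be sketched as below. The sampling window, thresholds and field names are illustrative assumptions:

```python
# Sketch of the dynamic-rescheduling trigger (claim 6): a container whose
# delay stays above the threshold for several consecutive samples is a
# bottleneck (S51); a standby unit below the load threshold that supports
# the same algorithm is selected for migration (S53).

def is_bottleneck(delay_samples_ms, threshold_ms, window=3):
    # "Continuously exceeds" is modeled as: the last `window` samples
    # are all above the preset delay threshold.
    recent = delay_samples_ms[-window:]
    return len(recent) == window and all(d > threshold_ms for d in recent)

def select_standby(units, algorithm, load_threshold=0.6):
    # Candidates must support the bottleneck container's algorithm and have
    # load below the (fourth) threshold; pick the least-loaded match.
    matches = [u for u in units
               if algorithm in u["supported"] and u["load"] < load_threshold]
    return min(matches, key=lambda u: u["load"]) if matches else None
```

S54-S55 (state migration and stream switch-over) then move the work to the standby unit before the original container is drained and stopped.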
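Finally, the resource reservation of claim 7 amounts to pre-locking critical-path units and deferring colliding non-critical requests. A minimal sketch under assumed names, with preemption omitted for brevity:

```python
# Sketch of the resource reservation area (claim 7): units on the critical
# real-time path are pre-locked; non-critical requests that collide with a
# reservation are queued instead of served.

class ReservationArea:
    def __init__(self):
        self.reserved = set()  # pre-locked heterogeneous computing units
        self.queued = []       # deferred non-critical task ids

    def reserve(self, unit_ids):
        # Pre-lock units (and, in the full method, network bandwidth)
        # for the critical real-time path before S3 runs.
        self.reserved.update(unit_ids)

    def request(self, task_id, unit_id, critical=False):
        # Critical-path tasks always get the reserved unit; a colliding
        # non-critical task is queued (interruptible preemption omitted).
        if unit_id in self.reserved and not critical:
            self.queued.append(task_id)
            return False
        return True
```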

Description

Heterogeneous resource management method of underwater acoustic signal processor containerized platform

Technical Field

The invention belongs to the technical field of underwater acoustic signal processor containerized platforms, and particularly relates to a heterogeneous resource management method for such a platform.

Background

Underwater acoustic signal processing is a key technology in fields such as ocean observation, underwater communication, and target detection and identification. As task complexity grows (for example broadband signal processing, multi-target tracking, and deep-learning-enhanced detection), the processing algorithms become increasingly complex and the computing requirements grow exponentially. Traditional dedicated underwater acoustic signal processors are generally built on a fixed hardware architecture (such as a DSP array, a dedicated ASIC, or a single-model GPU) and suffer inherent drawbacks: system rigidity, difficult algorithm updates, low resource utilization, and long development and deployment cycles. In recent years, containerization and cloud-native technologies, with their advantages of environment consistency, resource isolation and elastic scaling, have provided a new deployment paradigm for compute-intensive applications. Containerizing underwater acoustic signal processing tasks and running them on a heterogeneous computing platform composed of general-purpose CPUs, GPUs, FPGAs, NPUs and the like is regarded as an important direction for improving system flexibility, resource utilization and iteration speed.
However, existing generic containerized platforms (such as Kubernetes-based systems and their ecosystem) and their resource management methods have significant shortcomings in the specific field of underwater acoustic signal processing, mainly in the following respects.

First, perception and quantification of heterogeneous hardware computing characteristics are insufficient. Existing platforms can only manage CPU core counts and memory sizes abstractly, while dedicated acceleration devices such as GPUs and FPGAs are allocated merely as simple device counts, with no refined description of their computing characteristics. Different core algorithms in underwater acoustic processing (such as FFT, beamforming and convolutional neural networks) differ enormously in performance and power consumption across hardware architectures. Existing methods lack a benchmark and quantification system oriented to underwater acoustic computing characteristics and cannot provide an intelligent scheduler with accurate data on which algorithm runs best on which hardware, leading to coarse resource allocation and making it difficult to exploit the best energy efficiency of heterogeneous hardware.

Second, the intelligence of task scheduling and heterogeneous resource matching is low. An underwater acoustic processing task is usually a multi-stage data flow pipeline in which each stage runs a different algorithm with different real-time and throughput requirements. Most existing container scheduling strategies are based on simple resource requests (such as "2 GPUs") and cannot understand the task's data flow diagram, the dependencies between processing stages, or the real-time constraints of each stage. The scheduling process often ignores the network delay caused by data exchange between computing nodes and lacks an optimization mechanism for placing processing stages with tight data dependencies on neighboring nodes to reduce communication overhead. This easily prolongs the overall processing delay of the task and fails to meet the strict end-to-end delay requirements of underwater acoustic systems, especially real-time sonar processing.

Third, adaptive reconstruction capability for dynamic underwater acoustic processing environments is lacking. The underwater acoustic environment is dynamically time-varying: fluctuations in input data rate, target density and channel conditions can cause drastic changes in processing load. Existing platforms rely on static or threshold-triggered elastic scaling, react with lag, are generally limited to increasing or decreasing replica counts, and cannot perform fine-grained dynamic rescheduling and pipeline reconstruction across heterogeneous computing units for the performance bottleneck stage of a running task. When a processing stage becomes a bottleneck due to hardware performance degradation or a load surge, overall pipeline performance is compromised, and the system lacks the ability to self-heal online and maintain performance.

In summary, the existing general containerized platform resource management method fails to deeply in