CN-122018886-A - Simulink code generation method and device based on instruction pipeline awareness
Abstract
The application provides a Simulink code generation method and device based on instruction pipeline awareness, relating to the technical field of embedded code generation. The method comprises: performing instruction dependency analysis and delay modeling on a target Simulink model to generate a dependency graph describing the dependency relationships among modules and an instruction delay model incorporating the architecture parameters of the target processor; determining an optimal execution sequence of each module through pipeline-aware scheduling based on the dependency graph and the instruction delay model, and performing pipeline scheduling based on the optimal execution sequence to obtain a scheduling result; and generating target code using a local pipeline optimization strategy based on the scheduling result. The method uses topology analysis to determine the data dependency paths among modules and adaptively selects the execution sequence through a minimum-penalty priority algorithm, avoiding unnecessary waiting and resource conflicts and thereby effectively reducing the number of instruction pipeline stalls while preserving the semantic consistency of the model.
Inventors
- YU ZEHONG
- SU ZHUO
- Pu Jinxiao
- SHI DALONG
- JIANG YU
Assignees
- Tsinghua University (清华大学)
- AVIC International Golden Net (Beijing) Technology Co., Ltd. (中航国际金网(北京)科技有限公司)
Dates
- Publication Date: 2026-05-12
- Application Date: 2025-12-24
Claims (10)
- 1. A Simulink code generation method based on instruction pipeline awareness, characterized by comprising the following steps: performing instruction dependency analysis and delay modeling on a target Simulink model to generate a dependency graph describing the dependency relationships among modules and an instruction delay model incorporating the architecture parameters of the target processor; determining an optimal execution sequence of each module through pipeline-aware scheduling based on the dependency graph and the instruction delay model, and performing pipeline scheduling based on the optimal execution sequence to obtain a scheduling result, wherein the pipeline-aware scheduling applies a dynamic priority strategy and delay-conflict prediction to minimize pipeline stalls; and generating target code using a local pipeline optimization strategy based on the scheduling result.
- 2. The method of claim 1, wherein the local pipeline optimization strategy comprises at least one of: reordering instructions without breaking dependency constraints, renaming registers with spurious dependencies, inserting independent operations into the delay slots of branch instructions, and fusing consecutive independent short instructions into compound instructions.
- 3. The method of claim 1, wherein performing instruction dependency analysis and delay modeling on the target Simulink model to generate a dependency graph describing the dependency relationships among modules and an instruction delay model incorporating the architecture parameters of the target processor comprises: carrying out static structure analysis on the target Simulink model, and constructing a dependency graph that takes the modules of the model as nodes and the data flow and control flow as directed edges; and establishing an instruction delay model for each module in the dependency graph based on the hardware characteristic information of the target processor, wherein the hardware characteristic information comprises at least one of pipeline stages, register access delay, memory access delay, and number of execution units.
- 4. The method of claim 1, wherein determining an optimal execution sequence of each module through pipeline-aware scheduling based on the dependency graph and the instruction delay model comprises: extracting a current executable module set from the dependency graph, and calculating a comprehensive priority for each module in the set, wherein the comprehensive priority is calculated based on at least one of the depth of the module in the dependency graph, the overlap potential of its instruction delay, and the degree to which the module's output influences the subsequent critical path; and determining the optimal execution sequence of each module based on the comprehensive priority of each module.
- 5. The method according to claim 1 or 4, wherein performing pipeline scheduling based on the optimal execution sequence to obtain a scheduling result comprises: selecting the module with the highest comprehensive priority for issue scheduling according to the comprehensive priority of each module and the conflict prediction result of the instruction delay model corresponding to each module; and, before the module is issued, if the conflict prediction result corresponding to the module indicates a likelihood of a stall, deferring the issue of that module and selecting the next module that can execute immediately for issue, so that continuous operation of the pipeline is maintained.
- 6. The method of claim 5, wherein selecting the module with the highest comprehensive priority for issue scheduling comprises: if the conflict prediction result indicates that partial dependence exists between modules without complete blocking, inserting an idle-compensation instruction into the scheduling sequence to fill a potential idle period, thereby reducing the resource waste caused by structural stalls; and if the conflict prediction result indicates a conflict that cannot be eliminated, triggering a scheduling back-off mechanism to recalculate the optimal execution sequence.
- 7. The method of claim 1, wherein generating target code using a local pipeline optimization strategy based on the scheduling result comprises: generating target code corresponding to each module according to the execution sequence determined by the scheduling result; and performing register allocation during code generation and inserting synchronization instructions according to the dependency information to ensure that the execution result is correct.
- 8. The method of claim 1, wherein, after generating target code using a local pipeline optimization strategy based on the scheduling result, the method further comprises: analyzing the generated target code and obtaining performance metrics from the analysis result, wherein the performance metrics comprise at least one of pipeline stall rate, average number of execution cycles, and instruction throughput; and, if the performance metrics of the target code do not reach a preset threshold, adjusting the parameters of the instruction delay model and/or the dynamic priority strategy based on the analysis result and generating a new round of optimized code until the performance metrics of the target code reach the preset threshold.
- 9. A Simulink code generation apparatus based on instruction pipeline awareness, the apparatus comprising: a delay model construction module, configured to perform instruction dependency analysis and delay modeling on a target Simulink model to generate a dependency graph describing the dependency relationships among modules and an instruction delay model incorporating the architecture parameters of the target processor; a pipeline-aware scheduling module, configured to determine an optimal execution sequence of each module through pipeline-aware scheduling based on the dependency graph and the instruction delay model, and to perform pipeline scheduling based on the optimal execution sequence to obtain a scheduling result; and a code generation module, configured to generate target code using a local pipeline optimization strategy based on the scheduling result.
- 10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the Simulink code generation method based on instruction pipeline awareness of any one of claims 1 to 8 when executing the program.
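The patent claims describe the scheduling flow abstractly and give no implementation. As a rough illustration of how the core of claims 4 to 6 could fit together, the following Python sketch is offered; every name in it is an assumption for illustration: `deps` stands in for the dependency graph of claim 3, `priority` for the comprehensive priority of claim 4, `will_stall` for the delay-conflict prediction of claim 5, and the `'NOP'` marker for the idle-compensation instruction of claim 6.

```python
def pipeline_aware_schedule(modules, deps, priority, will_stall):
    """Hypothetical sketch of claims 4-6: repeatedly extract the
    ready set from the dependency graph, try modules in descending
    comprehensive priority, defer any module whose conflict
    prediction signals a stall, and emit a filler slot when every
    ready module would stall.

    modules:    iterable of module ids
    deps:       dict id -> set of ids it depends on
    priority:   callable id -> comprehensive priority value
    will_stall: callable (id, cycle) -> True if issuing now stalls
    """
    done, schedule = set(), []
    remaining = set(modules)
    while remaining:
        # claim 4: current executable module set
        ready = [m for m in remaining if deps.get(m, set()) <= done]
        issued = None
        # claim 5: highest comprehensive priority first,
        # deferring modules predicted to stall
        for m in sorted(ready, key=priority, reverse=True):
            if not will_stall(m, len(schedule)):
                issued = m
                break
        if issued is None:
            # claim 6: fill the idle period instead of stalling
            # (assumes the predicted conflict clears on later cycles)
            schedule.append('NOP')
            continue
        schedule.append(issued)
        done.add(issued)
        remaining.remove(issued)
    return schedule
```

In this toy form the filler slot simply occupies one scheduling step; the back-off mechanism of claim 6 (recomputing the optimal execution sequence) is omitted for brevity.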
Description
Simulink code generation method and device based on instruction pipeline awareness

Technical Field

The application relates to the technical field of embedded code generation, and in particular to a Simulink code generation method and device based on instruction pipeline awareness.

Background

Model-Driven Design (MDD) is one of the core methods in the development of contemporary complex embedded systems. Its basic idea is to place the model at the center of system design and implementation, establishing an abstract system model and performing simulation and verification so as to achieve early verification and optimization of system functions and performance. Among model-driven design tools, Simulink is one of the most widely used today. Simulink provides an intuitive graphical modeling interface and allows engineers to build control algorithms and signal processing systems by dragging modules and signal lines. In the related art, code generators such as Simulink Embedded Coder typically generate code strictly according to the data dependencies among the modules in a model. Although this ensures that the generated code is semantically consistent with the model, it also serializes instruction execution: the generated code generally requires the computation of one module to finish completely before the instructions of the next module can start executing. As a result, a large number of instructions cannot be scheduled in parallel, causing serious pipeline blocking during the processor execution phase.
Disclosure of Invention

The application aims to provide a Simulink code generation method and device based on instruction pipeline awareness, which use topology analysis to determine the data dependency paths among modules and adaptively select the execution sequence through a minimum-penalty priority algorithm, avoiding unnecessary waiting and resource conflicts and thereby effectively reducing the number of instruction pipeline stalls while preserving the semantic consistency of the model.

The application provides a Simulink code generation method based on instruction pipeline awareness, comprising the following steps: performing instruction dependency analysis and delay modeling on a target Simulink model to generate a dependency graph describing the dependency relationships among modules and an instruction delay model incorporating the architecture parameters of the target processor; determining an optimal execution sequence of each module through pipeline-aware scheduling based on the dependency graph and the instruction delay model, and performing pipeline scheduling based on the optimal execution sequence to obtain a scheduling result, wherein the pipeline-aware scheduling applies a dynamic priority strategy and delay-conflict prediction to minimize pipeline stalls; and generating target code using a local pipeline optimization strategy based on the scheduling result.

Optionally, the local pipeline optimization strategy comprises at least one of: reordering instructions without breaking dependency constraints, renaming registers with spurious dependencies, inserting independent operations into the delay slots of branch instructions, and fusing consecutive independent short instructions into compound instructions.
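As a concrete illustration of the first of these optional strategies, reordering instructions without breaking dependency constraints can be sketched as a greedy list-scheduling pass. The function name, the inputs, and the longest-latency-first heuristic below are assumptions made for illustration, not details taken from the patent:

```python
def reorder_without_breaking_deps(instrs, deps, latency):
    """Greedy list scheduling: among instructions whose dependencies
    are already satisfied, emit the longest-latency one first so its
    result is ready sooner for its dependents; dependency edges are
    never crossed, preserving semantic consistency.

    instrs:  list of instruction ids in original order
    deps:    dict id -> set of ids it depends on
    latency: dict id -> estimated cycles (e.g. from a delay model)
    """
    done, order = set(), []
    remaining = list(instrs)  # original order breaks latency ties
    while remaining:
        ready = [i for i in remaining if deps.get(i, set()) <= done]
        nxt = max(ready, key=lambda i: latency.get(i, 1))
        order.append(nxt)
        done.add(nxt)
        remaining.remove(nxt)
    return order
```

For example, with a 3-cycle `load` feeding an `add`, an independent `mul` is hoisted between them, hiding part of the load latency, which is exactly the kind of stall reduction this strategy targets.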
Optionally, performing instruction dependency analysis and delay modeling comprises: carrying out static structure analysis on the target Simulink model, and constructing a dependency graph that takes the modules of the model as nodes and the data flow and control flow as directed edges; and establishing an instruction delay model for each module in the dependency graph based on the hardware characteristic information of the target processor, wherein the hardware characteristic information comprises at least one of pipeline stages, register access delay, memory access delay, and number of execution units.

Optionally, determining an optimal execution sequence of each module through pipeline-aware scheduling based on the dependency graph and the instruction delay model comprises: extracting a current executable module set from the dependency graph, and calculating a comprehensive priority for each module in the set, wherein the comprehensive priority is calculated based on at least one of the depth of the module in the dependency graph, the overlap potential of its instruction delay, and the degree to which the module influences the subsequent critical path; and determining the optimal execution sequence of each module based on the comprehensive priority of each module.

Optionally, performing pipeline scheduling based on the optimal execution sequence to obtain a scheduling result comprises selecting the module with the highest comprehensive priority for issue scheduling according to the comprehensive priority of each module and a c