
CN-121808707-B - Multi-algorithm conflict elimination fusion method and device based on Bayesian neural network


Abstract

The application relates to the technical field of adaptive fusion solving, and discloses a multi-algorithm conflict elimination fusion method and device based on a Bayesian neural network. The method predicts the performance distributions of different algorithms under given task features through a pre-trained Bayesian neural network model, calculates dynamic weights from those distributions, and dynamically allocates computing power, time and parallelism proportion according to the dynamic weights, so that the optimal algorithm combination is selected automatically without manual intervention. The method can automatically adjust its solving strategy in environments where the task type changes frequently, effectively reducing misjudgments caused by reliance on experience and fixed thresholds.

Inventors

  • ZHANG XIAOXIONG
  • YANG JUN
  • FAN QIANG
  • ZHANG KE
  • HUANG SHAN
  • LI ZIKUAN
  • DING KUN

Assignees

  • National University of Defense Technology of the Chinese People's Liberation Army (中国人民解放军国防科技大学)

Dates

Publication Date
2026-05-08
Application Date
2026-03-09

Claims (8)

  1. A multi-algorithm conflict elimination fusion method based on a Bayesian neural network, characterized by comprising the following steps:
     performing feature extraction and problem modeling on the processor resource allocation conflict elimination problem in the decision system to be solved, to obtain standardized features, wherein the processor resource allocation conflict elimination problem in the decision system comprises allocation decisions between a plurality of computing tasks and processors so as to eliminate conflicts in processor resource use;
     inputting the standardized features into a pre-trained main model of the Bayesian neural network to obtain the performance distributions of different algorithms on the conflict elimination problem;
     calculating dynamic weights of the different algorithms according to the performance distributions;
     inputting the dynamic weights into a pre-trained warm-start sub-model of the Bayesian neural network, and performing multi-algorithm collaborative solving and initialization guidance, to obtain the computing power, time and parallelism proportion of each algorithm when running in the decision system;
     determining an optimal algorithm combination according to the computing power, time and parallelism proportion of each algorithm, so as to eliminate the processor resource allocation conflict in the decision system;
     wherein calculating the dynamic weights of the different algorithms according to the performance distributions comprises:
     taking the expected performance μ_A as input, and obtaining the process-level resource weights of the integer linear programming algorithm, the genetic algorithm, and the backtracking and greedy heuristic algorithm through the softmax function:
     w_A = exp(μ_A) / Σ_{B ∈ {ILP, GA, BT}} exp(μ_B),
     where exp(μ_A) denotes the exponential function with base e (the natural constant) applied to the expected performance μ_A of algorithm A, exp(μ_B) denotes the same function applied to the expected performance μ_B of algorithm B, A and B index the algorithms in the set {ILP, GA, BT}, ILP denotes the integer linear programming algorithm, GA denotes the genetic algorithm, and BT denotes the backtracking and greedy heuristic algorithm; the process-level resource weights are used to allocate CPU time, parallel threads or permitted iteration counts;
     based on the uncertainty, the weights for result fusion are defined as:
     v_A = (1 / (σ_A + ε)) / Σ_{B ∈ {ILP, GA, BT}} (1 / (σ_B + ε)),
     where σ_A and σ_B denote the standard deviations (uncertainties) of the predicted performance of algorithm A and algorithm B respectively, and ε is a positive constant;
     an exponential moving average is introduced:
     w_A^(t) = λ · w_A^(t−1) + (1 − λ) · w_A,
     where λ is the smoothing factor of the exponential moving average, with value range (0, 1); the closer λ is to 1, the larger the share of the historical weight and the smoother the change of the current weight; w_A^(t) denotes the process-level resource weight of algorithm A in the t-th period;
     and setting a floor value for the weights.
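As an illustration of the dynamic-weight computation described in claim 1, the following Python sketch implements a softmax over expected performance, an inverse-uncertainty fusion weight, and an exponentially smoothed update with a floor value. The exact functional forms of the fusion weight and the floor renormalization are assumptions, since the claim's formulas are reconstructed from their verbal definitions.

```python
import numpy as np

def process_level_weights(mu):
    """Softmax over expected performance mu_A, A in {ILP, GA, BT}."""
    e = np.exp(mu - np.max(mu))  # shift by max for numerical stability
    return e / e.sum()

def fusion_weights(sigma, eps=1e-6):
    """Uncertainty-based result-fusion weights: lower predictive
    standard deviation -> higher weight (assumed inverse form)."""
    inv = 1.0 / (np.asarray(sigma, dtype=float) + eps)
    return inv / inv.sum()

def ema_update(w_prev, w_new, lam=0.8, floor=0.05):
    """Exponential moving average of the process-level weights, with a
    floor value (assumed) so no algorithm is starved of resources."""
    w = lam * np.asarray(w_prev) + (1.0 - lam) * np.asarray(w_new)
    w = np.maximum(w, floor)
    return w / w.sum()
```

With λ near 1 the smoothed weights track history closely, which matches the claim's description of smoother weight changes.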
  2. The Bayesian-neural-network-based multi-algorithm conflict elimination fusion method according to claim 1, wherein performing feature extraction and problem modeling on the processor resource allocation conflict elimination problem in the decision system to be solved, to obtain standardized features, comprises:
     expressing the processor resource allocation conflict elimination problem in the decision system to be solved as a target set T = {T_1, …, T_N}, each target T_i having a scheme set P_i, where scheme P_{ij} requires a resource set R_{ij} and yields efficacy E_{ij}, subject to the resource upper-limit constraint:
     Σ_{i,j} x_{ij} · u_{ijk} ≤ C_k for every resource k,
     where u_{ijk} indicates whether scheme P_{ij} uses resource k, C_k is the inventory upper limit of resource k, and x_{ij} ∈ {0, 1} indicates whether the scheme is selected;
     extracting scene features to form the feature vector X = [S, R, SCAR, DIV, GAP, FR, CS, N], where S denotes the problem scale, R the constraint sparsity, SCAR the resource scarcity, DIV the diversity index of the candidate solutions, GAP the gap between the current best feasible solution and the theoretical upper bound, FR the proportion of solutions satisfying all constraints during random sampling or heuristic initial-solution generation, CS the conflict density between schemes of different targets, and N the number of targets;
     and standardizing the feature vector to obtain the standardized features.
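A minimal sketch of the resource upper-limit check and the feature standardization described in claim 2; the variable names (usage matrix, capacity vector) are illustrative, not the claim's own notation.

```python
import numpy as np

def satisfies_resource_limits(x, usage, capacity):
    """Resource upper-limit constraint: for every resource k, the total
    usage of the selected schemes must not exceed capacity[k].
    x is a 0/1 selection vector over schemes; usage is schemes x resources."""
    return bool(np.all(usage.T @ x <= capacity))

def standardize(features, mean, std):
    """Z-score standardization of the scene feature vector
    [S, R, SCAR, DIV, GAP, FR, CS, N] using training-set statistics."""
    std = np.where(np.asarray(std) > 0, std, 1.0)  # guard zero variance
    return (np.asarray(features) - np.asarray(mean)) / std
```

Standardizing with training-set statistics keeps the inference-time inputs on the scale the Bayesian neural network main model was trained on.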
  3. The Bayesian-neural-network-based multi-algorithm conflict elimination fusion method according to claim 1, wherein before inputting the standardized features into the pre-trained main model of the Bayesian neural network to obtain the performance distributions of different algorithms on the conflict elimination problem, the method further comprises training the main model of the Bayesian neural network, specifically:
     collecting a historical task set as training samples, each sample containing input features X_i and algorithm performance labels y_i = [y_ILP, y_GA, y_BT], where y_ILP denotes the performance the integer linear programming algorithm achieved when solving that historical task, y_GA the performance the genetic algorithm achieved, and y_BT the performance the backtracking and greedy heuristic algorithm achieved;
     setting maximization of the evidence lower bound as the optimization objective, satisfying:
     max E_{q(w)}[ log p(y | X, w) ] − KL( q(w) ‖ p(w) ),
     where p(w) is the weight prior, q(w) is the weight posterior, E_{q(w)}[·] denotes the mathematical expectation with respect to the weight posterior q(w), KL(q(w) ‖ p(w)) denotes the Kullback–Leibler divergence measuring the difference between the weight posterior q(w) and the weight prior p(w), and p(y | X, w) denotes the likelihood;
     and training the main model of the Bayesian neural network on the training samples with this optimization objective, so that at inference time the model simultaneously outputs the expected performance μ achievable within a given time budget and the confidence σ of that expected performance, for subsequent weight calculation and initialization guidance.
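The evidence-lower-bound objective in claim 3 can be sketched for a diagonal-Gaussian weight posterior with a standard-normal prior. The closed-form KL term below is standard for Gaussians; estimating the likelihood expectation by Monte Carlo samples from the posterior is an assumption about the training procedure.

```python
import numpy as np

def kl_diag_gaussian(mu_q, sigma_q, mu_p=0.0, sigma_p=1.0):
    """Closed-form KL(q(w) || p(w)) between diagonal Gaussians over weights."""
    mu_q, sigma_q = np.asarray(mu_q), np.asarray(sigma_q)
    return float(np.sum(np.log(sigma_p / sigma_q)
                        + (sigma_q**2 + (mu_q - mu_p)**2) / (2 * sigma_p**2)
                        - 0.5))

def elbo(mc_log_likelihoods, mu_q, sigma_q):
    """ELBO = E_{q(w)}[log p(y | X, w)] - KL(q(w) || p(w)), with the
    expectation estimated from Monte Carlo samples w ~ q(w)."""
    return float(np.mean(mc_log_likelihoods)) - kl_diag_gaussian(mu_q, sigma_q)
```

Maximizing this quantity is equivalent to the objective stated in the claim: fit the performance labels while keeping the posterior close to the prior.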
  4. The Bayesian-neural-network-based multi-algorithm conflict elimination fusion method according to claim 1, wherein inputting the standardized features into the pre-trained main model of the Bayesian neural network to obtain the performance distributions of different algorithms on the conflict elimination problem comprises:
     inputting the standardized task features X into the pre-trained main model of the Bayesian neural network, and outputting, for the conflict elimination problem to be solved, the prediction for each algorithm A ∈ {ILP, GA, BT} as the normal distribution N(μ_A, σ_A²),
     where μ_A is the expected performance achievable within the given time budget, σ_A is the confidence of that expectation, N(·) denotes a normal distribution, ILP denotes the integer linear programming algorithm, GA denotes the genetic algorithm, and BT denotes the backtracking and greedy heuristic algorithm;
     if the confidence is insufficient, i.e. below an empirical threshold τ, the expected performance is calculated as:
     μ̂_A = α · μ_A + (1 − α) · g_A(X),
     where α is a weighting coefficient and g_A(X) is a complementary evaluation function of the performance of algorithm A based on the current task features X;
     and fusing the existing trigger indices with the prediction to obtain the final performance distribution.
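A sketch of the low-confidence fallback in claim 4: when the model's confidence for an algorithm falls below the empirical threshold τ, the predicted mean is blended with a complementary evaluation g_A(X). The linear blend form and the parameter names are assumptions consistent with the claim's verbal description.

```python
def fused_expected_performance(mu, confidence, g_value, tau=0.7, alpha=0.5):
    """Blend the BNN prediction mu with a complementary evaluation g_value
    when confidence < tau; otherwise trust the prediction as-is."""
    if confidence < tau:
        return alpha * mu + (1.0 - alpha) * g_value
    return mu
```

A usage example: a prediction of 0.9 with confidence 0.4 and g_A(X) = 0.5 yields a blended expectation of 0.7 under the default α = 0.5.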
  5. The Bayesian-neural-network-based multi-algorithm conflict elimination fusion method according to claim 1, wherein the training data of the warm-start sub-model of the Bayesian neural network are the task features X and the optimal initialization parameters θ* of the corresponding algorithms in historical runs; the optimization objective is:
     min E_{q(w)}[ ‖θ* − θ̂‖² ] + β · KL( q(w) ‖ p(w) ),
     where E_{q(w)}[·] denotes the mathematical expectation with respect to the posterior distribution q(w) over the weights, θ* are the optimal initialization parameters of the corresponding algorithm in historical runs for the task features X, θ̂ are the predicted optimal initialization parameters, β is a weighting coefficient, and KL(q(w) ‖ p(w)) denotes the Kullback–Leibler divergence between the weight posterior q(w) and the weight prior p(w);
     the sub-model is structured and trained independently for each algorithm; the online updating step records the actual convergence trajectory and its difference from the prediction after each run, and periodically adjusts the parameters of the warm-start sub-model by fine-tuning.
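The warm-start sub-model objective in claim 5 pairs a fit term on historically optimal initialization parameters with a KL regularizer. The squared-error fit term and the β coefficient in this sketch are assumed forms reconstructed from the claim's verbal description.

```python
import numpy as np

def warm_start_loss(theta_star, theta_pred_samples, kl_term, beta=0.1):
    """Expected squared error between the historically optimal
    initialization parameters theta* and predictions theta_hat drawn
    from the weight posterior, plus beta * KL(q(w) || p(w))."""
    theta_star = np.asarray(theta_star, dtype=float)
    errs = [np.mean((np.asarray(t, dtype=float) - theta_star) ** 2)
            for t in theta_pred_samples]
    return float(np.mean(errs) + beta * kl_term)
```

Each algorithm's sub-model would be trained on its own (X, θ*) pairs, matching the claim's per-algorithm structure.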
  6. The Bayesian-neural-network-based multi-algorithm conflict elimination fusion method according to claim 1, wherein inputting the dynamic weights into the pre-trained warm-start sub-model of the Bayesian neural network, and performing multi-algorithm collaborative solving and initialization guidance to obtain the computing power, time and parallelism proportion of each algorithm when running in the decision system, comprises:
     the genetic algorithm module, using the trained Bayesian neural network model, outputs initial population distribution parameters for guiding sampling to generate high-fitness individuals;
     the backtracking and greedy heuristic algorithm module, using the trained Bayesian neural network model, outputs a variable priority order and a pruning threshold distribution;
     the integer linear programming algorithm module, using the trained Bayesian neural network model, predicts the probability of the tight constraint set and selects the initial relaxation set according to that probability;
     the genetic algorithm, the backtracking and greedy heuristic algorithm, and the integer linear programming algorithm run in parallel, with resources allocated according to the process-level resource weights; if the prediction shows that the superiority of a certain algorithm exceeds a set threshold, a set cascading mode is adopted, including refinement from the genetic algorithm to the integer linear programming algorithm, and verification from the backtracking and greedy heuristic algorithm to the integer linear programming algorithm;
     during the parallel run, the dynamic feedback is:
     F_A(t) = ( P_A(t) − P*(t−1) ) / Δt,
     where F_A(t) denotes the run-time feedback, P_A(t) denotes the performance value of algorithm A at time t, P*(t−1) denotes the historically best performance value obtained by any algorithm up to time t−1, and Δt denotes the time interval, i.e. the length of time from time t−1 to time t; the process-level resource weights of the next period are adjusted in real time according to F_A(t).
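The run-time feedback in claim 6 measures each algorithm's performance gain per unit time against the best value found so far. The multiplicative reweighting step below is an illustrative assumption for how the next period's process-level weights could be adjusted from that feedback.

```python
import numpy as np

def performance_gain(p_now, p_best_prev, dt):
    """F_A(t): gain of algorithm A over the historical best, per unit time."""
    return (p_now - p_best_prev) / dt

def adjust_weights(weights, gains, eta=0.5):
    """Shift next-period process-level resource weights toward algorithms
    with positive recent gain (assumed exponentiated-gradient style update)."""
    w = np.asarray(weights) * np.exp(eta * np.asarray(gains))
    return w / w.sum()
```

An algorithm that made no progress (gain 0) keeps its relative weight, while one that beat the historical best gains share in the next period.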
  7. The Bayesian-neural-network-based multi-algorithm conflict elimination fusion method according to claim 1, wherein determining the optimal algorithm combination according to the computing power, time and parallelism proportion of each algorithm comprises:
     calculating a multi-objective composite score from the output of each algorithm:
     S_A = Σ_k β_k · m_{A,k},
     where m_{A,k} is the k-th evaluation metric, the metrics including efficacy, balance, robustness, timeliness and interpretability, and β_k is the weight coefficient of the k-th evaluation metric in the composite score;
     performing a weighted calculation according to the result-fusion weights:
     S̃ = Σ_A v_A · S_A,
     where v_A is the result-fusion weight and S_A is the composite score of algorithm A;
     and selecting the solution with the highest composite score as the final scheme, to obtain the optimal algorithm combination.
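The selection step in claim 7 can be sketched as a weighted composite score over the evaluation metrics followed by an argmax; the metric ordering and the dictionary keys below are illustrative, not the patent's own notation.

```python
import numpy as np

def composite_score(metrics, beta):
    """S_A = sum_k beta_k * m_{A,k} over the evaluation metrics
    (efficacy, balance, robustness, timeliness, interpretability)."""
    return float(np.dot(beta, metrics))

def select_best(solutions, beta):
    """Return the candidate solution with the highest composite score."""
    return max(solutions, key=lambda s: composite_score(s["metrics"], beta))
```

In the fused variant described by the claim, each algorithm's score would additionally be scaled by its result-fusion weight v_A before comparison.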
  8. A multi-algorithm conflict elimination fusion device based on a Bayesian neural network, characterized by comprising:
     a feature extraction and problem modeling module, configured to perform feature extraction and problem modeling on the processor resource allocation conflict elimination problem in the decision system to be solved, to obtain standardized features;
     a performance distribution prediction module, configured to input the standardized features into a pre-trained main model of the Bayesian neural network to obtain the performance distributions of different algorithms on the conflict elimination problem;
     a dynamic weight calculation module, configured to calculate dynamic weights of the different algorithms according to the performance distributions;
     a collaborative solving module, configured to input the dynamic weights into a pre-trained warm-start sub-model of the Bayesian neural network and perform multi-algorithm collaborative solving and initialization guidance, to obtain the computing power, time and parallelism proportion of each algorithm when running in the decision system;
     and a determining module, configured to determine an optimal algorithm combination according to the computing power, time and parallelism proportion of each algorithm, so as to eliminate the processor resource allocation conflict in the decision system;
     wherein calculating the dynamic weights of the different algorithms according to the performance distributions comprises:
     taking the expected performance μ_A as input, and obtaining the process-level resource weights of the integer linear programming algorithm, the genetic algorithm, and the backtracking and greedy heuristic algorithm through the softmax function:
     w_A = exp(μ_A) / Σ_{B ∈ {ILP, GA, BT}} exp(μ_B),
     where exp(μ_A) denotes the exponential function with base e (the natural constant) applied to the expected performance μ_A of algorithm A, exp(μ_B) denotes the same function applied to the expected performance μ_B of algorithm B, A and B index the algorithms in the set {ILP, GA, BT}, ILP denotes the integer linear programming algorithm, GA denotes the genetic algorithm, and BT denotes the backtracking and greedy heuristic algorithm; the process-level resource weights are used to allocate CPU time, parallel threads or permitted iteration counts;
     based on the uncertainty, the weights for result fusion are defined as:
     v_A = (1 / (σ_A + ε)) / Σ_{B ∈ {ILP, GA, BT}} (1 / (σ_B + ε)),
     where σ_A and σ_B denote the standard deviations (uncertainties) of the predicted performance of algorithm A and algorithm B respectively, and ε is a positive constant;
     an exponential moving average is introduced:
     w_A^(t) = λ · w_A^(t−1) + (1 − λ) · w_A,
     where λ is the smoothing factor of the exponential moving average, with value range (0, 1); the closer λ is to 1, the larger the share of the historical weight and the smoother the change of the current weight; w_A^(t) denotes the process-level resource weight of algorithm A in the t-th period;
     and setting a floor value for the weights.

Description

Multi-algorithm conflict elimination fusion method and device based on Bayesian neural network

Technical Field

The application relates to the technical field of adaptive fusion solving, and in particular to a multi-algorithm conflict elimination fusion method and device based on a Bayesian neural network.

Background

In intelligent optimization and autonomous decision systems, multi-objective conflict elimination problems arise widely in resource allocation, task scheduling, path planning, cooperative control and related fields. The core task is to coordinate resource allocation and task execution under multiple constraint conditions so as to eliminate conflicts and maximize overall efficiency. Because such problems typically involve discrete variables, nonlinear constraints, multi-objective coupling and resource scarcity, they are usually NP-hard, and conventional algorithms must trade off solution speed, optimality and stability. The current mainstream methods include Integer Linear Programming (ILP), the Genetic Algorithm (GA), and the backtracking and greedy heuristic algorithm (Backtracking, BT). ILP obtains a globally optimal solution by building an exact linear model, offering verifiable solutions and a complete theory; however, in scenarios with large-scale tasks or complex constraints, its computation time grows exponentially, easily causing real-time solving to fail. GA performs a global search through population evolution and suits nonlinear, high-dimensional problems, but it is highly parameter-sensitive, prone to premature convergence, and its solution stability depends on random initialization.
The backtracking and greedy heuristic algorithm offers good interpretability and low computational cost and is suitable for quickly obtaining feasible solutions, but it is limited by its heuristic strategy, easily falls into local optima, and its overall efficiency is insufficient. To integrate the advantages of each algorithm, researchers have proposed multi-algorithm fusion frameworks, generally adopting layered cooperation of ILP, GA and backtracking algorithms to achieve a three-layer "exact-global-heuristic" complementarity. However, traditional fusion systems still have several defects. On one hand, they cannot automatically learn algorithm suitability: existing systems usually select an algorithm by preset rules (such as the number of targets, the number of schemes and the constraint sparsity). Such "manually triggered" decisions cannot reflect the nonlinear relationships among features. For example, when the constraint density is high and the scarcity varies, the performance ranking of different algorithms may alternate, and a fixed threshold cannot track this trend dynamically, causing algorithm selection to lag or err. Such systems cannot learn from their historical performance or predict the optimal strategy in different scenarios, so their adaptability is clearly insufficient. On the other hand, weight scheduling lacks dynamic feedback: traditional fusion methods assign fixed weights or time budgets before the algorithms run and do not account for changes in each algorithm's actual contribution during the run. If an algorithm performs well early but converges slowly later, the system still allocates it excessive resources, wasting computation.
Without online feedback based on performance gain per unit time, the fusion process lacks closed-loop adjustment capability, and both overall convergence speed and result quality suffer. Furthermore, the fusion process is fragmented and information sharing is insufficient: most multi-algorithm systems run as independent modules, and neither data nor intermediate results are exchanged between algorithms. For example, high-quality individuals generated by the GA cannot guide variable-bound optimization in the ILP, nor can they inform the backtracking algorithm's pruning strategy. As a result, each algorithm repeatedly explores similar regions and resource utilization is low. Without a unified information interaction layer, the fusion structure is merely parallel or cascaded rather than truly cooperative optimization. Uncertainty modeling and risk perception are also lacking: existing fusion frameworks measure algorithm performance with only a single index (e.g., average performance), ignoring the volatility and stability of algorithm outputs. In practical application, algorithm performance is significantly affected by random initialization, data perturbation, constraint changes and the like. Without quantified uncertainty, the system cannot judge the reliability of the current result.