CN-116225649-B - Fine-granularity electric power task cloud edge collaborative optimization scheduling method
Abstract
A cloud edge collaborative optimization scheduling method for fine-grained power tasks belongs to the technical field of edge computing of the power distribution Internet of Things. The method comprises the following steps: step a, arranging an edge computing network scene; step b, establishing a task model and determining task information; step c, considering a micro-service processing model of the service configuration; step d, calculating micro-service time delay and energy consumption; step e, establishing scheduling constraints and objective functions; step f, designing a task scheduling algorithm based on NSGA-II; step g, calculating weights based on fuzzy logic; and step h, ranking all solutions on the optimal front. In the fine-grained power task cloud edge collaborative optimization scheduling method, micro-service execution constraints, task queuing, service configuration, device resources and task attributes are comprehensively considered, so that task execution time and energy consumption are reduced, and the prior-art problems that the resources of a single edge device are limited and that local low-delay, low-energy processing of all tasks is difficult to realize are solved.
Inventors
- CHEN YU
- CHENG QIAN
- SUN LINGYAN
- PENG KE
- WANG WEI
- WANG JINGHUA
Assignees
- Shandong University of Technology (山东理工大学)
Dates
- Publication Date
- 20260512
- Application Date
- 20230228
Claims (6)
- 1. A cloud edge collaborative optimization scheduling method for fine-grained power tasks, characterized by comprising the following steps: step a, determining the number of edge devices and cloud ends, setting up communication between the edge devices and the cloud, and creating containers and devices; step b, establishing a task model and determining task information; step c, considering a micro-service processing model of the service configuration; step d, calculating micro-service time delay and energy consumption; step e, establishing scheduling constraints and objective functions; step f, designing a task scheduling algorithm based on NSGA-II; step g, calculating weights based on fuzzy logic; step h, ranking all solutions on the optimal front by the technique of ordering by similarity to the ideal solution, calculating the score of each scheme, and selecting the scheme with the maximum score as the scheduling scheme of the task. In step c, a map() function is defined to represent the correspondence between micro-services and containers: for any micro-service in the set of all micro-services and any container in the set of all service containers, map() indicates which container processes the micro-service. The running and deployment state of the containers on a device changes dynamically, so when a micro-service is allocated to a device there are three cases: case 1, the device is configured with the container required by the micro-service; if that container is handling other micro-services, the micro-service enters the container's waiting queue, and if the container is idle, the micro-service is executed immediately; case 2, the device is not configured with the required container but has sufficient remaining resources; the image file of the required container is downloaded from the cloud and the container is put into operation to process the micro-service; case 3, the device is not configured with the required container and its resources are insufficient; one of the configured containers is replaced, the replaced container is removed from the device's container list, and a newly configured container is added to handle the micro-service. In step d, the micro-service time delay comprises the execution time of the micro-services, the data transmission time between micro-services and the waiting time of the micro-services, and the micro-service energy consumption comprises the micro-service execution energy consumption and the data transmission energy consumption.
- 2. The fine-grained power task cloud edge collaborative optimization scheduling method according to claim 1, characterized in that step b comprises the following steps: step b-1, the fine-grained power task based on micro-services is modeled as a directed acyclic graph Q_u = {A_u, B_u, G_u}, wherein A_u is the set of nodes of the graph, each node representing a micro-service called by the task; B_u is the set of directed edges of the graph, each directed edge representing a dependency between micro-services, and for two dependent micro-services the one transmitting data is called the front micro-service and the one receiving data is called the rear micro-service; G_u is the set of data volumes transmitted between micro-services; step b-2, task topology modification: a micro-service without any front micro-service is called an entry micro-service, and a micro-service without any rear micro-service is called an exit micro-service; a virtual entry micro-service and a virtual exit micro-service that occupy no time or resources are added, the two virtual micro-services do not participate in scheduling, and the graph structure is modified accordingly; step b-3, the set of power tasks is represented as a collection, and each task carries four items of basic information: the task's arrival time; its delay constraint, i.e. the task must be completed within that duration; its type value, which distinguishes conventional tasks, alarm tasks and fault-processing tasks; and its input data volume.
- 3. The fine-grained power task cloud edge collaborative optimization scheduling method according to claim 1, characterized in that step g comprises the following steps: step g-1, fuzzification: the membership degree of each variable is determined through a triangular membership function; step g-2, fuzzy inference: the corresponding fuzzy variables are inferred according to the fuzzy rules and used for defuzzification; step g-3, defuzzification: the inferred fuzzy variables are defuzzified with the centroid method, in which the input and output variables are weighted by the membership degrees obtained in step g-1, yielding the time-delay weight of the task and the energy-consumption weight of the task.
- 4. The fine-grained power task cloud edge collaborative optimization scheduling method according to claim 3, characterized in that in step g-1 the membership degree of each variable is determined through a triangular membership function whose parameters are given real numbers that depend on the task parameters. Among the task information, the delay constraint, the task type value and the input data volume have an obvious influence on the time-delay weight, so these are taken as inputs and the time-delay weight of the task as output; low, medium and high are taken as the linguistic variables of the delay and the input data volume; conventional, alarm and fault are taken as the linguistic variables of the task type value; and very low, medium and high are taken as the linguistic variables of the time-delay weight.
- 5. The fine-grained power task cloud edge collaborative optimization scheduling method according to claim 1, wherein the micro-service execution time is the number of CPU cycles required to execute the micro-service, equal to the amount of data the micro-service needs to process multiplied by the number of CPU cycles required to process unit data, divided by the processing speed of the corresponding container on the device; the container processing speeds must satisfy the computing resource constraint that the sum of the processing speeds of all containers running concurrently on a device does not exceed the processing speed of the device. The data transmission time between two dependent micro-services is the transmitted data volume divided by the data transmission rate between the devices executing them. When a micro-service is allocated to a device, its waiting time is determined by the finish time of the tail micro-service of the waiting queue of the corresponding container on that device, i.e. the time at which the container becomes idle, or, when the container's image must be fetched, by the time to download the image file from the central cloud, equal to the image file size divided by the data transmission rate between the central cloud and the device.
- 6. The fine-grained power task cloud edge collaborative optimization scheduling method according to claim 1, wherein the micro-service execution energy consumption is determined by the energy-efficiency coefficient of the device's CPU, the processing speed of the corresponding container on the device, and the number of CPU cycles required to execute the micro-service. The data transmission energy consumption comprises the energy consumed in transmitting data between dependent micro-services and the energy consumed in transmitting container image files: each is the data transmission power of the device (or of the cloud, for image files) per unit time multiplied by the corresponding transmission time, where the transmission time depends on the data transmission rate between the two devices, or between the central cloud and the device, and on the size of the container image file.
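The three container-allocation cases of claim 1 (step c) can be sketched as a small state model. This is a minimal illustration only: the `Device`/`Container` classes, the `capacity` resource check, and the idle-victim replacement policy are assumptions, not the patent's notation.

```python
from dataclasses import dataclass, field

@dataclass
class Container:
    service_type: str
    image_size: float          # MB; downloaded from the cloud if absent
    busy: bool = False
    queue: list = field(default_factory=list)

@dataclass
class Device:
    capacity: int              # max containers the device can host (stand-in for resources)
    containers: dict = field(default_factory=dict)  # service_type -> Container

def assign(device: Device, ms_type: str, image_size: float) -> str:
    """Apply the three cases of step c to one micro-service request."""
    c = device.containers.get(ms_type)
    if c is not None:                               # case 1: required container exists
        if c.busy:
            c.queue.append(ms_type)                 # wait in the container's queue
            return "case1-queued"
        c.busy = True                               # idle -> execute immediately
        return "case1-immediate"
    if len(device.containers) < device.capacity:    # case 2: room to deploy
        device.containers[ms_type] = Container(ms_type, image_size, busy=True)
        return "case2-download-and-run"
    # case 3: no room -> replace a configured container (an idle one if any)
    victim = next((t for t, v in device.containers.items() if not v.busy),
                  next(iter(device.containers)))
    del device.containers[victim]
    device.containers[ms_type] = Container(ms_type, image_size, busy=True)
    return "case3-replace"
```

A real scheduler would pick the replacement victim by cost (queue length, image size), not simply the first idle container.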
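The task-topology modification of claim 2 (step b-2), which adds zero-cost virtual entry and exit micro-services to the task DAG Q_u = {A_u, B_u, G_u}, can be sketched as follows; the `"ENTRY"`/`"EXIT"` identifiers and the dict-of-edges encoding are illustrative assumptions.

```python
def add_virtual_nodes(nodes, edges):
    """Add virtual entry/exit micro-services to a task DAG.

    nodes: set of micro-service ids (A_u); edges: dict (u, v) -> data
    volume (B_u with G_u). Entry micro-services have no front
    micro-service, exit micro-services no rear micro-service; the two
    virtual nodes carry zero data volume, so they consume no time or
    resources and do not participate in scheduling.
    """
    preds = {v for (_, v) in edges}        # nodes that receive data
    succs = {u for (u, _) in edges}        # nodes that send data
    entries = [n for n in nodes if n not in preds]
    exits = [n for n in nodes if n not in succs]
    new_nodes = set(nodes) | {"ENTRY", "EXIT"}
    new_edges = dict(edges)
    for n in entries:
        new_edges[("ENTRY", n)] = 0
    for n in exits:
        new_edges[(n, "EXIT")] = 0
    return new_nodes, new_edges
```

With the virtual nodes in place, every task has a unique source and sink, which simplifies computing the task's overall start and finish times.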
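The fuzzy-logic weighting of claims 3 and 4 (triangular membership followed by centroid defuzzification) can be sketched as below. The original formulas were lost in translation, so these are the standard textbook forms assumed to match the patent's steps g-1 and g-3; the function names and the discrete sampling of the output variable are illustrative.

```python
def tri_membership(x, a, b, c):
    """Triangular membership function with feet a, c and peak b (a < b < c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def centroid_defuzzify(xs, mus):
    """Discrete centroid (center-of-gravity) defuzzification:
    sum(x * mu(x)) / sum(mu(x)) over sampled output values xs with
    membership degrees mus produced by the fuzzy-inference step."""
    den = sum(mus)
    return sum(x * m for x, m in zip(xs, mus)) / den if den else 0.0
```

In the patent's setting the defuzzified output is the task's time-delay weight, with delay, task type value and input data volume as the fuzzified inputs.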
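The delay and energy quantities of claims 5 and 6 can be sketched as simple helpers. The garbled translation leaves the exact symbols unknown, so the parameter names, and in particular the quadratic CPU energy model, are assumptions consistent with common edge-computing formulations, not the patent's formulas.

```python
def execution_time(data_volume, cycles_per_unit, container_speed):
    """Execution time = required CPU cycles / container processing speed,
    where required cycles = data volume * cycles per unit of data."""
    return data_volume * cycles_per_unit / container_speed

def transmission_time(data_volume, rate):
    """Transfer time between two dependent micro-services on different
    devices, or cloud-to-device for a container image file."""
    return data_volume / rate

def execution_energy(kappa, container_speed, cycles):
    """Execution energy with the common kappa * f^2 * C CPU model; the
    claim names only the energy-efficiency coefficient, the processing
    speed and the cycle count, so the quadratic form is an assumption."""
    return kappa * container_speed ** 2 * cycles

def transmission_energy(power, data_volume, rate):
    """Transfer energy = transmission power * transfer time."""
    return power * transmission_time(data_volume, rate)
```

Per claim 5, the sum of the speeds of containers running concurrently on one device must not exceed the device's processing speed; that constraint would be enforced by the scheduler, not by these helpers.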
Description
Fine-granularity electric power task cloud edge collaborative optimization scheduling method
Technical Field
A cloud edge collaborative optimization scheduling method for fine-grained power tasks belongs to the technical field of edge computing of the power distribution Internet of Things.
Background
Against the background of the construction of novel power systems, emerging application scenarios such as user-side distributed power supply access, intelligent charging and discharging of electric vehicles, and low-carbon buildings cause the types and quantity of services to increase continuously, so that cloud computing shows many shortcomings in real-time performance and energy consumption. Edge computing is an extension of cloud computing: computing capacity is migrated from the centralized cloud to edge devices, power tasks can be processed quickly and nearby, and the quality of service is improved. A traditional cloud platform publishes applications through virtual machines in a monolithic mode: each application is a whole that must be scheduled as a whole, so flexibility is poor and the time and energy costs are high. For the emerging incremental services, the edge devices adopt a micro-service architecture and container technology so that one physical device can process different services. The edge-side services of the distribution network have a complex structure, and the requirements of different services on edge computing functions and performance differ greatly, so the services rely on diversified edge-side computing capabilities such as real-time computing, control response and intelligent reasoning. The resources of a single edge device are limited, only a limited number of services can be configured at the same time, and local low-delay, low-energy processing of all tasks is difficult to realize.
Therefore, designing a technical scheme that exploits the independent operation of micro-services to perform service configuration and task scheduling, fully utilizes cloud edge resources, reduces task time delay and system energy consumption, and improves the task completion rate is a problem to be solved in the field.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a fine-grained electric power task cloud edge collaborative optimization scheduling method in which micro-service execution constraints, task queuing, service configuration, device resources and task attributes are comprehensively considered, and task execution time and energy consumption are reduced. The technical scheme adopted by the invention to solve this technical problem is a cloud edge collaborative optimization scheduling method for fine-grained electric power tasks, characterized by comprising the following steps: step a, determining the number of edge devices and cloud ends, setting up communication between the edge devices and the cloud, and creating containers and devices; step b, establishing a task model and determining task information; step c, considering a micro-service processing model of the service configuration; step d, calculating micro-service time delay and energy consumption; step e, establishing scheduling constraints and objective functions; step f, designing a task scheduling algorithm based on NSGA-II; step g, calculating weights based on fuzzy logic; and step h, ranking all solutions on the optimal front by the technique of ordering by similarity to the ideal solution, calculating the score of each scheme, and selecting the scheme with the maximum score as the scheduling scheme of the task.
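Step h's ranking of the Pareto-optimal front by similarity to the ideal solution can be sketched as a standard TOPSIS scoring over the two objectives (delay, energy). The patent gives no formulas at this point, so the vector normalization, the two-objective layout and the function name below are assumptions.

```python
import math

def topsis_scores(front, weights):
    """Score each solution on the Pareto front with TOPSIS.

    front: list of (delay, energy) objective vectors, lower is better;
    weights: (w_delay, w_energy), e.g. from the fuzzy-logic step g.
    Returns one score in [0, 1] per solution; higher is better.
    """
    # vector-normalize each objective column
    norms = [math.sqrt(sum(sol[j] ** 2 for sol in front)) or 1.0
             for j in range(2)]
    v = [[w * sol[j] / norms[j] for j, w in enumerate(weights)]
         for sol in front]
    ideal = [min(col) for col in zip(*v)]    # both objectives minimized
    nadir = [max(col) for col in zip(*v)]    # anti-ideal point
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)        # distance to ideal solution
        d_neg = math.dist(row, nadir)        # distance to anti-ideal
        scores.append(d_neg / (d_pos + d_neg) if d_pos + d_neg else 1.0)
    return scores
```

Per step h, the scheduling scheme whose score is maximal is then selected as the task's scheduling scheme.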
Preferably, step b comprises the following steps: step b-1, the fine-grained power task based on micro-services is modeled using a directed acyclic graph, denoted Q_u = {A_u, B_u, G_u}, wherein A_u is the set of nodes of the graph, each node representing a micro-service called by the task; B_u is the set of directed edges of the graph, each directed edge representing a dependency between micro-services, and for two dependent micro-services the one transmitting data is called the front micro-service and the one receiving data the rear micro-service; G_u is the set of data volumes transmitted between micro-services; step b-2, task topology modification: a micro-service without any front micro-service is called an entry micro-service, and a micro-service without any rear micro-service is called an exit micro-service; a virtual entry micro-service and a virtual exit micro-service that occupy no time or resources are added, the two virtual micro-services do not participate in scheduling, and the graph structure is modified as