CN-121979657-A - Graph-neural-network-based online computation offloading method and system for edge computing devices
Abstract
The invention discloses a graph-neural-network-based online computation offloading method and system for edge computing devices. The method comprises the following steps: preprocessing input data to construct a node feature matrix of the edge computing devices; constructing a graph convolution network (GCN) model that convolves the graph feature data to generate offloading probabilities; generating a number of candidate adaptive offloading decisions from the offloading probabilities, evaluating the weighted computation rate of each candidate, and selecting the optimal offloading decision; and training and updating the model through experience replay using the generated optimal offloading decisions. By offloading computation tasks to the network edge, the method effectively alleviates the delay and energy-consumption problems caused by cloud processing and enables efficient utilization of various edge computing devices in the power grid domain.
Inventors
- HOU CONGYING
- SHU WEN
- CHENG CONG
- CHEN BIN
- HONG YAN
- ZHANG YONGQIAO
- WU MANJIN
Assignees
- 国电南瑞科技股份有限公司
Dates
- Publication Date
- 2026-05-05
- Application Date
- 2025-12-12
Claims (10)
- 1. An online computation offloading method for edge computing devices based on a graph neural network, characterized by comprising the following steps: (1) preprocessing input data to construct a node feature matrix of the edge computing devices; (2) constructing a GCN model that convolves the graph feature data to generate offloading probabilities; the GCN model performs feature extraction with a two-layer GCN structure, the first layer mapping node features to 64 dimensions and the second layer mapping them to 32 dimensions, with ReLU as the activation function: H^{(1)} = ReLU(Â X W^{(1)}), H^{(2)} = ReLU(Â H^{(1)} W^{(2)}), where X is the input feature matrix, Â is the normalized adjacency matrix, and W^{(1)} and W^{(2)} are the trainable weight matrices of the first and second layers, respectively; finally, the output layer maps the 32-dimensional features to 1 dimension through a fully connected layer and outputs the offloading probability of each user with a Sigmoid function: p = σ(H^{(2)} W^{(o)}), where W^{(o)} is the output-layer weight matrix and the Sigmoid activation function is σ(x) = 1/(1 + e^{-x}); (3) generating K candidate adaptive offloading decisions from the offloading probabilities, evaluating the weighted computation rate of each candidate, and selecting the optimal offloading decision; (4) updating the GCN model through experience-replay training using the generated optimal offloading decision.
- 2. The online computation offloading method for edge computing devices of a graph neural network according to claim 1, wherein in step (1), preprocessing the input data to construct the node feature matrix of the edge computing devices comprises: according to the number N of edge computing devices, setting their respective channel gain values h_i, computing the correlation information among users based on Gaussian similarity, and generating the adjacency matrix.
- 3. The online computation offloading method for edge computing devices of a graph neural network according to claim 2, wherein the adjacency matrix formula is: A_{ij} = exp(-(h_i - h_j)^2 / (2σ^2)), where σ is the Gaussian-similarity bandwidth parameter; the symmetrically normalized adjacency matrix is then computed as Â = D^{-1/2}(A + I)D^{-1/2}, where D is the degree matrix of A + I.
- 4. The online computation offloading method for edge computing devices of a graph neural network according to claim 1, wherein in step (3), K candidate adaptive offloading decisions are generated from the offloading probabilities, K being an adjustable parameter of at most N+1; evaluating the weighted computation rate of each candidate and selecting the optimal offloading decision comprises: the first decision is obtained by direct binarization against a set threshold, i.e., devices whose output exceeds the threshold offload to the edge server and devices below the threshold compute locally; the remaining K-1 decisions are obtained by flipping, in turn, the decisions of the most uncertain devices, ordered by the distance of each device's output value from the threshold.
- 5. The online computation offloading method for edge computing devices of a graph neural network according to claim 4, wherein the K candidate adaptive offloading decisions are generated as follows: (1) computing the absolute distance between each device's offloading probability and the set threshold; (2) sorting the devices in ascending order of absolute distance, so that the most uncertain devices come first, to obtain an index list; (3) flipping, in turn, the decisions of the first K-1 most uncertain users to generate new candidate decisions; after the K candidate decisions are obtained, computing the corresponding weighted computation rate of each, and finally selecting, from the K binary candidates, the decision that yields the highest computation rate as the offloading decision of the current time frame.
- 6. The online computation offloading method for edge computing devices of a graph neural network according to claim 1, wherein in step (3) the weighted computation rate is defined as follows: the total system time is divided into consecutive time frames of length T, typically on the order of a few seconds, which is short in a static internet-of-things environment; assuming the frame length is less than the channel coherence time, each time frame is divided into two phases. Phase one is the downlink wireless energy transfer phase: at the beginning of the time frame, for a duration aT, the devices receive the radio-frequency energy broadcast by the access point and harvest energy from the received signal. Phase two is the uplink task execution phase: the remaining time (1-a)T is used for task execution, and each device must select one of the following two modes according to the binary offloading strategy x ∈ {0, 1}: (a) local computation (x = 0), using the harvested energy to compute locally throughout the (1-a)T duration; (b) full offloading (x = 1), in which the entire task is offloaded: the device uses the harvested energy to upload its task data to the access point for remote execution, and the time the access point needs to compute the task and download the result back to the device is negligible compared with T. Let h denote the wireless channel gain between the access point and the device and P the access point's transmit power; the total energy harvested by the device in one time frame is E = μPhaT, where μ is the efficiency of the energy-harvesting circuit. Assuming the processor computing speed of the device is f, the energy consumed to execute the computation task in local mode is k f^3 (1-a)T, where k is the effective capacitance coefficient associated with the chip structure; the local computation rate is r_L = f(1-a)/φ, where φ denotes the number of CPU cycles required to process one bit of task data. Let the transmit power of the device be P_t; then P_t = E/((1-a)T), and according to Shannon's formula the uplink transmission rate, i.e., the offloading rate, is r_O = B log2(1 + P_t h / N_0), where B is the communication bandwidth and N_0 is the receiver noise power. When the access point serves N devices with N > 1, the local computation rate of device i is r_{L,i} = (1/φ)(μPa h_i / k_i)^{1/3} (1-a)^{2/3}, obtained by setting the processor speed to the maximum permitted by the harvested energy, k_i f_i^3 (1-a)T = μP h_i aT, where h_i is the wireless channel gain between device i and the access point and k_i is its effective capacitance coefficient. When full offloading is selected, let τ_i be the time allocation factor of the i-th offloading device, with a + Σ_i τ_i ≤ 1; the offloading rate can then be derived as r_{O,i} = B τ_i log2(1 + μPa h_i^2 / (τ_i N_0)). Assuming that only the wireless channel gains h_i are time-varying and the other system parameters are fixed within a time frame, the weighted sum computation rate maximization can be expressed as max Σ_i w_i ((1 - x_i) r_{L,i} + x_i r_{O,i}), where w_i is the weight of device i and x_i ∈ {0, 1} represents its offloading decision.
- 7. The online computation offloading method according to claim 1, wherein in step (4), updating the GCN model through experience-replay training using the generated optimal offloading decision comprises: the model maintains a fixed-size experience replay buffer of capacity M and, in each time frame, stores the current state-decision pair (h, x*) into the buffer, where h is the vector of channel gains between the devices and the access point and x* is the selected optimal offloading decision; when the buffer is full, new experiences overwrite the earliest stored ones; the model adopts a periodic training strategy, randomly sampling a mini-batch of experiences from the experience replay buffer for training at a fixed time-frame interval; random sampling reduces the correlation among training samples and accelerates convergence, and the training process uses a binary cross-entropy loss function.
- 8. An online computation offloading system for edge computing devices based on a graph neural network, comprising: a data preprocessing module for preprocessing input data and constructing the node feature matrix of the edge computing devices; a graph convolution module for constructing the GCN model and convolving the graph feature data to generate offloading probabilities; an offloading decision generation module for generating K candidate adaptive offloading decisions from the offloading probabilities, evaluating the weighted computation rate of each candidate, and selecting the optimal offloading decision; and a replay training module for updating the GCN model through experience-replay training using the generated optimal offloading decision.
- 9. A computing device comprising one or more processors, one or more memories, and one or more programs stored in the memories and configured to be executed by the processors, the programs, when loaded into the processors, implementing the online computation offloading method for edge computing devices of a graph neural network according to any one of claims 1 to 7.
- 10. A storage medium storing a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the online computation offloading method for edge computing devices of a graph neural network according to any one of claims 1 to 7.
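As an illustrative sketch (not part of the patent text), the two-layer GCN of claims 1 and 3 can be written in NumPy. The layer sizes and the symmetric normalization follow the claims (64 then 32 dimensions, ReLU, Sigmoid output); the input feature dimension, weight initialization, and example data are assumptions:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def normalize_adjacency(A):
    """Symmetric normalization with self-loops: D^{-1/2}(A + I)D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_forward(X, A, W1, W2, W_out):
    """Two GCN layers (features -> 64 -> 32 dims, ReLU) plus a fully
    connected Sigmoid output, yielding one offloading probability per device."""
    A_norm = normalize_adjacency(A)
    H1 = relu(A_norm @ X @ W1)           # shape (N, 64)
    H2 = relu(A_norm @ H1 @ W2)          # shape (N, 32)
    return sigmoid(H2 @ W_out).ravel()   # shape (N,), values in (0, 1)

# Illustrative example: 4 devices, 8 features per node (sizes are assumptions).
rng = np.random.default_rng(0)
N, F = 4, 8
X = rng.normal(size=(N, F))
A = rng.random((N, N))
A = (A + A.T) / 2.0                      # symmetric similarity matrix
np.fill_diagonal(A, 0.0)
probs = gcn_forward(
    X, A,
    0.1 * rng.normal(size=(F, 64)),
    0.1 * rng.normal(size=(64, 32)),
    0.1 * rng.normal(size=(32, 1)),
)
```

The Sigmoid output keeps every per-device probability strictly inside (0, 1), which the candidate-generation step then thresholds.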
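The candidate-generation and selection procedure of claims 4 and 5 can be sketched as follows. The rate function in the example is a placeholder stand-in, not the wireless-powered weighted computation rate of claim 6:

```python
import numpy as np

def generate_candidates(probs, K, threshold=0.5):
    """Claims 4-5: the first candidate binarizes the probabilities against the
    threshold; each of the remaining K-1 candidates flips the decision of the
    next most uncertain device (smallest |p - threshold|)."""
    probs = np.asarray(probs, dtype=float)
    base = (probs > threshold).astype(int)
    order = np.argsort(np.abs(probs - threshold))  # most uncertain first
    candidates = [base.copy()]
    for idx in order[:K - 1]:                      # at most N flips -> K <= N+1
        flipped = base.copy()
        flipped[idx] ^= 1
        candidates.append(flipped)
    return candidates

def select_best(candidates, rate_fn):
    """Evaluate the weighted computation rate of every candidate decision and
    return the one achieving the highest rate."""
    rates = [rate_fn(x) for x in candidates]
    return candidates[int(np.argmax(rates))]

# Toy example: 4 devices; the rate function simply rewards offloading more
# devices, which is NOT the patent's rate model.
probs = np.array([0.9, 0.48, 0.52, 0.1])
cands = generate_candidates(probs, K=3)
best = select_best(cands, rate_fn=lambda x: x.sum())
```

With K = 3 the two devices nearest the threshold (probabilities 0.48 and 0.52) are the ones flipped, so the search concentrates on exactly the decisions the GCN is least sure about.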
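The experience-replay scheme of claim 7 can be sketched as follows; the buffer capacity, batch size, and example data are illustrative assumptions:

```python
import math
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity experience replay as in claim 7: each entry pairs the
    frame's channel gains with the selected optimal binary decision; when the
    buffer is full, the newest entry overwrites the oldest."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def store(self, gains, decision):
        self.buffer.append((gains, decision))

    def sample(self, batch_size):
        """Random mini-batch sampling, which decorrelates training samples."""
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

def bce_loss(p, y, eps=1e-9):
    """Binary cross-entropy between predicted offloading probabilities p and
    the stored binary decisions y, used as the training loss."""
    return -sum(yi * math.log(pi + eps) + (1 - yi) * math.log(1 - pi + eps)
                for pi, yi in zip(p, y)) / len(p)

buf = ReplayBuffer(capacity=3)
for t in range(5):                  # 5 frames into a size-3 buffer:
    buf.store([0.1 * t], [t % 2])   # the two oldest entries are overwritten
batch = buf.sample(2)
loss = bce_loss([0.9, 0.1], [1, 0])  # predictions close to labels -> small loss
```

`deque(maxlen=...)` gives the overwrite-oldest behavior of the claim for free; in a full pipeline the sampled mini-batch would drive a gradient step on the GCN weights.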
Description
Online computation offloading method and system for edge computing devices of a graph neural network

Technical Field

The invention relates to the technical field of edge computing, and in particular to an online computation offloading method and system for edge computing devices based on a graph neural network.

Background

As an emerging paradigm, edge computing pushes computing and storage capabilities to the network edge and uses various heterogeneous network devices to integrate computing, storage, transmission and other resources in the edge network, so as to realize distributed deployment of application services and multipoint collaborative offloading of computation tasks, providing convenient low-latency, high-bandwidth intelligent services to mobile users, portable devices and other clients close to the network edge. In the power grid field, with the rapid development of technologies such as the internet of things, artificial intelligence and big data, edge computing is becoming an important computing architecture supporting these emerging technologies. The traditional cloud computing architecture struggles to meet the requirements of real-time processing and low-latency response because of data transmission delay and bandwidth limitations, and the importance of edge computing is increasingly prominent in scenarios with high demands on real-time and localized processing, such as smart grid monitoring and management, distributed energy management, and electric vehicle charging management. Meanwhile, edge computing can process sensitive data locally, reducing potential security risks during data transmission and further improving data privacy and security. With the acceleration of digital transformation, edge computing has become an important component of the next-generation information infrastructure.
With the continuously increasing low-latency and high-reliability computing demands of internet-of-things terminal devices, mobile edge computing effectively alleviates the delay and energy-consumption problems caused by cloud processing by offloading computation tasks to the network edge. Traditional mobile edge computing networks rely on deep neural networks (DNNs) to generate offloading decisions, but DNNs struggle to capture the spatial relationships between users and do not fully exploit the topological associations among them; moreover, how to make efficient computation-offloading decisions and allocate resources in a multi-user dynamic environment remains a significant challenge.

Disclosure of Invention

The invention aims to provide an online computation offloading method and system for edge computing devices based on a graph neural network, which help mobile edge computing devices improve information-processing efficiency and optimize device performance.
The online computation offloading method for edge computing devices of the graph neural network comprises the following steps: (1) preprocessing input data to construct a node feature matrix of the edge computing devices; (2) constructing a GCN model that convolves the graph feature data to generate offloading probabilities; the GCN model performs feature extraction with a two-layer GCN structure, the first layer mapping node features to 64 dimensions and the second layer mapping them to 32 dimensions, with ReLU as the activation function: H^{(1)} = ReLU(Â X W^{(1)}), H^{(2)} = ReLU(Â H^{(1)} W^{(2)}), where X is the input feature matrix, Â is the normalized adjacency matrix, and W^{(1)} and W^{(2)} are the trainable weight matrices of the first and second layers, respectively; finally, the output layer maps the 32-dimensional features to 1 dimension through a fully connected layer and outputs the offloading probability of each user with a Sigmoid function: p = σ(H^{(2)} W^{(o)}), where W^{(o)} is the output-layer weight matrix and σ(x) = 1/(1 + e^{-x}); (3) generating K candidate adaptive offloading decisions from the offloading probabilities, evaluating the weighted computation rate of each candidate, and selecting the optimal offloading decision; (4) updating the GCN model through experience-replay training using the generated optimal offloading decision. Further, in step (1), preprocessing the input data to construct the node feature matrix of the edge computing devices comprises: according to the number N of edge computing devices, setting their respective channel gain values h_i, computing the correlation information among users based on Gaussian similarity, and generating the adjacency matrix. Further, the adjacency matrix formula is A_{ij} = exp(-(h_i - h_j)^2 / (2σ^2)), where σ is the Gaussian-similarity bandwidth parameter; the symmetrically normalized adjacency matrix is then computed as Â = D^{-1/2}(A + I)D^{-1/2}, where D is the degree matrix of A + I.
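The preprocessing just described (a Gaussian-similarity adjacency over the device channel gains, followed by symmetric normalization) can be sketched as follows; the channel-gain values and the bandwidth parameter sigma are illustrative assumptions:

```python
import numpy as np

def gaussian_adjacency(h, sigma=1.0):
    """Gaussian-similarity adjacency over channel gains:
    A_ij = exp(-(h_i - h_j)^2 / (2 * sigma^2)), with a zero diagonal.
    sigma is an assumed bandwidth parameter."""
    h = np.asarray(h, dtype=float)
    diff = h[:, None] - h[None, :]
    A = np.exp(-(diff ** 2) / (2.0 * sigma ** 2))
    np.fill_diagonal(A, 0.0)
    return A

def normalize_adjacency(A):
    """Symmetric normalization with self-loops: D^{-1/2}(A + I)D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

gains = [0.8, 0.75, 0.2, 0.5]            # illustrative channel gains
A = gaussian_adjacency(gains, sigma=0.5)
A_norm = normalize_adjacency(A)
```

Devices with similar channel gains end up strongly connected (entries near 1), so the GCN aggregates features mainly across devices in comparable radio conditions.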
Further, in the step (3), K candidate adaptive unloading decisions are generated according to the unloading probability, wher