CN-121984878-A - Federated learning client selection method based on Lyapunov optimization in a mobile edge computing network
Abstract
The invention discloses a federated learning client selection method based on Lyapunov optimization in a mobile edge computing network, which comprises the steps of: 1) initializing a global model; 2) computing the comprehensive utility of each client at the edge server; 3) judging whether the upper limit on the number of selected clients is reached; if so, jumping to step 5), otherwise entering step 4); 4) greedily adding the client with the largest marginal benefit to the client subset and returning to step 3); 5) having the selected subset of clients participate in training; 6) executing local training; and 7) performing global model aggregation at the server to output the global model. The invention realizes an adaptive client selection strategy under long-term energy-consumption and delay constraints, and ensures the long-term stability and convergence efficiency of the system.
Inventors
- LI XIUHUA
- ZHANG YUTIAN
- CHEN WU
- HAO JINLONG
- CAI CHUNMAO
- WEI YAXING
- HOU XIN
- JIA BO
- TAN YINYING
Assignees
- Chongqing University (重庆大学)
Dates
- Publication Date
- 20260505
- Application Date
- 20251217
Claims (10)
- 1. A federated learning client selection method based on Lyapunov optimization in a mobile edge computing network, characterized by comprising the following steps: step 1) determining the mobile edge computing network architecture and initializing a global model, the architecture comprising an edge server and a set of mobile clients; step 2) the server receiving indexes such as data quality, gradient quality, energy consumption, and time delay reported by all clients, and computing the comprehensive utility of each client for selecting clients in the t-th round; step 3) judging whether the upper limit on the number of selected clients is reached; if so, jumping to step 5), otherwise executing step 4); step 4) based on the comprehensive utilities, greedily selecting the client with the largest marginal benefit to join the client subset, then returning to step 3); step 5) the server broadcasting the current global model to the finally selected subset of clients; step 6) each client participating in the t-th round of training executing local iterations of stochastic gradient descent to update its local parameters, and uploading the locally updated model parameters to the server; step 7) the server executing global model aggregation to obtain the next-round global model; step 8) judging whether the stopping condition is reached; if not, incrementing the round index and returning to step 2); if yes, outputting the global model.
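The eight steps of claim 1 can be sketched as a minimal round loop. This is an illustrative sketch only: the function name, the scalar stand-in for the model, the fixed round count, and the use of a single utility score per client are all assumptions, not details from the patent.

```python
import random

def federated_round_loop(num_clients=8, max_selected=3, rounds=5, seed=0):
    """Sketch of steps 1)-8): pick high-utility clients, train locally, aggregate."""
    rng = random.Random(seed)
    global_model = 0.0                  # step 1): scalar stand-in for the model
    # step 2): a single precomputed "comprehensive utility" score per client
    clients = {k: rng.uniform(0.1, 1.0) for k in range(num_clients)}
    selected = []
    for _ in range(rounds):             # step 8): fixed stopping round for the sketch
        # steps 3)-4): greedily take the highest-utility clients up to the cap
        selected = sorted(clients, key=clients.get, reverse=True)[:max_selected]
        # steps 5)-6): each selected client updates from the broadcast model
        local_models = [global_model - 0.1 * (global_model - clients[k])
                        for k in selected]
        # step 7): simple average aggregation of the uploaded local parameters
        global_model = sum(local_models) / len(local_models)
    return global_model, selected

model, last_selected = federated_round_loop()
```

The real method replaces the utility-sorted selection with the queue-aware greedy rule of claim 7 and the averaging with the aggregation of claim 9.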
- 2. The federated learning client selection method based on Lyapunov optimization in a mobile edge computing network of claim 1, wherein the global model is used to perform tasks requested by mobile clients in the mobile edge computing network, including but not limited to image recognition, speech recognition, object detection, behavior recognition, anomaly detection, or time-series data prediction.
- 3. The federated learning client selection method based on Lyapunov optimization in a mobile edge computing network of claim 1, wherein in the mobile edge computing network architecture the clients form a set, each client holding a local data set whose number of samples is recorded.
- 4. The federated learning client selection method based on Lyapunov optimization in a mobile edge computing network of claim 1, wherein in step 6) the local parameters after the $i$-th iterative update are shown as follows: $w_k^{t,i} = w_k^{t,i-1} - \eta \nabla F_k(w_k^{t,i-1})$ (1), where $\eta$ is the learning rate, $F_k(\cdot)$ is the local loss function of client $k$, defined as the average of the single-sample loss over the local data set, i.e. $F_k(w) = \frac{1}{|\mathcal{D}_k|}\sum_{\xi \in \mathcal{D}_k} f(w;\xi)$, and $w_k^{t,i-1}$ is the local parameter before the update.
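The local update of claim 4 can be demonstrated on a toy objective. In this sketch the single-sample loss is assumed to be a squared residual $f(w;x) = (w-x)^2/2$ (an assumption chosen for clarity, not taken from the patent), so the gradient of the averaged local loss is the mean residual.

```python
import numpy as np

def local_sgd(w, data, eta=0.1, tau=5):
    """tau iterations of the equation-(1)-style update on the local loss
    F_k(w) = (1/|D_k|) * sum over samples of f(w; x) = (w - x)^2 / 2."""
    w = float(w)
    for _ in range(tau):
        grad = np.mean(w - data)   # gradient of the averaged single-sample loss
        w = w - eta * grad         # w <- w - eta * grad F_k(w)
    return w

# a client whose local data centres on 2.0 pulls the parameter toward 2.0
w_final = local_sgd(w=0.0, data=np.array([1.0, 2.0, 3.0]), eta=0.5, tau=50)
```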
- 5. The federated learning client selection method based on Lyapunov optimization in a mobile edge computing network of claim 1, wherein in step 2) the delay index reported to the edge server by each client comprises a local computation delay and an upload delay, shown respectively as follows: $T_k^{\mathrm{cmp}} = \frac{c\,|\mathcal{D}_k|}{f_k}$ (2), $T_k^{\mathrm{com}} = \frac{s}{r_k}$ (3), $\tilde{T}_k^{\mathrm{cmp}} = \tau\,T_k^{\mathrm{cmp}}$ (4), where $c$ is the number of CPU cycles required for one iteration over a single sample, $f_k$ is the CPU frequency of client $k$, $s$ is the size of the model parameters to be uploaded, $r_k$ is the upload transmission rate, and $\tilde{T}_k^{\mathrm{cmp}}$ is the computation delay of client $k$ performing $\tau$ local iterations; the energy consumption reported to the edge server by each client comprises local computation energy consumption and communication energy consumption, shown respectively as follows: $E_k^{\mathrm{cmp}} = \tau\,\kappa\,c\,|\mathcal{D}_k|\,f_k^{2}$ (5), $E_k^{\mathrm{com}} = \frac{p_k\,s}{r_k}$ (6), where $\kappa$ is the effective capacitance coefficient and $p_k$ is the transmit power.
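The formulas of claim 5 are elided in this text; the sketch below uses the cost model common in the mobile-edge-computing literature (delay proportional to cycles over frequency, computation energy proportional to frequency squared via the effective capacitance coefficient), which matches the symbols the claim names. The parameter values in the example are illustrative assumptions.

```python
def client_cost(c, D, f, kappa, p, s, r, tau):
    """Per-round delay and energy of one client under the usual MEC cost model.
    c: CPU cycles per sample per iteration, D: local sample count,
    f: CPU frequency (Hz), kappa: effective capacitance coefficient,
    p: transmit power (W), s: model size (bits), r: upload rate (bit/s),
    tau: number of local iterations."""
    t_cmp = tau * c * D / f                 # computation delay of tau iterations
    t_com = s / r                           # upload delay
    e_cmp = tau * kappa * c * D * f ** 2    # local computation energy
    e_com = p * s / r                       # communication energy
    return t_cmp + t_com, e_cmp + e_com

delay, energy = client_cost(c=1e4, D=500, f=1e9, kappa=1e-28,
                            p=0.5, s=1e6, r=1e6, tau=5)
```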
- 6. The federated learning client selection method based on Lyapunov optimization in a mobile edge computing network of claim 1, wherein in step 2) the comprehensive utility of each client is given by equation (7) as a weighted combination of its data quality and gradient quality; the data quality of each client, shown in equation (8), combines with non-negative weights a distribution-similarity term between the client's category distribution and the global category distribution, a sample-size score, and a label-diversity score; the gradient quality, shown in equation (9), is computed from a reference global gradient maintained by the edge server and the gradient reported by the client in the t-th round, and is characterized by the gradient-norm difference ratio.
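Since equations (7)-(9) are elided here, the following sketch assembles a plausible composite utility from the terms the claim names. The specific similarity measure (total-variation style), the size and diversity scores, the `1/(1+ratio)` mapping for gradient quality, and all weight values are assumptions for illustration.

```python
import numpy as np

def comprehensive_utility(client_dist, global_dist, n_k, n_max,
                          client_grad, ref_grad, alpha=0.5, w=(0.4, 0.3, 0.3)):
    """Composite utility: convex combination of data quality and gradient quality."""
    w1, w2, w3 = w
    # distribution similarity: 1 minus half the L1 distance to the global mix
    sim = 1.0 - 0.5 * np.abs(client_dist - global_dist).sum()
    size_score = n_k / n_max                 # sample-size score
    diversity = (client_dist > 0).mean()     # label-diversity score
    data_q = w1 * sim + w2 * size_score + w3 * diversity
    # gradient quality: small gradient-norm difference ratio => high quality
    ratio = (abs(np.linalg.norm(client_grad) - np.linalg.norm(ref_grad))
             / max(np.linalg.norm(ref_grad), 1e-12))
    grad_q = 1.0 / (1.0 + ratio)
    return alpha * data_q + (1.0 - alpha) * grad_q

u = comprehensive_utility(
    client_dist=np.array([0.5, 0.5, 0.0]),
    global_dist=np.array([1 / 3, 1 / 3, 1 / 3]),
    n_k=100, n_max=200,
    client_grad=np.array([1.0, 0.0]), ref_grad=np.array([1.0, 0.0]))
```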
- 7. The federated learning client selection method based on Lyapunov optimization in a mobile edge computing network of claim 1, wherein in step 4) greedily selecting the client with the maximum marginal benefit to join the client subset comprises the following steps: step 4.1) converting the long-term delay and energy-consumption constraints, together with the requirement on queue stability, into virtual queues, namely: $Q(t+1) = \max\{Q(t) + T(t) - \bar{T},\,0\}$ (10), $Z(t+1) = \max\{Z(t) + E(t) - \bar{E},\,0\}$ (11), where $\bar{T}$ and $\bar{E}$ are the long-term average constraint upper bounds on delay and energy consumption, respectively; step 4.2) constructing an approximately optimal selection target for each round, shown in equation (12), in which the comprehensive utility of the selected client set is required to be no less than a threshold, and a trade-off parameter balances utility against the total energy consumption of the selected client set; step 4.3) constructing the selected client set using a greedy approximation to handle delay coupling and reduce complexity, comprising: step 4.3.1) initializing the currently selected client set, recording the current maximum delay of the selected set, and initializing the accumulated aggregate energy consumption corresponding to that delay; step 4.3.2) for each unselected candidate client, computing its marginal benefit under the current queue state based on the per-round approximately optimal selection target, shown in equation (13), which accounts for the candidate's delay contribution after joining the set; step 4.3.3) selecting the client with the maximum marginal benefit and, if the admission condition holds, adding it to the set and updating the maximum delay and accumulated energy consumption; step 4.3.4) repeating steps 4.3.2)-4.3.3) until the upper limit on the number of selectable clients is reached.
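The greedy loop of step 4.3) can be sketched as follows. Claim 7's exact marginal-benefit expression (13) is not reproduced in this text, so the scoring below (V-weighted utility minus queue-weighted increments of round delay and energy) is an assumed drift-plus-penalty-style stand-in; the delay term charges only the increase of the round's maximum delay, reflecting the delay coupling the claim mentions.

```python
def greedy_select(clients, Q, Z, V, max_k):
    """Greedy client selection sketch under queue states Q (delay) and Z (energy).
    clients: dict k -> (utility, delay, energy)."""
    selected, t_max = [], 0.0
    while len(selected) < max_k:
        best, best_gain = None, 0.0
        for k, (u, t, e) in clients.items():
            if k in selected:
                continue
            new_t_max = max(t_max, t)   # delay couples through the round maximum
            gain = V * u - Q * (new_t_max - t_max) - Z * e
            if gain > best_gain:        # admission condition: positive marginal gain
                best, best_gain = k, gain
        if best is None:
            break
        selected.append(best)
        t_max = max(t_max, clients[best][1])
    return selected

picked = greedy_select(
    {0: (0.9, 1.0, 0.2), 1: (0.8, 0.5, 0.1), 2: (0.1, 3.0, 0.9)},
    Q=1.0, Z=1.0, V=2.0, max_k=2)
```

Client 2's high delay and energy make its marginal gain negative, so it is never admitted even though a slot remains available.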
- 8. The federated learning client selection method based on Lyapunov optimization in a mobile edge computing network of claim 7, wherein the step of constructing the per-round approximately optimal selection target comprises: step 4.2.1) constructing a Lyapunov function, namely: $L(\Theta(t)) = \frac{1}{2}\left(Q(t)^2 + Z(t)^2\right)$ (14), where $\Theta(t) = (Q(t), Z(t))$ is the Lyapunov state vector; step 4.2.2) defining the single-slot Lyapunov drift, namely: $\Delta(\Theta(t)) = \mathbb{E}\left[L(\Theta(t+1)) - L(\Theta(t)) \mid \Theta(t)\right]$ (15), where $\mathbb{E}[\cdot]$ denotes expectation and $\Delta(\Theta(t))$ is the single-slot Lyapunov drift; step 4.2.3) constructing an upper bound on the single-slot Lyapunov drift according to the standard drift analysis method, namely: $\Delta(\Theta(t)) \le B + Q(t)\,\mathbb{E}\left[T(t) - \bar{T} \mid \Theta(t)\right] + Z(t)\,\mathbb{E}\left[E(t) - \bar{E} \mid \Theta(t)\right]$ (16), where $B$ is a constant; step 4.2.4) introducing a trade-off parameter $V$ and integrating the expected utility with the drift upper bound to obtain the drift-plus-penalty form of the objective, namely: $\Delta(\Theta(t)) - V\,\mathbb{E}\left[U(\mathcal{S}_t) \mid \Theta(t)\right]$ (17), where $\mathcal{S}_t$ is the set of clients selected in the $t$-th round and $U(\mathcal{S}_t)$ is their overall utility; step 4.2.5) minimizing the equivalent upper bound of the above drift-plus-penalty term, thereby constructing the per-round approximately optimal selection target.
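The chain from Lyapunov function to per-round target follows the textbook drift-plus-penalty derivation; the version below is the standard form under the symbols claim 8 describes (the patent's exact constants are not reproduced), using $(\max\{x,0\})^2 \le x^2$ to obtain the drift bound.

```latex
\begin{align*}
L(\Theta(t)) &= \tfrac{1}{2}\bigl(Q(t)^2 + Z(t)^2\bigr),\\
\Delta(\Theta(t)) &= \mathbb{E}\bigl[L(\Theta(t+1)) - L(\Theta(t)) \mid \Theta(t)\bigr]\\
&\le B + Q(t)\,\mathbb{E}\bigl[T(t)-\bar{T} \mid \Theta(t)\bigr]
       + Z(t)\,\mathbb{E}\bigl[E(t)-\bar{E} \mid \Theta(t)\bigr],
\end{align*}
```

so minimizing the drift-plus-penalty $\Delta(\Theta(t)) - V\,\mathbb{E}[U(\mathcal{S}_t)\mid\Theta(t)]$ each round reduces to choosing $\mathcal{S}_t$ to minimize $Q(t)\,T(\mathcal{S}_t) + Z(t)\,E(\mathcal{S}_t) - V\,U(\mathcal{S}_t)$, which is the per-round target the greedy procedure approximates.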
- 9. The federated learning client selection method based on Lyapunov optimization in a mobile edge computing network of claim 7, wherein the next-round global model is shown as follows: $w^{t+1} = \sum_{k \in \mathcal{S}_t} \frac{|\mathcal{D}_k|}{\sum_{j \in \mathcal{S}_t} |\mathcal{D}_j|}\, w_k^{t}$, where $w_k^{t}$ are the local parameters reported by client $k$.
- 10. The federated learning client selection method based on Lyapunov optimization in a mobile edge computing network of claim 7, further comprising, after obtaining the next-round global model, updating the virtual queues using the current round's observed delay and energy consumption.
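The queue update of claim 10 follows the standard max-plus recursion (assumed here, since the claim's formula is elided in this text): the backlog grows when the round's observed delay or energy exceeds its long-term budget, and drains otherwise.

```python
def update_queues(Q, Z, delay_obs, energy_obs, t_bar, e_bar):
    """Virtual-queue update after each round: Q tracks delay-budget violation,
    Z tracks energy-budget violation; both are clipped at zero."""
    Q_next = max(Q + delay_obs - t_bar, 0.0)
    Z_next = max(Z + energy_obs - e_bar, 0.0)
    return Q_next, Z_next

# a round that overshoots the delay budget but undershoots the energy budget
Q1, Z1 = update_queues(Q=0.0, Z=2.0, delay_obs=1.5, energy_obs=0.5,
                       t_bar=1.0, e_bar=1.0)
```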
Description
Federated learning client selection method based on Lyapunov optimization in a mobile edge computing network Technical Field The invention relates to the technical field of mobile edge computing and federated learning, and in particular to a federated learning client selection method based on Lyapunov optimization in a mobile edge computing network. Background With the rapid development of the Internet of Things and next-generation communication technologies, the data generated by mobile devices (e.g., smartphones, smartwatches) is growing explosively. Mobile edge computing, which places computing resources and data processing capabilities closer to user devices, has become a key paradigm for processing large amounts of distributed data and enabling low-latency intelligent applications. Federated learning is a distributed machine learning paradigm that, when integrated into a mobile edge computing network, allows multiple mobile clients to collaboratively train a global model while preserving the privacy of their local data. Specifically, each client first trains its local model; the updated local model parameters are then uploaded to a server and iteratively aggregated to form a global model, so that user data privacy is better protected. Despite the great potential of deploying federated learning in mobile edge computing networks, it still faces heterogeneity challenges in both the system and data dimensions. On the one hand, mobile clients differ in CPU clock frequency and network conditions, resulting in heterogeneous computing and transmission capabilities; clients with limited computing power or low transmission rates may become stragglers, extending the duration of each aggregation round. On the other hand, data heterogeneity manifests as highly imbalanced local data set sizes and non-independent and identically distributed (non-IID) data samples, which can skew model aggregation and slow down convergence.
In order to cope with the above problems, researchers have recently proposed various client selection and resource scheduling strategies that select the most representative clients to participate in the global model update in each training round. For example, the classical FedAvg algorithm selects clients at random but fails to fully account for system and data heterogeneity; some studies introduce device computing power or data-importance indexes into client selection but often ignore the trade-off between energy consumption and delay constraints. Therefore, designing an efficient client selection framework is critical to alleviating the above problems. Disclosure of Invention The invention aims to provide a federated learning client selection method based on Lyapunov optimization in a mobile edge computing network, which comprises the following steps: step 1) determining the mobile edge computing network architecture and initializing a global model, the architecture comprising an edge server and a set of mobile clients; step 2) the server receiving indexes such as data quality, gradient quality, energy consumption, and time delay reported by all clients, and computing the comprehensive utility of each client for selecting clients in the t-th round; step 3) judging whether the upper limit on the number of selected clients is reached; if so, jumping to step 5), otherwise executing step 4); step 4) based on the comprehensive utilities, greedily selecting the client with the largest marginal benefit to join the client subset, then returning to step 3); step 5) the server broadcasting the current global model to the finally selected subset of clients; step 6) each client participating in the t-th round of training executing local iterations of stochastic gradient descent to update its local parameters, and uploading the locally updated model parameters to the server; step 7) the server executing global model aggregation to obtain the next-round global model; step 8) judging whether the stopping condition is reached; if not, incrementing the round index and returning to step 2); if yes, outputting the global model. Further, the global model is used to perform tasks requested by mobile clients in the mobile edge computing network, including but not limited to image recognition, speech recognition, object detection, behavior recognition, anomaly detection, or time-series data prediction. Further, in the mobile edge computing network architecture the clients form a set, each client holding a local data set whose number of samples is recorded. Further, in step 6), the local parameters after the $i$-th iterative update are shown as follows: $w_k^{t,i} = w_k^{t,i-1} - \eta \nabla F_k(w_k^{t,i-1})$ (1), where $\eta$ is the learning rate, $F_k(\cdot)$ is the local loss function of client $k$, defined as the average of the single-sample loss over the local data set, and $w_k^{t,i-1}$ is the local parameter before the update. Further, in step 2), the delay index reported