US-12627573-B2 - Server and agent for reporting of computational results during an iterative learning process
Abstract
There is provided mechanisms for performing an iterative learning process with agent entities. A method is performed by a server entity. The method includes selecting a linear mapping to be used by the agent entities when reporting computational results of a computational task to the server entity. The linear mapping has an inverse. The linear mapping is selected as a function of an estimated interference level for a radio propagation channel over which the agent entities are to report the computational results of the computational task to the server entity. The method includes configuring the agent entities with the computational task. The method includes performing the iterative learning process with the agent entities until a termination criterion is met. The server entity as part of performing the iterative learning process applies the inverse of the linear mapping to a sum of the computational results.
Inventors
- Reza Moosavi
- Henrik Rydén
- Erik G. Larsson
Assignees
- TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2021-11-16
Claims (16)
- 1 . A method for performing a federated learning process with agent entities integrated within a user equipment (UE) for signal quality drop prediction within a communication network, the method being performed by a server entity, the method comprising: selecting a linear mapping M to be used by the agent entities when reporting computational results of a computational task to the server entity, wherein the linear mapping M has an inverse M −1 , wherein the linear mapping M is a function of the estimated interference level per resource element, and wherein the linear mapping M is selected as a function of an estimated interference level for a radio propagation channel over which the agent entities report the computational results of the computational task to the server entity; configuring the agent entities with the computational task; and performing the federated learning process with the agent entities until a termination criterion is met, wherein the computational results are phase-rotated and transmitted using linear analog modulation to achieve aggregation of the computational results over-the-air, wherein the server entity as part of performing the federated learning process applies the inverse M −1 of the linear mapping M to a sum of the aggregated computational results received over-the-air from the agent entities, wherein the aggregated computational results require fewer resource elements for transmission.
- 2 . The method according to claim 1 , wherein the computational results per iteration are sent in frames composed of resource elements, wherein the interference level is estimated per each resource element in the frame, and wherein the resource elements are time/frequency samples or define a spatial mode.
- 3 . The method according to claim 1 , wherein the linear mapping M defines in which of the resource elements in the frame the agent entities are to send the computational results, as well as in which of the resource elements in the frame the agent entities are not to send the computational results.
- 4 . The method according to claim 1 , wherein the interference level is estimated during a period free from scheduled transmissions from the agent entities.
- 5 . The method according to claim 1 , wherein the interference level is estimated based on the computational results received from the agent entities.
- 6 . The method according to claim 1 , wherein the method further comprises: configuring the agent entities with the linear mapping M over a channel established between the server entity and the agent entities.
- 7 . The method according to claim 1 , wherein the linear mapping M is represented by a matrix M k having an inverse M k −1 , and wherein the computational results received from the agent entities are multiplied with the inverse M k −1 of the matrix M k when the inverse M −1 of the linear mapping M is applied to the computational results.
- 8 . The method according to claim 7 , wherein the matrix M k is a permutation matrix or an orthonormal matrix.
- 9 . The method according to claim 1 , wherein the method further comprises: configuring the agent entities to repetitively send the computational results in at least two repetitions, where mutually different linear mappings M are to be applied to the computational results for each of the at least two repetitions.
- 10 . The method according to claim 1 , wherein the linear mapping M is the same for all the agent entities.
- 11 . The method according to claim 1 , wherein performing an iteration of the federated learning process comprises: providing a parameter vector of the computational task to the agent entities; receiving the computational results as a function of the parameter vector from the agent entities; obtaining inverse transformed computational results by applying the inverse M −1 to the sum of the aggregated computational results; and updating the parameter vector as a function of an aggregate of the inverse transformed computational results.
- 12 . The method according to claim 1 , wherein the method further comprises: updating the linear mapping M for a next iteration of the federated learning process, whereby the linear mapping M as updated has a new inverse.
- 13 . The method according to claim 1 , wherein the method further comprises: providing an indication to the agent entities as to whether or not to apply the linear mapping M when reporting the computational results of the computational task to the server entity, wherein whether or not the agent entities apply the linear mapping M is determined based on performance feedback obtained from the agent entities.
- 14 . A method for performing a federated learning process with a server entity, the method being performed by an agent entity integrated within a user equipment (UE) for signal quality drop prediction within a communication network, the method comprising: obtaining a linear mapping M to be used by the agent entity when reporting computational results of a computational task to the server entity; obtaining configuration in terms of the computational task from the server entity; and performing the federated learning process with the server entity until a termination criterion is met, wherein the computational results are phase-rotated and transmitted using linear analog modulation to achieve aggregation of the computational results over-the-air requiring fewer resource elements, wherein the agent entity as part of performing the federated learning process applies the linear mapping M to the computational results before sending the computational results to the server entity.
- 15 . The method according to claim 14 , wherein the computational results per iteration are sent in frames composed of resource elements, and wherein the linear mapping M defines in which of the resource elements in the frame the agent entity is to send the computational results, as well as in which of the resource elements in the frame the agent entity is not to send the computational results.
- 16 . A server entity for performing a federated learning process with agent entities integrated within a user equipment (UE) for signal quality drop prediction within a communication network, the server entity comprising processing circuitry, the processing circuitry being configured to cause the server entity to: select a linear mapping M to be used by the agent entities when reporting computational results of a computational task to the server entity, wherein the linear mapping M has an inverse M −1 , and wherein the linear mapping M is selected as a function of an estimated interference level for a radio propagation channel over which the agent entities report the computational results of the computational task to the server entity; configure the agent entities with the computational task; and perform the federated learning process with the agent entities until a termination criterion is met, wherein the computational results are phase-rotated and transmitted using linear analog modulation to achieve aggregation of the computational results over-the-air, wherein the server entity as part of performing the federated learning process applies the inverse M −1 of the linear mapping M to a sum of the aggregated computational results received over-the-air from the agent entities, wherein the aggregated computational results require fewer resource elements for transmission.
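The core mechanism recited in the claims — agents apply a linear mapping M before transmission, the channel sums the signals over-the-air, and the server applies the inverse M −1 to the aggregate — relies on linearity: the inverse mapping commutes with the sum. A minimal illustrative sketch (not part of the claims; the random permutation stands in for a mapping that, per claim 1, would be selected from per-RE interference estimates):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8           # resource elements per frame (illustrative size)
num_agents = 3

# Permutation matrix M (cf. claim 8); orthonormal, so M^-1 = M^T.
perm = rng.permutation(n)
M = np.eye(n)[perm]

# Each agent's model update (e.g. a gradient) for this iteration.
updates = [rng.standard_normal(n) for _ in range(num_agents)]

# Agents apply M before transmitting; the radio channel adds the
# analog-modulated signals, so the server receives sum_k M @ g_k.
received = sum(M @ g for g in updates)

# Server applies the inverse mapping to the over-the-air aggregate.
recovered = np.linalg.inv(M) @ received

# Linearity: M^-1 (sum_k M g_k) equals sum_k g_k exactly.
assert np.allclose(recovered, sum(updates))
```

The same recovery holds for any invertible linear M, which is why the claims can restrict M only by invertibility while still permitting exact aggregation at the server.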
Description
CROSS-REFERENCE TO RELATED APPLICATIONS This application is a 35 U.S.C. § 371 national stage application for International Application No. PCT/EP2021/081773, entitled “SERVER AND AGENT FOR REPORTING OF COMPUTATIONAL RESULTS DURING AN ITERATIVE LEARNING PROCESS,” filed on Nov. 16, 2021, the disclosure and content of which is hereby incorporated by reference in its entirety. TECHNICAL FIELD Embodiments presented herein relate to a method, a server entity, a computer program, and a computer program product for performing an iterative learning process with agent entities. Embodiments presented herein further relate to a method, an agent entity, a computer program, and a computer program product for performing an iterative learning process with a server entity. BACKGROUND The increasing concerns for data privacy have motivated the consideration of collaborative machine learning systems with decentralized data where pieces of training data are stored and processed locally by edge user devices, such as user equipment. Federated learning (FL) is one non-limiting example of a decentralized learning topology, where multiple (possibly a very large number of) agents, for example implemented in user equipment, participate in training a shared global learning model by exchanging model updates with a centralized parameter server (PS), for example implemented in a network node. FL is an iterative process where each global iteration, often referred to as communication round, is divided into three phases: In a first phase the PS broadcasts the current model parameter vector to all participating agents. In a second phase each of the agents performs one or several steps of a stochastic gradient descent (SGD) procedure on its own training data based on the current model parameter vector and obtains a model update. 
In a third phase the model updates from all agents are sent to the PS, which aggregates the received model updates and updates the parameter vector for the next iteration based on the model updates according to some aggregation rule. The first phase is then entered again but with the updated parameter vector as the current model parameter vector. A common baseline scheme in FL is named Federated SGD, where in each local iteration, only one step of SGD is performed at each participating agent, and the model updates contain the gradient information. A natural extension is so-called Federated Averaging, where the model updates from the agents contain the updated parameter vector after performing their local iterations. Analog modulation as used for the transmission of the model updates with over-the-air computation is susceptible to interference. For example, if a particular resource element (RE) on which a model update is transmitted, say the n:th RE, is impacted by interference, this affects all model updates from all the agents to the n:th component of the model vector. This is true since all agents transmit their n:th component of the model update in the same RE. This can be mitigated by using FL with dedicated agent-to-PS channels. Communication over dedicated agent-to-PS channels can be achieved by using digital modulation and coding. The impact of interference on particular REs can then be averaged out since no two agents will use the same RE and since modulation and coding as such reduces the impact of interference. Using dedicated agent-to-PS channels, however, increases the communication latency and overhead. Using dedicated agent-to-PS channels thus comes at a cost of an increased need for network resources and computational resources at both the PS and the agents. There could thus be scenarios where using communication over dedicated agent-to-PS channels for the transmission of the model updates is unfeasible and should be avoided. 
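The three-phase communication round described above (broadcast, local SGD, aggregate) can be sketched with a toy Federated SGD loop. This is an illustrative reconstruction under stated assumptions — synthetic linear-regression data, one full-batch gradient step per agent per round, and simple averaging as the aggregation rule; the data, sizes, and learning rate are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy local datasets: each agent holds its own (X, y) shard.
num_agents, d = 4, 3
w_true = np.array([1.0, -2.0, 0.5])
data = []
for _ in range(num_agents):
    X = rng.standard_normal((20, d))
    data.append((X, X @ w_true))

w = np.zeros(d)   # current global model parameter vector at the PS
lr = 0.1

for _round in range(200):
    # Phase 1: PS broadcasts w to all agents.
    # Phase 2: each agent computes one SGD step's gradient locally.
    grads = [X.T @ (X @ w - y) / len(y) for X, y in data]
    # Phase 3: PS aggregates the model updates and updates w.
    w = w - lr * np.mean(grads, axis=0)

# The global model converges to the common underlying parameters.
assert np.allclose(w, w_true, atol=1e-3)
```

In Federated Averaging the agents would instead send their locally updated parameter vectors after several local steps, and phase 3 would average those vectors rather than the gradients.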
SUMMARY An object of embodiments herein is to address the above issues in order to enable efficient communication between the agents (hereinafter denoted agent entities) and the PS (hereinafter denoted server entity) in scenarios impacted by interference, without resorting to communication over dedicated agent-to-PS channels. According to a first aspect there is presented a method for performing an iterative learning process with agent entities. The method is performed by a server entity. The method comprises selecting a linear mapping M to be used by the agent entities when reporting computational results of a computational task to the server entity. The linear mapping M has an inverse M−1. The linear mapping M is selected as a function of an estimated interference level for a radio propagation channel over which the agent entities are to report the computational results of the computational task to the server entity. The method comprises configuring the agent entities with the computational task. The method comprises performing the iterative learning process with the agent entities until a termination criterion is met. The server entity as part of performing the iterative learning process applies the inverse M−1 of the linear mapping M to a sum of the computational results.
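One way the mapping can depend on the estimated interference, consistent with claim 3's variant where M defines which REs carry the computational results and which do not, is a selection matrix that places the update components on the least-interfered REs. The sketch below is illustrative only; the threshold rule, sizes, and interference model are assumptions, not the patented selection procedure:

```python
import numpy as np

rng = np.random.default_rng(2)

n_re = 10                                  # resource elements in a frame
interference = rng.exponential(size=n_re)  # estimated interference per RE

# Hypothetical rule: transmit on the k cleanest REs, skip the rest.
k = 8                                      # model-update length (k <= n_re)
good = np.argsort(interference)[:k]

# Selection matrix S maps the k update components onto the chosen REs;
# its columns are orthonormal, so S^T is a left inverse.
S = np.zeros((n_re, k))
S[good, np.arange(k)] = 1.0

g = rng.standard_normal(k)                 # an agent's model update
tx_frame = S @ g                           # zeros on the skipped REs
recovered = S.T @ tx_frame                 # server-side recovery

assert np.allclose(recovered, g)
```

Because every agent uses the same S, the over-the-air sum of the transmitted frames still recovers the sum of the updates after applying S^T, while the heavily interfered REs carry no payload at all.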