US-12621680-B2 - Coordinating management of a plurality of cells in a cellular communication network

Abstract

A computer implemented method is disclosed for coordinating management of a plurality of cells in a cellular communication network, wherein each of the plurality of cells is managed by a respective Agent. The method comprises assembling a candidate set of cells, wherein each cell in the candidate set is awaiting execution of an action selected for the cell by its managing Agent. The method further comprises selecting a cell from the candidate set, adding the selected cell to an execution set of cells, and removing, from the candidate set of cells, the selected cell and all cells identified in a topology graph of the plurality of cells as fulfilling an interference condition with respect to the selected cell. The method further comprises, if a scheduling condition is fulfilled, initiating, for all cells in the execution set of cells, execution of the actions selected for the cells by their managing agents.
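
The coordination summarized above amounts to greedily assembling a conflict-free execution set over an interference graph. Below is a minimal sketch of that loop, not the patent's implementation; the dict-based topology graph and the function name are illustrative assumptions:

```python
import random

def schedule_actions(candidate_cells, interferers):
    """Greedily assemble an execution set of mutually non-interfering cells.

    candidate_cells: cells awaiting execution of an action selected by
                     their managing Agents.
    interferers: topology graph as a dict mapping each cell to the set of
                 cells fulfilling the interference condition with respect
                 to it (an illustrative representation).
    """
    candidate = set(candidate_cells)
    execution = set()
    # Scheduling condition (cf. claim 5): stop once the candidate set is empty.
    while candidate:
        cell = random.choice(sorted(candidate))  # random selection (cf. claim 13)
        execution.add(cell)
        # Remove the selected cell and every cell that interferes with it.
        candidate -= {cell} | interferers.get(cell, set())
    # Actions for the execution set can now be initiated together, since no
    # two cells in it fulfill the interference condition.
    return execution

# Example: cells A and C do not interfere; B interferes with both.
graph = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
print(schedule_actions({"A", "B", "C"}, graph))  # e.g. {'A', 'C'} or {'B'}
```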

Inventors

  • Jaeseong Jeong
  • Ezeddin AL HAKIM
  • Jose OUTES CARNERO
  • Adriano MENDO MATEO
  • Alexandros Nikou

Assignees

  • TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)

Dates

Publication Date
2026-05-05
Application Date
2021-04-21

Claims (20)

  1. A computer implemented method for coordinating management of a plurality of cells in a cellular communication network, wherein each of the plurality of cells is managed by a respective Agent, the method, performed by a controller node, comprising: assembling a candidate set of cells, wherein each cell in the candidate set is awaiting execution of an action selected for the cell by its managing Agent; selecting a cell from the candidate set; adding the selected cell to an execution set of cells; removing, from the candidate set of cells, the selected cell and all cells identified in a topology graph of the plurality of cells as fulfilling an interference condition with respect to the selected cell; and if a scheduling condition is fulfilled, initiating, for all cells in the execution set of cells, execution of the actions selected for the cells by their managing agents.
  2. The method as claimed in claim 1, wherein a cell fulfills an interference condition with respect to the selected cell if: at least one of performance or operation of the cell can be impacted by an operational configuration of the selected cell.
  3. The method as claimed in claim 1, wherein a cell fulfills an interference condition with respect to the selected cell if: at least one of training or execution of a management policy for the cell can be impacted by an operational configuration of the selected cell.
  4. The method as claimed in claim 1, wherein the method is for coordinating management of the plurality of cells in the cellular communication network such that performance of the plurality of cells is optimized with respect to at least one network level performance parameter.
  5. The method as claimed in claim 1, wherein the scheduling condition comprises: all cells having been removed from the candidate set.
  6. The method as claimed in claim 1, wherein an Agent comprises at least one of a physical or virtual entity that is operable to implement a management policy for the selection of actions to be executed in a cell on the basis of an observation of the cell.
  7. The method as claimed in claim 1, wherein an action selected for a cell by its managing Agent comprises at least one of: a configuration for a communication network node serving the cell; a configuration for a communication network operation carried out by a communication network node serving the cell; a configuration for an operation performed by a wireless device in relation to a communication network node serving the cell.
  8. The method as claimed in claim 1, wherein an action selected for a cell by its managing Agent comprises an action whose execution is operable to impact at least one of performance or operation of another cell in the plurality of cells.
  9. The method as claimed in claim 1, further comprising: if a scheduling condition is not fulfilled, repeating the steps of: selecting a cell from the candidate set; adding the selected cell to the execution set of cells; removing, from the candidate set of cells, the selected cell and all cells identified in a topology graph of the plurality of cells as fulfilling an interference condition with respect to the selected cell.
  10. The method as claimed in claim 1, further comprising: generating the topology graph of the plurality of cells, wherein the topology graph represents neighbor relations between cells; and for each cell in the topology graph, identifying all other cells in the topology graph that fulfill the interference condition with respect to the cell.
  11. The method as claimed in claim 10, further comprising: setting the interference condition based on a measure of success used by Agents for evaluating selected actions.
  12. The method as claimed in claim 1, wherein the topology graph comprises only cells operating on the same carrier frequency.
  13. The method as claimed in claim 1, wherein selecting a cell from the candidate set comprises randomly selecting a cell from the candidate set.
  14. The method as claimed in claim 1, wherein selecting a cell from the candidate set comprises using a cyclic selection process to select a cell from the candidate set.
  15. The method as claimed in claim 1, further comprising: assigning a weight to each cell in the candidate set, and wherein selecting a cell from the candidate set comprises selecting a cell based on the assigned weights of cells in the candidate set.
  16. The method as claimed in claim 15, wherein the Agents comprise Reinforcement Learning, RL, Agents, and wherein assigning a weight to each cell in the candidate set comprises, for a cell: obtaining a current state-action value from the RL Agent managing the cell; and setting as the weight of the cell at least one of: the obtained state-action value; the sum of all obtained state-action values in a virtual queue assigned to the cell, wherein the virtual queue contains state-action values for each time step of the RL Agent since an action was last executed in the cell.
  17. The method as claimed in claim 16, further comprising, if the weight of each cell comprises the sum of all obtained state-action values in a virtual queue assigned to the cell, and following initiation of actions selected for cells in the execution set of cells by their managing agents: emptying the virtual queues of the cells for which actions have been initiated.
  18. The method as claimed in claim 1, wherein selecting a cell from the candidate set comprises: assembling an exploration set of cells, wherein each cell in the exploration set is a member of the candidate set and wherein the action selected for each cell in the exploration set comprises an exploration of the state-action space for the cell; and while an exploration condition is satisfied, randomly selecting a cell from the exploration set; adding the selected cell to the execution set of cells; and removing, from the candidate set and from the exploration set, the selected cell and all cells identified in the topology graph of the plurality of cells as fulfilling an interference condition with respect to the selected cell.
  19. The method as claimed in claim 18, wherein selecting a cell from the candidate set further comprises: when an exploration condition is not satisfied, selecting a cell from the candidate set according to at least one of: random selection; a cyclic selection process; a weight assigned to each cell in the candidate set.
  20. The method as claimed in claim 18, wherein the exploration condition comprises: the exploration set comprising at least one cell; and the percentage of the plurality of cells that have been added to the execution set is below a threshold value.
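
Claims 15 to 17 describe one concrete selection strategy: weighting each candidate cell by the state-action values accumulated in a per-cell virtual queue, and emptying a cell's queue once its action is initiated. The following is a minimal sketch under those claims' assumptions; the data structures and function names are illustrative, not the patent's:

```python
from collections import defaultdict

# Virtual queues (cf. claim 16): per-cell state-action values accumulated at
# each RL time step since an action was last executed in the cell.
virtual_queues = defaultdict(list)

def record_q_value(cell, q_value):
    """Append the cell's current state-action value to its virtual queue."""
    virtual_queues[cell].append(q_value)

def select_by_weight(candidate):
    """Select the candidate cell whose queued values sum to the largest
    weight (cf. claims 15 and 16, second option)."""
    return max(candidate, key=lambda cell: sum(virtual_queues[cell]))

def on_actions_initiated(execution_set):
    """Cf. claim 17: empty the virtual queues of the cells for which
    actions have been initiated."""
    for cell in execution_set:
        virtual_queues[cell].clear()
```

Summing over the queue favors cells that have waited many time steps with consistently high state-action values, so cells repeatedly skipped because of interference conflicts eventually win selection.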

Description

This application is a 35 U.S.C. § 371 national phase filing of International Application No. PCT/EP2021/060378, filed Apr. 21, 2021, the disclosure of which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

The present disclosure relates to a computer implemented method for coordinating management of a plurality of cells in a cellular communication network, each of the plurality of cells being managed by a respective Agent. The method is performed by a controller node, and the present disclosure also relates to a controller node and to a computer program product configured, when run on a computer, to carry out a method for coordinating management of a plurality of cells in a cellular communication network.

BACKGROUND

Cellular communication networks are complex systems in which each cell of the network has its own set of configurable parameters. Some of these parameters only impact the performance of the cell to which they are applied. Improving the performance of an individual cell by changing the value of a parameter that only impacts that cell will always translate into an improvement in the global performance of the network, and it is consequently relatively straightforward to determine an optimum value for such parameters.

A significant number of configurable cell parameters do not fall into this category, however, and a change in the value of these parameters impacts not only the performance of the cell to which they are applied but also the performance of neighboring cells. For such parameters, improving the performance of a cell by adjusting a parameter value may degrade performance in surrounding cells, and could degrade the global performance of the network. Determining an optimum value for this kind of parameter is one of the most challenging tasks when optimizing cellular networks. Examples of parameters that can impact performance of neighboring cells include:

  • Remote Electrical Tilt (RET): defines the antenna tilt of the cell, and can be adjusted remotely to improve the Downlink (DL) Signal to Interference plus Noise Ratio (SINR) in the cell concerned. Such adjustments can however degrade the SINR of surrounding cells.
  • P0 Nominal Physical Uplink Shared Channel (PUSCH): defines the target power per resource block (RB) which the cell expects in the Uplink (UL) communication, from the User Equipment (UE) to the Base Station (BS). Increasing P0 Nominal PUSCH can increase the UL SINR in the cell under modification (thanks to the signal increase) but may also decrease the UL SINR in the surrounding cells (owing to increased interference).

The above examples illustrate a tradeoff between the performance of a cell under modification and the performance of its surrounding cells. Improving the overall performance of a cellular communication network implies managing this tradeoff to optimize global performance measures. The tradeoff between target and surrounding cell performance is difficult to estimate and varies on a case by case basis; the problem of optimizing global network performance by modifying parameters on a per-cell basis is considered NP-hard in computational complexity theory.
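To make the P0 Nominal PUSCH tradeoff concrete, the following is a toy numerical sketch (not part of the patent) using hypothetical link-budget figures, assuming full path-loss compensation so that the serving cell receives exactly P0 per resource block:

```python
import math

def to_lin(dbm):
    return 10 ** (dbm / 10)

def to_db(lin):
    return 10 * math.log10(lin)

# All figures are hypothetical and per resource block.
NOISE_DBM = -116.0      # thermal noise per resource block
PL_SERVING_DB = 110.0   # path loss, UE to its serving cell
PL_NEIGHBOR_DB = 120.0  # path loss, same UE to the neighboring cell

for p0 in (-106.0, -100.0):
    ue_tx_dbm = p0 + PL_SERVING_DB                 # UE transmit power
    interference_dbm = ue_tx_dbm - PL_NEIGHBOR_DB  # leaked into the neighbor
    own_sinr_db = p0 - NOISE_DBM                   # serving-cell UL SINR
    # The neighbor's noise-plus-interference floor rises with P0:
    floor_dbm = to_db(to_lin(NOISE_DBM) + to_lin(interference_dbm))
    print(f"P0={p0:6.1f} dBm  own SINR={own_sinr_db:5.1f} dB  "
          f"neighbor floor={floor_dbm:7.1f} dBm")
```

With these assumed numbers, raising P0 by 6 dB lifts the serving cell's UL SINR by 6 dB but raises the neighboring cell's noise-plus-interference floor by roughly 4 dB, illustrating how a per-cell improvement can degrade global performance.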
Artificial Intelligence (AI) is expected to play an important role in network parameter optimization in cellular networks. One promising AI technology is Reinforcement Learning (RL), in which agents learn a management policy from past experiences with the aim of optimizing a certain reward. During the learning, each agent 1) explores possible actions, 2) observes the consequences of the explored actions, including the next state entered by the controlled system and a defined reward signal, and then 3) updates its policy with the aim of maximizing future reward. RL techniques have been explored for use in optimizing cell antenna tilt, with agents taking actions comprising increasing, decreasing, or maintaining the current downtilt of a cell antenna with the aim of optimizing some combination of performance measures, including for example cell capacity and cell coverage.

When seeking to optimize parameters of communication network cells, it is typical to allocate a local agent to each cell, with each local agent responsible for optimizing a single local parameter (e.g., antenna tilt). In this scenario, local RL agents execute actions independently, either seeking to explore the state-action space of the cell or to exploit existing knowledge of the cell. However, this practice does not explicitly consider the impact of each cell's actions on the Key Performance Indicators (KPIs) of its neighboring cells, including for example wireless interference. One significant problem associated with independent agent cell parameter management is the possibility of unintentionally creating an undesirable network situation, such as a coverage hole between two cells. Another important problem is noisy feedback for RL training. For stable training of an RL agent, the feedback observation (including reward and next state) should be attributable to the agent's own action; actions executed concurrently in interfering neighboring cells introduce noise into this feedback.
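
As an illustration of the per-cell RL loop described above (explore, observe, update), here is a minimal tabular Q-learning sketch with an epsilon-greedy policy. The tilt actions and the one-agent-per-cell setup follow the text; the names and hyperparameter values are illustrative assumptions, not the patent's method:

```python
import random
from collections import defaultdict

# Minimal sketch of one local RL Agent managing a single cell's antenna tilt.
ACTIONS = ("decrease_tilt", "keep_tilt", "increase_tilt")
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

q_table = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def select_action(state):
    # Step 1: explore a random action with probability EPSILON,
    # otherwise exploit the current policy.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(q_table[state], key=q_table[state].get)

def update_policy(state, action, reward, next_state):
    # Step 3: update the policy with the aim of maximizing future reward.
    # (Step 2, observing reward and next_state, comes from the live network;
    # concurrent actions in interfering neighbor cells add noise to this
    # update, which is what the claimed coordination method mitigates.)
    best_next = max(q_table[next_state].values())
    q_table[state][action] += ALPHA * (reward + GAMMA * best_next
                                       - q_table[state][action])
```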