US-20260129683-A1 - DYNAMIC REDISTRIBUTION OF RANDOM ACCESS CHANNEL PREAMBLE RESOURCES

Abstract

A method facilitating dynamic redistribution of random access channel (RACH) preamble resources includes facilitating, by a system including at least one processor, submitting a network performance measurement, relating to a performance indicator of a communication network, to a reinforcement learning model; adjusting, by the system based on an output generated by the reinforcement learning model in response to the network performance measurement, a partitioning of a group of RACH preambles, used by the communication network, between a first subgroup of contention-based RACH preambles and a second subgroup of non-contention-based RACH preambles, resulting in an adjusted partitioning of the RACH preambles; and causing, by the system in response to the adjusting, a base station to transmit data relating to the adjusted partitioning of the RACH preambles via a system information broadcast.
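The claimed control loop (feed a network performance measurement to a reinforcement learning model, repartition the preamble pool based on the model's output, and broadcast the new partition in system information) can be sketched as follows. The policy function and payload structure here are hypothetical stand-ins, not taken from the patent; only the 64-preamble pool size comes from the LTE/NR standards.

```python
# Hypothetical end-to-end sketch of the claimed control loop.
# The policy and the broadcast payload are illustrative stand-ins.

TOTAL_PREAMBLES = 64  # LTE/NR define 64 RACH preambles per cell


def toy_policy(measurement):
    """Stand-in for the reinforcement learning model: shift preambles
    toward the contention-based pool when the collision rate is low,
    away from it when collisions are frequent."""
    return -4 if measurement["collision_rate"] > 0.2 else 4


def control_step(measurement, contention_based):
    """One iteration: query the model, adjust the partition (keeping at
    least one preamble in each pool), and build the system-information
    payload the base station would broadcast."""
    delta = toy_policy(measurement)
    cb = max(1, min(TOTAL_PREAMBLES - 1, contention_based + delta))
    sib = {"contention_based": cb, "contention_free": TOTAL_PREAMBLES - cb}
    return cb, sib
```

For example, starting from 52 contention-based preambles, a low measured collision rate moves the split to 56/8, while a high collision rate moves it to 48/16.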

Inventors

  • Ravi Sharma
  • Yasser Al-Eryani
  • Vikas Arora

Assignees

  • DELL PRODUCTS L.P.

Dates

Publication Date
2026-05-07
Application Date
2024-11-04

Claims (20)

  1. A system, comprising: at least one processor; and at least one memory that stores executable instructions that, when executed by the at least one processor, facilitate performance of operations, the operations comprising: inputting measurement data, relating to a performance metric associated with a communication network, to a reinforcement learning model; adjusting, based on an output generated by the reinforcement learning model in response to the inputting of the measurement data, an allocation of random access channel (RACH) preambles, used by the communication network, between a first group of contention-based RACH preambles and a second group of non-contention-based RACH preambles, resulting in an adjusted allocation of the RACH preambles; and causing, in response to the adjusting of the allocation of the RACH preambles, a Node B to transmit information relating to the adjusted allocation of the RACH preambles via a system information broadcast.
  2. The system of claim 1, wherein the performance metric is selected from a group of performance metrics comprising an average user equipment network access delay, a RACH collision rate, a RACH success rate, a data throughput of the communication network, and a handover success rate.
  3. The system of claim 1, wherein the reinforcement learning model generates a state vector, representative of an average of the performance metric over a time window and the allocation of the RACH preambles during the time window, and generates the output based on the state vector.
  4. The system of claim 3, wherein the reinforcement learning model generates the output further based on a reward function, the reward function being a function of a weighted sum of averages of performance metrics, comprising the performance metric, over the time window.
  5. The system of claim 4, wherein the reinforcement learning model generates the output based on maximizing an average cumulative expected result of the reward function.
  6. The system of claim 1, wherein the operations further comprise: communicating the adjusted allocation of the RACH preambles to the Node B via an application programming interface layer connection to the Node B.
  7. The system of claim 1, wherein the operations further comprise: determining that the measurement data indicates a change in the performance metric of at least a threshold amount during a time window, wherein the reinforcement learning model generates the output in response to the determining.
  8. The system of claim 1, wherein the operations further comprise: training the reinforcement learning model using offline data, wherein the inputting of the measurement data to the reinforcement learning model is in response to determining that the training of the reinforcement learning model has successfully completed.
  9. The system of claim 1, wherein the measurement data is first measurement data, wherein the output of the reinforcement learning model is a first output, and wherein the operations further comprise: storing second outputs generated by the reinforcement learning model based on respective portions of second measurement data given as input to the reinforcement learning model prior to the first measurement data; and in response to determining, with reference to a defined similarity criterion, that the first measurement data exhibits at least a threshold degree of similarity to a portion of the respective portions of the second measurement data, restricting the first output of the reinforcement learning model to be within a threshold variance of a second output, of the second outputs and corresponding to the portion of the respective portions of the second measurement data.
  10. A method, comprising: facilitating, by a system comprising at least one processor, submitting a network performance measurement, relating to a performance indicator of a communication network, to a reinforcement learning model; adjusting, by the system based on an output generated by the reinforcement learning model in response to the network performance measurement, a partitioning of a group of random access channel (RACH) preambles, used by the communication network, between a first subgroup of contention-based RACH preambles and a second subgroup of non-contention-based RACH preambles, resulting in an adjusted partitioning of the RACH preambles; and causing, by the system in response to the adjusting, a base station to transmit data relating to the adjusted partitioning of the RACH preambles via a system information broadcast.
  11. The method of claim 10, wherein the performance indicator is selected from a group of performance indicators comprising an average user equipment network access delay, a RACH collision rate, a RACH success rate, a data throughput of the communication network, and a handover success rate.
  12. The method of claim 10, wherein the reinforcement learning model generates a state vector, representative of an average of the performance indicator, as given in the network performance measurement, over a time window and the partitioning of the RACH preambles during the time window, and generates the output based on the state vector.
  13. The method of claim 12, wherein the reinforcement learning model generates the output further based on maximizing an average cumulative expected result of a reward function, and wherein the reward function is a function of a weighted sum of averages of performance indicators, comprising the performance indicator, over the time window.
  14. The method of claim 10, further comprising: communicating, by the system, the adjusted partitioning of the RACH preambles to the base station via an application programming interface layer connection to the base station.
  15. The method of claim 10, wherein the network performance measurement is a first network performance measurement, wherein the output of the reinforcement learning model is a first output, and wherein the method further comprises: storing, by the system, second outputs generated by the reinforcement learning model in response to respective second network performance measurements submitted to the reinforcement learning model prior to the first network performance measurement; and in response to determining, in accordance with a defined similarity metric, that the first network performance measurement exhibits at least a threshold degree of similarity to a second network performance measurement of the second network performance measurements, constraining the first output of the reinforcement learning model to be within a threshold window defined about a second output, of the second outputs and corresponding to the second network performance measurement.
  16. A non-transitory machine-readable medium comprising computer executable instructions that, when executed by at least one processor, facilitate performance of operations, the operations comprising: providing an input to a machine learning model, the input being indicative of a key performance indicator (KPI) corresponding to performance of a communication network; based on an output generated by the machine learning model in response to the input, adjusting a division of a group of random access channel (RACH) preambles, used by the communication network, into a first subgroup of contention-based RACH preambles and a second subgroup of non-contention-based RACH preambles; and causing, in response to the adjusting of the division, Node B network equipment to transmit a system information broadcast comprising data indicative of the first subgroup of contention-based RACH preambles and the second subgroup of non-contention-based RACH preambles.
  17. The non-transitory machine-readable medium of claim 16, wherein the machine learning model generates a state vector, representative of an average of the KPI and the division of the group of RACH preambles during a time window, and generates the output based on the state vector.
  18. The non-transitory machine-readable medium of claim 17, wherein the machine learning model generates the output further based on maximizing an average cumulative expected result of a reward function, and wherein the reward function is a function of a weighted sum of averages of KPIs, comprising the KPI, within the time window.
  19. The non-transitory machine-readable medium of claim 16, wherein the operations further comprise: communicating the data indicative of the first subgroup of contention-based RACH preambles and the second subgroup of non-contention-based RACH preambles to the Node B network equipment via an application programming interface layer connection to the Node B network equipment.
  20. The non-transitory machine-readable medium of claim 16, wherein the input is a first input, wherein the output of the machine learning model is a first output, and wherein the operations further comprise: storing second outputs generated by the machine learning model in response to respective second inputs provided to the machine learning model, the second inputs being indicative of respective states of the KPI of the communication network prior to providing the first input to the machine learning model; and in response to determining that the first input exhibits at least a threshold degree of similarity to a second input of the second inputs, restricting the first output of the machine learning model to be within a threshold amount of a second output, of the second outputs and corresponding to the second input.
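Two of the recurring elements in the claims above lend themselves to a short sketch: the reward function of claims 4, 13, and 18 (a weighted sum of per-KPI averages over a time window) and the similarity-based output constraint of claims 9, 15, and 20 (if a new measurement resembles a stored one, keep the new output within a threshold variance of the stored output). The KPI names, the scalar measurement representation, and the absolute-difference similarity criterion below are illustrative assumptions; the claims leave these unspecified.

```python
# Hypothetical sketch of the claimed reward function and output constraint.
# KPI names, weights, and the similarity criterion are illustrative only.

def reward(kpi_windows, weights):
    """Weighted sum of per-KPI averages over a time window (claims 4/13/18).

    kpi_windows maps a KPI name to its samples over the window;
    weights maps a KPI name to its weight in the sum."""
    total = 0.0
    for name, samples in kpi_windows.items():
        avg = sum(samples) / len(samples)
        total += weights.get(name, 0.0) * avg
    return total


def constrain_output(new_input, new_output, history, sim_threshold, variance):
    """If the new measurement is within sim_threshold of a stored one,
    clamp the new output to within +/- variance of the corresponding
    stored output (claims 9/15/20); otherwise pass it through."""
    for past_input, past_output in history:
        if abs(new_input - past_input) <= sim_threshold:
            lo, hi = past_output - variance, past_output + variance
            return max(lo, min(hi, new_output))
    return new_output
```

A negative weight on a "bad" KPI (e.g., collision rate) lets the same weighted sum reward improvement and penalize regression, which is presumably why the claims phrase the reward as a weighted sum rather than a single metric.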

Description

BACKGROUND

Modern wireless communication networks, such as those built on the Long Term Evolution (LTE) and/or fifth generation (5G) New Radio (NR) standards, rely on the efficient management of random access channel (RACH) preambles for various purposes, e.g., establishing initial communication between user equipment (UE) and a base station and/or facilitating activities like network entry, handovers, or resource requests. RACH preambles are categorized into contention-based preambles, which are used in scenarios in which potential collisions can occur due to multiple UEs simultaneously attempting access, and non-contention-based (or contention-free) preambles, which are used in scenarios in which the risk of collision is minimal.

SUMMARY

The following summary is a general overview of various embodiments disclosed herein and is not intended to be exhaustive or limiting upon the disclosed embodiments. Embodiments are better understood upon consideration of the detailed description below in conjunction with the accompanying drawings and claims.

In an implementation, a system is described herein. The system can include at least one processor and at least one memory that stores executable instructions that, when executed by the at least one processor, facilitate performance of operations. The operations can include inputting measurement data, relating to a performance metric associated with a communication network, to a reinforcement learning model. The operations can further include adjusting, based on an output generated by the reinforcement learning model in response to the inputting of the measurement data, an allocation of random access channel (RACH) preambles, used by the communication network, between a first group of contention-based RACH preambles and a second group of non-contention-based RACH preambles, resulting in an adjusted allocation of the RACH preambles. The operations can additionally include causing, in response to the adjusting of the allocation of the RACH preambles, a Node B to transmit information relating to the adjusted allocation of the RACH preambles via a system information broadcast.

In another implementation, a method is described herein. The method can include facilitating, by a system including at least one processor, submitting a network performance measurement, relating to a performance indicator of a communication network, to a reinforcement learning model. The method can also include adjusting, by the system based on an output generated by the reinforcement learning model in response to the network performance measurement, a partitioning of a group of RACH preambles, used by the communication network, between a first subgroup of contention-based RACH preambles and a second subgroup of non-contention-based RACH preambles, resulting in an adjusted partitioning of the RACH preambles. The method can further include causing, by the system in response to the adjusting, a base station to transmit data relating to the adjusted partitioning of the RACH preambles via a system information broadcast.

In an additional implementation, a non-transitory machine-readable medium is described herein that can include instructions that, when executed by at least one processor, facilitate performance of operations. The operations can include providing an input to a machine learning model, the input being indicative of a key performance indicator (KPI) corresponding to performance of a communication network; based on an output generated by the machine learning model in response to the input, adjusting a division of a group of RACH preambles, used by the communication network, into a first subgroup of contention-based RACH preambles and a second subgroup of non-contention-based RACH preambles; and causing, in response to the adjusting of the division, Node B network equipment to transmit a system information broadcast comprising data indicative of the first subgroup of contention-based RACH preambles and the second subgroup of non-contention-based RACH preambles.

DESCRIPTION OF DRAWINGS

Various non-limiting embodiments of the subject disclosure are described with reference to the following figures, wherein like reference numerals refer to like parts throughout unless otherwise specified.

FIG. 1 is a block diagram of a system that facilitates dynamic redistribution of random access channel (RACH) preamble resources in accordance with various implementations described herein.

FIGS. 2-3 are diagrams illustrating example random access procedures that can be utilized by devices in a communication network in accordance with various implementations described herein.

FIGS. 4-8 are block diagrams of additional systems that facilitate dynamic redistribution of RACH preamble resources in accordance with various implementations described herein.

FIGS. 9-10 are flow diagrams of respective methods that facilitate dynamic redistribution of RACH preamble resources in accordance with various implementations described herein.
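FIGS. 2-3 are described as illustrating example random access procedures. The standard LTE/NR contention-based procedure, which such figures typically depict, is a four-message exchange; contention-free access is shorter because the network assigns a dedicated preamble in advance. The sketch below enumerates that sequence as data; the tuple layout is an illustrative convention, while the message roles follow the 3GPP standards.

```python
# The standard LTE/NR four-step contention-based random access exchange.
# Message labels follow 3GPP convention; the data structures are illustrative.
CONTENTION_BASED_STEPS = [
    ("Msg1", "UE -> base station",
     "random access preamble drawn from the contention-based pool"),
    ("Msg2", "base station -> UE",
     "random access response with timing advance and uplink grant"),
    ("Msg3", "UE -> base station",
     "scheduled transmission carrying the UE identity"),
    ("Msg4", "base station -> UE",
     "contention resolution addressed to the winning UE"),
]

# With a dedicated (contention-free) preamble assigned in advance, no
# contention can occur, so the exchange reduces to the preamble and its
# response (Msg3/Msg4 contention resolution is unnecessary).
CONTENTION_FREE_STEPS = CONTENTION_BASED_STEPS[:2]
```

This asymmetry is what makes the claimed partitioning a meaningful trade-off: contention-free preambles give fast, collision-free access but are a scarce dedicated resource, while contention-based preambles are shared but risk collisions that the RL model's reward (e.g., the RACH collision rate KPI) can observe.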