
CN-122019121-A - Method for managing hardware resources in an open RAN cloud platform, and computer program

CN122019121A

Abstract

A method of managing hardware resources in an open radio access network (open RAN) cloud platform is provided. The cloud platform includes a plurality of processor cores. The cloud platform is configured to host a plurality of application processes. The method includes dynamically assigning zero or more of the plurality of processor cores to each of the plurality of application processes based on processing requirements of the respective application processes.
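The assignment described in the abstract can be sketched as follows. This is an illustrative sketch only; the function and process names are assumptions, not taken from the patent.

```python
# Illustrative sketch (names are assumptions, not from the patent):
# dynamically assign zero or more cores to each application process
# based on its current processing requirement.

def assign_cores(core_ids, requirements):
    """Map each application process to a list of assigned core IDs.

    requirements: dict mapping process name -> number of cores needed.
    A requirement of zero means the process receives zero cores.
    """
    free = list(core_ids)
    assignment = {}
    for proc, needed in requirements.items():
        # Never assign more cores than remain in the free pool.
        take = min(needed, len(free))
        assignment[proc] = [free.pop() for _ in range(take)]
    return assignment

# Example: a currently idle application process receives zero cores.
result = assign_cores([0, 1, 2, 3], {"o-du": 2, "o-cu": 1, "idle-app": 0})
```

In a real platform the `requirements` values would be refreshed continuously (e.g. per scheduling interval), so a process whose load drops to zero releases its cores back to the pool.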

Inventors

  • A.LUO
  • Z.LU

Assignees

  • Vodafone Group Services Limited (沃达丰集团服务有限公司)

Dates

Publication Date
2026-05-12
Application Date
2025-11-12
Priority Date
2024-11-12

Claims (16)

  1. A method of managing hardware resources in an open radio access network (open RAN) cloud platform, wherein the cloud platform comprises a plurality of processor cores, wherein the cloud platform is configured to host a plurality of application processes, the method comprising: dynamically assigning zero or more of the plurality of processor cores to each of the plurality of application processes based on processing requirements of the respective application process.
  2. The method of claim 1, wherein dynamically assigning zero or more cores to each of the plurality of application processes comprises assigning zero cores to an application process if the application process's respective processing requirement is zero.
  3. The method of claim 1 or claim 2, wherein the cloud platform is a containerized cloud platform, and wherein each of the application processes is hosted via a respective Pod managed by the containerized cloud platform.
  4. The method of any preceding claim, wherein one or more of the plurality of processor cores are dedicated cores allocated to a scheduler of the cloud platform.
  5. The method of claim 4, wherein each of the plurality of processor cores that is not one of the dedicated cores forms a resource pool, and wherein dynamically assigning zero or more of the plurality of processor cores to each of the plurality of application processes comprises dynamically assigning zero or more processor cores from the resource pool to each of the plurality of application processes.
  6. The method of any preceding claim, wherein the processing requirements of each application process comprise forecasted processing requirements, and wherein a model is used to forecast the processing requirements of the application process.
  7. The method of claim 6, wherein the model is an artificial intelligence (AI) model.
  8. The method of claim 6 or claim 7, wherein the model is trained using historical data of the processing requirements of an application process, or using synthetic training data.
  9. The method of any of claims 6 to 8, wherein each of the plurality of processor cores is operable in one or more processor idle sleep states, the method further comprising: transitioning a processor core of the plurality of processor cores to an idle sleep state based on a forecasted processing requirement of a respective application process to which the processor core is assigned.
  10. The method of any of claims 6 to 9, wherein each of the plurality of processor cores is operable in one or more power performance states, the method further comprising: transitioning a processor core of the plurality of processor cores to a power performance state based on a forecasted processing requirement of a respective application process to which the processor core is assigned.
  11. The method of any preceding claim, wherein the processing requirements comprise a data processing load and/or a network traffic load.
  12. The method of any preceding claim, wherein each of the plurality of processor cores is operable in a normal mode and one or more power saving modes, the method further comprising: switching one or more of the plurality of processor cores to a power saving mode based on the processing requirements of the plurality of application processes, and/or transitioning one or more of the plurality of processor cores from a power saving mode to the normal mode based on the processing requirements of the plurality of application processes.
  13. The method of claim 12, wherein the one or more power saving modes comprise: one or more processor idle sleep states, and/or one or more power performance states.
  14. The method of any preceding claim, wherein the cloud platform comprises one or more servers, preferably a plurality of servers, and wherein the plurality of processor cores comprises a respective plurality of processor cores from each server.
  15. A cloud platform configured to perform the method of any preceding claim.
  16. A computer program comprising instructions which, when executed on a processor, cause the processor to perform the method of any one of claims 1 to 13.
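The power-state selection of claims 9, 10 and 12 can be sketched as below. This is an assumption-laden illustration, not the patent's implementation: the function name, the four-level performance scale, and the threshold arithmetic are all illustrative choices.

```python
# Illustrative sketch of claims 9, 10 and 12 (names and thresholds are
# assumptions, not from the patent): a core whose assigned process has a
# forecasted requirement of zero is sent to an idle sleep state; otherwise
# a power performance state is chosen in proportion to the forecast.

def select_power_state(forecast_load, max_load=100):
    """Return ("sleep", None) or ("performance", level) for one core.

    forecast_load: forecasted processing requirement of the process the
    core is assigned to, in the range 0..max_load.
    """
    if forecast_load <= 0:
        return ("sleep", None)  # processor idle sleep state (claim 9)
    # Map the forecast onto four performance levels (claim 10): level 0
    # is full speed, level 3 is the slowest normal-mode setting.
    level = 3 - min(3, (4 * forecast_load - 1) // max_load)
    return ("performance", level)
```

A scheduler would apply this per core per forecasting interval, which covers both directions of claim 12: entering a power saving mode when forecasted load falls, and returning to the normal mode when it rises.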

Description

Method for managing hardware resources in an open RAN cloud platform, and computer program

Technical Field

The invention relates to the management of hardware resources in an open radio access network cloud platform. In particular, the present invention relates to a method of dynamically allocating CPU cores to application processes.

Vocabulary list

  • RAN - radio access network
  • MNO - mobile network operator
  • O-RAN - open RAN alliance
  • O-DU - open distributed unit
  • O-CU - open central unit
  • O-RU - open radio unit
  • OS - operating system
  • GPU - graphics processing unit
  • API - application programming interface
  • SMO - service management and orchestration
  • DMS - deployment management service
  • NF - network function
  • IMS - infrastructure management services
  • COTS - commercial off-the-shelf
  • CaaS - container as a service
  • CPU - central processing unit
  • TTI - transmission time interval
  • MIMO - multiple input multiple output
  • UE - user equipment
  • BS - base station
  • ABS - advanced base station
  • BTS - base transceiver station
  • BSS - basic service set
  • ESS - extended service set
  • AP - access point
  • NB - node B (radio base station)
  • eNB - evolved node B
  • gNB - next generation node B
  • TRP - transmission and reception point
  • PS - processing server
  • TE - terminal equipment
  • MS - mobile station
  • MT - mobile terminal
  • UT - user terminal
  • SS - subscriber station
  • PDA - personal digital assistant
  • CDMA - code division multiple access
  • FDMA - frequency division multiple access
  • TDMA - time division multiple access
  • OFDMA - orthogonal frequency division multiple access
  • SC-FDMA - single carrier frequency division multiple access
  • MC-FDMA - multi-carrier frequency division multiple access
  • UTRA - universal terrestrial radio access
  • GSM - Global System for Mobile communications
  • GPRS - general packet radio service
  • EDGE - enhanced data rates for GSM evolution
  • IEEE - Institute of Electrical and Electronics Engineers
  • E-UTRA - evolved UTRA
  • UMTS - universal mobile telecommunications system
  • E-UMTS - evolved UMTS
  • 3GPP - third generation partnership project
  • DL - downlink
  • UL - uplink
  • LTE - long term evolution (4G)
  • LTE-A - LTE advanced
  • NR - new radio (5G)
  • FDD - frequency division duplex
  • TDD - time division duplex
  • CRS - cell-specific reference signal
  • CSI-RS - channel state information reference signal
  • FPGA - field programmable gate array
  • ASIC - application-specific integrated circuit
  • DSP - digital signal processor
  • CD-ROM - compact disc read-only memory
  • DVD-ROM - digital versatile disc read-only memory
  • ROM - read-only memory
  • RAM - random access memory
  • EEPROM - electrically erasable programmable read-only memory
  • EPROM - erasable programmable read-only memory

Background

An open RAN is a technical architecture concept that aims to decouple the hardware and software components of a radio access network (RAN). It is a RAN built around open, interoperable interfaces and virtualization. In prior (non-open) RANs, hardware and software components are typically proprietary, and equipment is typically obtained from a single vendor to ensure seamless functionality, security, and efficiency. In contrast, open RANs introduce open standards for both hardware and software that enable interoperability between network elements from different vendors.

For mobile network operators (MNOs), the open RAN has strategic importance: it promotes supplier diversity, allowing new suppliers to be integrated and enhancing the flexibility of the supply chain. It also brings energy efficiency gains by enabling targeted improvements in specific areas of the RAN. Furthermore, open RANs facilitate innovation and competition by providing a more dynamic and efficient network environment, provide opportunities to work with specialist vendors, and facilitate resource optimization by allowing software upgrades without requiring hardware replacement. Open RANs are therefore important in the long-term network innovation strategy of MNOs, providing energy efficiency, supply chain diversity, enhanced resilience, and promoting innovation and competition.

Fig. 1 illustrates some elements of an example open RAN system 100 implemented as a cloud computing platform (O-Cloud).
The system 100 may be described with reference to different hardware and software layers of a platform. At the O-Cloud node layer 110, the system includes one or more physical infrastructure nodes 120A, 120N that meet the O-RAN requirements. Each physical infrastructure node 120A includes computing 121, networking 122, GPU 123, and storage 124 components, as well as acceleration techniques 125 for RAN operations (such as forward error correction and other computation-intensive operations offloaded to dedicated hardware). Each physical infrastructure node 120A, 120N is configured to host an associated O-RAN network function 150, 160 implemented at the open RAN application layer 140. The network functions 150, 160 implemented at the open RAN application layer 140 may include an O-CU 160, an O-DU 150, and an O-RU. At the O-Cloud hypervisor or container/OS layer 130, there is a set of Cloud functions to enable open RAN applications 150, 160 to run on one or more O-Cloud hardware nodes 120A. Cloud functionality may include supporting software components such as an operating system, a containe