US-20260129090-A1 - SYSTEM AND METHOD FOR DYNAMICALLY SELECTING EDGE APPLICATION SERVERS

US20260129090A1

Abstract

Aspects of the subject disclosure may include, for example, a device, having: a processing system including a processor; and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations of: subscribing to core network functions of a network that provide statistics of a performance of edge application servers (EAS) implementing an application in the network; aggregating updated model parameters from the EAS and user equipment (UE) to train a horizontal federated learning (HFL) model; receiving the statistics from the core network functions; updating a vertical federated learning (VFL) model based on the statistics received; receiving a request from a UE to use the application; and selecting a first EAS to provide the application based on the HFL model and the VFL model. Other embodiments are disclosed.

Inventors

  • Rohit Abhishek
  • Farooq Bari
  • Mohamed Khalil

Assignees

  • AT&T INTELLECTUAL PROPERTY I, L.P.

Dates

Publication Date
2026-05-07
Application Date
2024-11-07

Claims (20)

  1. A device, comprising: a processing system including a processor; and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations, the operations comprising: subscribing to core network functions of a network that provide statistics of a performance of edge application servers (EAS) implementing an application in the network; aggregating updated model parameters from the EAS and user equipment (UE) to train a horizontal federated learning (HFL) model; receiving the statistics from the core network functions; updating a vertical federated learning (VFL) model based on the statistics received; receiving a request from a UE to use the application; and selecting a first EAS to provide the application based on the HFL model and the VFL model.
  2. The device of claim 1, wherein the statistics comprise a load of the EAS.
  3. The device of claim 1, wherein the EAS register with the core network functions.
  4. The device of claim 1, wherein the EAS train local HFL models and provide the updated model parameters from the local HFL models to the device.
  5. The device of claim 1, wherein the EAS train local VFL models and provide updated model parameters for the local VFL models to the device; and wherein the operations further comprise updating the VFL model based on the updated model parameters for the local VFL models.
  6. The device of claim 1, wherein the UE trains local HFL models using raw data and provides the updated model parameters from the local HFL models to the device.
  7. The device of claim 6, wherein the raw data comprises application requirements, mobility patterns, performance metrics including CPU utilization or battery consumption, or a combination thereof.
  8. The device of claim 1, wherein the updated model parameters are provided using a secure multiparty computation method.
  9. The device of claim 1, wherein the processing system comprises a plurality of processors operating in a distributed computing environment.
  10. A non-transitory machine-readable medium, comprising executable instructions that, when executed by a processing system including a processor, facilitate performance of operations, the operations comprising: aggregating updated model parameters from edge application servers (EAS) and user equipment (UE) in a network to train a horizontal federated learning (HFL) model; receiving statistics of a performance of the EAS implementing an application in the network; updating a vertical federated learning (VFL) model based on the statistics; receiving a request from a UE to use the application; and selecting a first EAS to provide the application based on the HFL model and the VFL model.
  11. The non-transitory machine-readable medium of claim 10, wherein the statistics comprise a load of the EAS.
  12. The non-transitory machine-readable medium of claim 10, wherein the EAS register with core network functions.
  13. The non-transitory machine-readable medium of claim 10, wherein the EAS train local HFL models and provide the updated model parameters from the local HFL models.
  14. The non-transitory machine-readable medium of claim 10, wherein the EAS train local VFL models and provide updated model parameters for the local VFL models; and wherein the operations further comprise updating the VFL model based on the updated model parameters for the local VFL models.
  15. The non-transitory machine-readable medium of claim 10, wherein the UE trains local HFL models using raw data and provides the updated model parameters from the local HFL models.
  16. The non-transitory machine-readable medium of claim 15, wherein the raw data comprises application requirements, mobility patterns, performance metrics including CPU utilization or battery consumption, or a combination thereof.
  17. The non-transitory machine-readable medium of claim 10, wherein the updated model parameters are provided using a secure multiparty computation method.
  18. The non-transitory machine-readable medium of claim 10, wherein the processing system comprises a plurality of processors operating in a distributed computing environment.
  19. A method, comprising: aggregating, by a processing system including a processor, updated model parameters received from edge application servers (EAS) and user equipment (UE) in a network; training, by the processing system, a horizontal federated learning (HFL) model using the updated model parameters; receiving, by the processing system, statistics of a performance of the EAS implementing an application in the network; updating, by the processing system, a vertical federated learning (VFL) model based on the statistics; receiving, by the processing system, a request by a UE to use the application; and selecting, by the processing system, a first EAS to provide the application based on the HFL model and the VFL model.
  20. The method of claim 19, wherein the updated model parameters are provided using a secure multiparty computation method.
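The flow recited in claim 19 can be illustrated with a minimal sketch, assuming FedAvg-style parameter averaging for the HFL aggregation, an exponential moving average for folding per-EAS load statistics into VFL-side scores, and a weighted combination for the final selection. The function names (`fedavg`, `update_vfl_scores`, `select_eas`) and the scoring rule are hypothetical illustrations, not the claimed implementation:

```python
# Hypothetical sketch of the selection flow in claim 19. Aggregation,
# scoring, and selection rules here are illustrative assumptions.

def fedavg(parameter_sets):
    """Aggregate updated model parameters received from EAS and UE
    clients by simple (unweighted) federated averaging."""
    n = len(parameter_sets)
    keys = parameter_sets[0].keys()
    return {k: sum(p[k] for p in parameter_sets) / n for k in keys}

def update_vfl_scores(scores, statistics, alpha=0.5):
    """Fold fresh per-EAS performance statistics (e.g., load in [0, 1])
    into VFL-side scores with an exponential moving average; lower load
    yields a higher score."""
    for eas_id, load in statistics.items():
        prev = scores.get(eas_id, 0.0)
        scores[eas_id] = (1 - alpha) * prev + alpha * (1.0 - load)
    return scores

def select_eas(hfl_scores, vfl_scores, weight=0.5):
    """Combine HFL-derived and VFL-derived per-EAS scores and return
    the identifier of the best-scoring EAS."""
    combined = {
        eas_id: weight * hfl_scores.get(eas_id, 0.0)
        + (1 - weight) * vfl_scores.get(eas_id, 0.0)
        for eas_id in set(hfl_scores) | set(vfl_scores)
    }
    return max(combined, key=combined.get)
```

In this sketch, an incoming UE request would trigger `select_eas` using the most recent HFL aggregate and VFL statistics, so the choice adapts as model parameters and server load evolve.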

Description

FIELD OF THE DISCLOSURE

The subject disclosure relates to a system and method for dynamically selecting edge application servers.

BACKGROUND

An edge application server (also known as an edge server) is a type of server located at the edge of a network, closer to the end users or devices. This proximity helps reduce latency and improve the performance of applications by processing data locally rather than sending it back to a centralized data center.

Edge servers are strategically placed near the end users or devices to minimize the distance data has to travel. This results in faster response times and reduced latency. Edge servers handle data processing tasks locally, which is crucial for applications requiring real-time processing, such as IoT devices, autonomous vehicles, and augmented reality. By processing data at the edge of the network, these servers reduce the amount of data that needs to be sent over the network, saving bandwidth and reducing congestion. Edge servers are provided with resources that can be scaled out to handle increasing loads, making them suitable for applications with fluctuating demand.

The primary criterion used by a network to select an edge server is the physical closeness of the server to the end users, to ensure low latency and high-speed data transfer. However, the network may take into account other aspects of the network, such as network conditions, resource availability, application requirements, load balancing, and mobility. Network conditions describe the current state of the network, including traffic load and connectivity quality; servers having better network conditions are preferred. The network may also consider the server's available resources, such as CPU, memory, and storage, to ensure that the server can handle the required processing tasks. Specific needs of the application, such as processing power, storage, and security features, are also considered. Some applications may need specialized hardware or software.
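The multi-factor selection described in the background (proximity, network conditions, resource availability) can be sketched as a weighted scoring function. The factor weights and the latency model below are illustrative assumptions for exposition, not part of the disclosure; real networks apply operator-specific policies:

```python
# Illustrative sketch of multi-factor edge-server scoring. Weights and
# the latency decay model are assumptions, not operator policy.

def score_edge_server(latency_ms, load, free_cpu, weights=(0.5, 0.3, 0.2)):
    """Score one candidate: lower latency and load and higher free CPU
    are better. `load` and `free_cpu` are assumed normalized to [0, 1]."""
    w_lat, w_load, w_cpu = weights
    latency_score = 1.0 / (1.0 + latency_ms / 10.0)  # decays with distance
    return w_lat * latency_score + w_load * (1.0 - load) + w_cpu * free_cpu

def pick_server(candidates):
    """Return the id of the best-scoring candidate.

    candidates: dict mapping server id -> (latency_ms, load, free_cpu)."""
    return max(candidates, key=lambda cid: score_edge_server(*candidates[cid]))
```

Under these example weights, proximity dominates, so a nearby, moderately loaded server can outrank a distant but idle one, which matches the background's statement that physical closeness is the primary criterion.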
To prevent any single server from becoming overloaded, load balancing techniques are used to distribute the workload evenly across multiple servers. For mobile users, the network dynamically selects or switches to the nearest edge server as the user moves, maintaining consistent performance. By considering these factors, networks can effectively select the most suitable edge application server, ensuring optimal performance and user experience. However, edge server selection often relies on static information or limited metrics that neglect factors like real-time performance, security posture, and resource usage. This can lead to suboptimal server choices that impact application performance and user experience. Additionally, traditional approaches lack adaptability to changing network conditions or server behavior.

BRIEF DESCRIPTION OF THE DRAWINGS

Reference will now be made to the accompanying drawings, which are not necessarily drawn to scale, and wherein:

FIG. 1 is a block diagram illustrating an exemplary, non-limiting embodiment of a communications network in accordance with various aspects described herein.

FIG. 2A is a block diagram illustrating an example, non-limiting embodiment of an edge application server discovery process architecture functioning within the communication network of FIG. 1 in accordance with various aspects described herein.

FIG. 2B is a block diagram illustrating an example, non-limiting embodiment of a system for selecting an edge application server functioning within the communication network of FIG. 1 in accordance with various aspects described herein.

FIG. 2C depicts an illustrative embodiment of a method in accordance with various aspects described herein.

FIG. 3 is a block diagram illustrating an example, non-limiting embodiment of a virtualized communication network in accordance with various aspects described herein.

FIG. 4 is a block diagram of an example, non-limiting embodiment of a computing environment in accordance with various aspects described herein.

FIG. 5 is a block diagram of an example, non-limiting embodiment of a mobile network platform in accordance with various aspects described herein.

FIG. 6 is a block diagram of an example, non-limiting embodiment of a communication device in accordance with various aspects described herein.

DETAILED DESCRIPTION

The subject disclosure describes, among other things, illustrative embodiments for dynamically selecting an edge application server to provide an application for user equipment. Other embodiments are described in the subject disclosure.

One or more aspects of the subject disclosure include a device, having: a processing system including a processor; and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations of: subscribing to core network functions of a network that provide statistics of a performance of edge application servers (EAS) implementing an application in the network;