EP-4507275-B1 - MESSAGE TRANSMISSION BETWEEN POINTS OF PRESENCE
Inventors
- A, CHANDRASEKHAR
- R, JAYANTHI
Dates
- Publication Date: 2026-05-13
- Application Date: 2023-12-13
Claims (12)
- A method, comprising: receiving (1010), by one or more network devices associated with a first point of presence, POP, in a first cloud deployment, a message associated with a tenant; identifying (1020), by the one or more network devices, based at least in part on the message, one or more second POPs, associated with the tenant, in one or more second cloud deployments, wherein identifying the one or more second POPs comprises: identifying an indication, in the message, that the message is associated with a global set of POPs or a subset of POPs associated with the tenant, and identifying, based at least in part on identifying the indication that the message is associated with the global set of POPs or the subset of POPs, the one or more second POPs; buffering the message in: if the message is associated with the global set of POPs, one or more queues respectively associated with the one or more second POPs or a central POP, and if the message is associated with the subset of POPs, one or more queues respectively associated with the one or more second POPs; and transmitting (1030), by the one or more network devices, the message to the one or more second POPs.
- The method of claim 1, wherein the message includes one or more of a logical grouping associated with the message, a payload, or a message key associated with an order of a plurality of messages, including the message, that are associated with the message key.
- The method of claim 1 or claim 2, further comprising: buffering the message in a queue associated with network federation.
- The method of any preceding claim, further comprising: receiving, from the one or more second POPs, one or more acknowledgements associated with the message.
- The method of any preceding claim, further comprising: determining that a time window associated with acknowledgements associated with the message has expired; and retransmitting the message to the one or more second POPs.
- The method of any preceding claim, wherein identifying the one or more second POPs includes: identifying the one or more second POPs based at least in part on a database that maps POPs, including the first POP and the one or more second POPs, to the tenant; and optionally wherein the database maintains a dynamic mapping of the POPs to the tenant; and/or the database maintains a dynamic indication of which of the POPs is a central POP for the tenant.
- One or more network devices (900) associated with a first point of presence, POP, in a first cloud deployment, the one or more network devices comprising: one or more memories (930); and one or more processors (920) to: receive a message associated with a tenant; identify, based at least in part on the message, one or more second POPs, associated with the tenant, in one or more second cloud deployments, wherein identifying the one or more second POPs comprises: identifying an indication, in the message, that the message is associated with a global set of POPs or a subset of POPs associated with the tenant, and identifying, based at least in part on identifying the indication that the message is associated with the global set of POPs or the subset of POPs, the one or more second POPs; buffer the message in: if the message is associated with the global set of POPs, one or more queues respectively associated with the one or more second POPs or a central POP, and if the message is associated with the subset of POPs, one or more queues respectively associated with the one or more second POPs; and transmit the message to the one or more second POPs.
- The one or more network devices of claim 7, wherein the one or more processors are further to: receive a plurality of messages, including the message, from a plurality of tenants, including the tenant; and load-balance the plurality of messages across a plurality of resources; and optionally wherein the one or more processors, when load-balancing the plurality of messages, are to: load-balance the plurality of messages based on a plurality of first weights associated with a plurality of partitions and a plurality of second weights associated with the plurality of tenants; and further optionally wherein one or more first resources of the plurality of resources are associated with one or more first partitions, one or more second resources of the plurality of resources are associated with one or more second partitions, a first priority of the one or more first partitions is higher than a second priority of the one or more second partitions, and the one or more processors, when load-balancing the plurality of messages, are to: load-balance one or more first messages of the plurality of messages among the one or more first partitions based at least in part on the one or more first messages being associated with a first tenant, of the plurality of tenants, that is associated with the first priority, or load-balance one or more second messages of the plurality of messages among the one or more second partitions based at least in part on the one or more second messages being associated with a second tenant, of the plurality of tenants, that is associated with the second priority.
- The one or more network devices of any of claims 7-8, wherein the one or more processors are further to: predict a message rate associated with the tenant.
- A computer-readable medium storing a set of instructions, the set of instructions comprising: one or more instructions that, when executed by one or more processors of one or more network devices associated with a first point of presence, POP, in a first cloud deployment, cause the one or more network devices to: receive (110) a message associated with a tenant; identify (120), based at least in part on the message, one or more second POPs, associated with the tenant, in one or more second cloud deployments, wherein identifying the one or more second POPs comprises: identifying an indication, in the message, that the message is associated with a global set of POPs or a subset of POPs associated with the tenant, and identifying, based at least in part on identifying the indication that the message is associated with the global set of POPs or the subset of POPs, the one or more second POPs; buffer the message in: if the message is associated with the global set of POPs, one or more queues respectively associated with the one or more second POPs or a central POP, and if the message is associated with the subset of POPs, one or more queues respectively associated with the one or more second POPs; and transmit (130) the message to the one or more second POPs.
- The computer-readable medium of claim 10, wherein the message is associated with a service, and wherein the one or more instructions further cause the one or more network devices to: buffer the message in a queue associated with the service; and/or wherein the message includes a message key associated with an order of a plurality of messages, including the message, that are associated with the message key.
- The computer-readable medium of claim 10 or claim 11, wherein the one or more instructions that cause the one or more network devices to transmit the message to the one or more second POPs cause the one or more network devices to: transmit the message to the one or more second POPs based at least in part on a queue size of a queue associated with the message satisfying a configured batch size threshold or a batch time associated with the queue satisfying a configured batch time threshold.
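The routing and buffering recited in independent claim 1, combined with the batch-size flush condition of claim 12, can be sketched as follows. This is a minimal illustrative sketch, not the claimed implementation: the class and names (`PopProxy`, `tenant_pop_map`, `Message.scope`) are hypothetical, and the transport to the second POPs is stubbed out.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Message:
    tenant: str
    payload: bytes
    scope: str          # the indication: "global" set of POPs or a "subset"
    subset: tuple = ()  # explicit POP subset when scope == "subset"

class PopProxy:
    """Hypothetical proxy at the first POP: identifies the second POPs for a
    tenant, buffers the message in a queue per destination, and transmits
    when a configured batch size threshold is met."""

    def __init__(self, tenant_pop_map, central_pop=None, batch_size=2):
        self.tenant_pop_map = tenant_pop_map  # maps tenants to POPs (cf. claim 6)
        self.central_pop = central_pop
        self.batch_size = batch_size          # batch size threshold (cf. claim 12)
        self.queues = defaultdict(list)       # one queue per destination POP
        self.sent = []                        # record of (pop, tenant) transmissions

    def receive(self, msg):
        # Identify the second POPs from the indication carried in the message.
        if msg.scope == "global":
            # Global set: queue per second POP, or a single central-POP queue.
            targets = [self.central_pop] if self.central_pop \
                else self.tenant_pop_map[msg.tenant]
        else:
            targets = list(msg.subset)
        # Buffer the message in the queue associated with each target POP and
        # flush any queue that reaches the batch size threshold.
        for pop in targets:
            self.queues[pop].append(msg)
            if len(self.queues[pop]) >= self.batch_size:
                self.flush(pop)

    def flush(self, pop):
        # Transmit the buffered batch to the second POP (transport stubbed out).
        for msg in self.queues.pop(pop, []):
            self.sent.append((pop, msg.tenant))
```

For example, `PopProxy({"t1": ["pop-eu", "pop-us"]}, batch_size=1)` delivers a global-scope message for tenant `t1` to both of the tenant's peer POPs; the acknowledgement time window and retransmission of claims 4-5 are omitted for brevity.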
Description
BACKGROUND
A secure access service edge (SASE) architecture integrates networking and security while providing direct, protected access for geographically dispersed users. Secure service edge (SSE) capabilities leverage the cloud to optimize network and security experiences. SASE deployments can include central deployments and point of presence (POP) cloud deployments. US 2014/022951 A1 relates to a logical inter-cloud dispatcher and establishing a network communication path between a source node and a destination node. US 2022/086220 A1 relates to traffic load balancing between a plurality of points of presence of a cloud computing infrastructure.

SUMMARY
Particular aspects are set out in the appended independent claims. Various optional embodiments are set out in the dependent claims.

Some implementations described herein relate to a method. The method may include receiving, by one or more network devices associated with a first point of presence (POP) in a first cloud deployment, a message associated with a tenant. The method may include identifying, by the one or more network devices, based at least in part on the message, one or more second POPs, associated with the tenant, in one or more second cloud deployments. The method may include transmitting, by the one or more network devices, the message to the one or more second POPs.

Some implementations described herein relate to one or more network devices. The one or more network devices may include one or more memories and one or more processors. The one or more processors may receive a message associated with a tenant. The one or more processors may identify, based at least in part on the message, one or more second POPs, associated with the tenant, in one or more second cloud deployments. The one or more processors may transmit the message to the one or more second POPs.

Some implementations described herein relate to a non-transitory computer-readable medium that stores a set of instructions.
The set of instructions includes one or more instructions that, when executed by one or more processors of one or more network devices, may cause the one or more network devices to receive a message associated with a tenant. The one or more instructions, when executed by one or more processors of the one or more network devices, may cause the one or more network devices to identify, based at least in part on the message, one or more second POPs, associated with the tenant, in one or more second cloud deployments. The one or more instructions, when executed by one or more processors of the one or more network devices, may cause the one or more network devices to transmit the message to the one or more second POPs.

BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a diagram of an example implementation associated with message transmission between POPs.
Fig. 2 is a diagram of an example implementation associated with a system including a plurality of POPs in respective cloud deployments and a region manager.
Fig. 3 is a diagram of an example implementation associated with queue topology in a POP deployment.
Fig. 4 is a diagram of an example implementation associated with programmability support.
Fig. 5 is a diagram of an example implementation associated with a multi-tenant software-as-a-service (SAAS) environment.
Fig. 6 is a diagram of an example implementation associated with load-balancing based on partitions.
Fig. 7 is a diagram of an example implementation associated with programmable federation exclusiveness in message bus proxies in multi-cloud environments.
Fig. 8 is a diagram of an example environment in which systems and/or methods described herein may be implemented.
Fig. 9 is a diagram of example components of a device associated with transmission of messages between POPs.
Fig. 10 is a flowchart of an example process associated with message transmission between POPs.
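The partition-based load-balancing associated with Fig. 6 and recited in claim 8 can be sketched as follows: messages from a tenant associated with a higher priority are balanced among higher-priority partitions, and those from lower-priority tenants among lower-priority partitions, round-robin within each priority class. This is a hypothetical sketch (the names `load_balance`, `tenant_priority`, and `partitions` are illustrative); the per-partition and per-tenant weights of claim 8 are omitted for brevity.

```python
import itertools

def load_balance(messages, tenant_priority, partitions):
    """Assign each (tenant, payload) message to a partition matching the
    tenant's priority class, round-robin within that class."""
    # One round-robin cursor per priority class of partitions.
    cursors = {prio: itertools.cycle(parts) for prio, parts in partitions.items()}
    return [(next(cursors[tenant_priority[tenant]]), tenant, payload)
            for tenant, payload in messages]
```

For example, with `partitions = {"high": ["p1", "p2"], "low": ["p3"]}` and `tenant_priority = {"t1": "high", "t2": "low"}`, messages from `t1` alternate between `p1` and `p2` while messages from `t2` stay on `p3`, so a chatty low-priority tenant cannot queue behind it the traffic of a high-priority tenant.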
DETAILED DESCRIPTION
The following detailed description of example implementations refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements.

In some tenant deployment topologies, messages that are transmitted among POPs without geographic constraints may fail to comply with General Data Protection Regulation (GDPR) requirements. Some tenant deployment topologies may also lack programmability support for dynamic addition and/or removal of tenants and POPs. Furthermore, in SAAS systems containing multiple services that share resources, a chatty tenant can consume most or all resources, which can impact other tenants and ultimately lead to unfair sharing of resources across tenants. For example, operations of one tenant can use resources that would otherwise be allocated for operations of other tenants. For instance, bulk device operations of one tenant can queue (e.g., delay) simple device operation tasks of another tenant. Moreover, in cloud-based systems containing multiple services that share resources, a chatty service can consume