US-12627730-B2 - Cloud agnostic, cross virtual private cloud (VPC) storage cluster deployments on cloud environments
Abstract
A method for managing networking among virtual private clouds (VPCs) includes: deploying a storage array to a storage array VPC; analyzing information to determine which networking option needs to be implemented to map each client VPC to the storage array VPC; making, after the analyzing, a first determination that no networking option is already implemented; making, based on the first determination and the information, a second determination that a first networking option needs to be implemented; calling a plugin with a corresponding identifier of a target client VPC and a mapping; identifying the storage array VPC and the target client VPC; defining a multi-VPC elastic network interface (ENI) that is exposed to the target client VPC using a first client VPC IP address; and initiating notification of a user that networking between the target client VPC and the storage array VPC is established.
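The abstract's control flow — check whether networking is already in place, otherwise pick and invoke a networking plugin, then report success — can be sketched in Python. Everything below (the plugin callables, argument shapes, and preference order) is an illustrative assumption, not an actual vendor API or the claimed implementation.

```python
# Illustrative sketch only: plugin names, argument shapes, and the
# preference order are assumptions, not part of the patented system.

def establish_networking(storage_vpc, client_vpcs, ip_mapping, plugins,
                         already_implemented=None):
    """Map each client VPC to the storage array VPC.

    `plugins` is an ordered list of (name, callable) pairs; each callable
    attempts to wire one client VPC to the storage array VPC and returns
    True on success. The order mirrors the three independent claims:
    multi-VPC ENI first, then private link, then NAT.
    """
    already_implemented = already_implemented or {}
    notifications = []
    for client_vpc in client_vpcs:
        # First determination: is a networking option already in place?
        if client_vpc in already_implemented:
            notifications.append((client_vpc, already_implemented[client_vpc]))
            continue
        # Second determination: try options in preference order and
        # stop at the first one that can be implemented.
        for name, plugin in plugins:
            if plugin(client_vpc, storage_vpc, ip_mapping):
                # Notification that networking is established.
                notifications.append((client_vpc, name))
                break
    return notifications


# Usage: the first option is unavailable, so the fallback is chosen.
plugins = [
    ("multi_vpc_eni", lambda c, s, m: False),  # e.g., not available here
    ("private_link", lambda c, s, m: True),    # fallback succeeds
]
result = establish_networking("storage-vpc", ["client-a"], {}, plugins)
```

Modeling the plugins as interchangeable callables reflects the document's framing of the networking options as pluggable, cloud-agnostic alternatives behind a single orchestration flow.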
Inventors
- Shoham Levy
- Michal Davidson
Assignees
- DELL PRODUCTS L.P.
Dates
- Publication Date: 2026-05-12
- Application Date: 2024-09-30
Claims (20)
- 1. A method for managing networking among virtual private clouds (VPCs), the method comprising: deploying, upon receiving a request from a user, a storage array to a storage array VPC, wherein a metadata manager (MDM) and each storage data server (SDS) hosted within the storage array VPC are assigned an Internet Protocol (IP) address; defining a set of client VPCs on the MDM; analyzing information provided within the request to determine which networking option of a plurality of networking options needs to be implemented to map each client VPC of the set of client VPCs to the storage array VPC; making, after the analyzing, a first determination that none of the plurality of networking options are already implemented; making, based on the first determination and the information, a second determination that a first networking option of the plurality of networking options needs to be implemented; calling, based on the second determination, a multi-VPC elastic network interface (ENI) plugin with a corresponding identifier of a target client VPC of the set of client VPCs and a mapping between client VPC IP addresses and storage array VPC IP addresses; identifying, via an inter-VPC networking command, the storage array VPC and the target client VPC; defining, for each SDS hosted within the storage array VPC, a multi-VPC ENI that is exposed to the target client VPC using a first client VPC IP address; registering a storage data client (SDC) to the target client VPC; and initiating, in response to the request and via a user interface, notification of the user that networking between the target client VPC and the storage array VPC is established, wherein, via the networking, the SDC reads data from or writes data to an SDS of the storage array VPC using a second client VPC IP address.
- 2. The method of claim 1, wherein the SDS is a storage server, wherein, after installing an SDS application on a server, the user converts the server into the storage server.
- 3. The method of claim 1, wherein the SDC is a consumer server, wherein, after installing an SDC application on a server, the user converts the server into the consumer server, wherein the SDC application is a kernel-level driver.
- 4. The method of claim 3, wherein the SDC calls the MDM to obtain an SDS mapping specifying that the SDC needs to connect to the SDS to read data from or write data to the storage array, wherein the storage array represents virtual pools of block storage.
- 5. The method of claim 1, wherein the second determination is performed by comparing the first networking option of the plurality of networking options to a second networking option of the plurality of networking options, a third networking option of the plurality of networking options, and a fourth networking option of the plurality of networking options based on each networking option's performance details with respect to user-defined parameters specified in the information.
- 6. The method of claim 5, wherein the user-defined parameters specify at least one selected from a group consisting of cost of implementing a particular networking option using a related plugin, a number of VPCs that the particular networking option supports, data latency performance of the particular networking option, data throughput performance of the particular networking option, and availability of the particular networking option in the user's geographic location.
- 7. The method of claim 1, wherein the storage array VPC executes on a first geographic location, wherein the target client VPC executes on a second geographic location, wherein the first geographic location and the second geographic location are distinct locations.
- 8. A method for managing networking among virtual private clouds (VPCs), the method comprising: deploying, upon receiving a request from a user, a storage array to a storage array VPC, wherein a metadata manager (MDM) and each storage data server (SDS) hosted within the storage array VPC are assigned an Internet Protocol (IP) address; defining a set of client VPCs on the MDM; analyzing information provided within the request to determine which networking option of a plurality of networking options needs to be implemented to map each client VPC of the set of client VPCs to the storage array VPC; making, after the analyzing, a first determination that none of the plurality of networking options are already implemented; making, based on the first determination and the information, a second determination that a first networking option of the plurality of networking options is not available to be implemented; making, based on the second determination, a third determination that a second networking option of the plurality of networking options needs to be implemented; calling, based on the third determination, a private link plugin with a corresponding identifier of a target client VPC of the set of client VPCs and a mapping between client VPC IP addresses and storage array VPC IP addresses of storage data targets (SDTs); identifying, via an inter-VPC networking command, the storage array VPC and the target client VPC; defining, based on the mapping, a load balancer in the storage array VPC and a private link from the target client VPC to the load balancer; registering a storage data client (SDC) to the target client VPC; and initiating, in response to the request and via a user interface, notification of the user that networking between the target client VPC and the storage array VPC is established, wherein, via the networking, the SDC reads data from or writes data to an SDS of the storage array VPC using a client VPC IP address over the private link.
- 9. The method of claim 8, wherein the SDS is a storage server, wherein, after installing an SDS application on a server, the user converts the server into the storage server.
- 10. The method of claim 8, wherein the SDC is a consumer server, wherein, after installing an SDC application on a server, the user converts the server into the consumer server, wherein the SDC application is a kernel-level driver.
- 11. The method of claim 10, wherein the SDC calls the MDM to obtain an SDS mapping specifying that the SDC needs to connect to the SDS to read data from or write data to the storage array, wherein the storage array represents virtual pools of block storage.
- 12. The method of claim 8, wherein the third determination is performed by comparing the second networking option of the plurality of networking options to a third networking option of the plurality of networking options and a fourth networking option of the plurality of networking options based on each networking option's performance details with respect to user-defined parameters specified in the information.
- 13. The method of claim 12, wherein the user-defined parameters specify at least one selected from a group consisting of cost of implementing a particular networking option using a related plugin, a number of VPCs that the particular networking option supports, data latency performance of the particular networking option, data throughput performance of the particular networking option, and availability of the particular networking option in the user's geographic location.
- 14. The method of claim 8, wherein the storage array VPC executes on a first geographic location, wherein the target client VPC executes on a second geographic location, wherein the first geographic location and the second geographic location are distinct locations.
- 15. A method for managing networking among virtual private clouds (VPCs), the method comprising: deploying, upon receiving a request from a user, a storage array to a storage array VPC, wherein a metadata manager (MDM) and each storage data server (SDS) hosted within the storage array VPC are assigned an Internet Protocol (IP) address; defining a set of client VPCs on the MDM; analyzing information provided within the request to determine which networking option of a plurality of networking options needs to be implemented to map each client VPC of the set of client VPCs to the storage array VPC; making, after the analyzing, a first determination that none of the plurality of networking options are already implemented; making, based on the first determination and the information, a second determination that a first networking option of the plurality of networking options is not available to be implemented; making, based on the second determination, a third determination that a second networking option of the plurality of networking options is not available to be implemented; making, based on the third determination, a fourth determination that a third networking option of the plurality of networking options needs to be implemented; defining, based on the fourth determination, an IP-to-port multiplexer in each client VPC and a port-to-IP demultiplexer in the storage array VPC; assigning an IP address to the port-to-IP demultiplexer, wherein, based on the assigning and for each SDS in the storage array VPC, a mapping on the port-to-IP demultiplexer is defined, wherein, based on the mapping, a load balancer is defined in the storage array VPC; calling, after the assigning, a private link plugin with a corresponding identifier of a target client VPC of the set of client VPCs and a second mapping between client VPC IP addresses and storage array VPC IP addresses; identifying, via an inter-VPC networking command, the storage array VPC and the target client VPC; defining, based on the second mapping, a third mapping from a client VPC IP address of the target client VPC to a related port's IP address in the storage array; defining, based on the third mapping, a private link from an IP-to-port multiplexer of the target client VPC to the load balancer; registering a storage data client (SDC) to the target client VPC; and initiating, in response to the request and via a user interface, notification of the user that networking between the target client VPC and the storage array VPC is established, wherein, via the networking, the SDC reads data from or writes data to an SDS of the storage array VPC using the client VPC IP address over the private link.
- 16. The method of claim 15, wherein the SDS is a storage server, wherein, after installing an SDS application on a server, the user converts the server into the storage server.
- 17. The method of claim 15, wherein the SDC is a consumer server, wherein, after installing an SDC application on a server, the user converts the server into the consumer server, wherein the SDC application is a kernel-level driver.
- 18. The method of claim 17, wherein the SDC calls the MDM to obtain an SDS mapping specifying that the SDC needs to connect to the SDS to read data from or write data to the storage array, wherein the storage array represents virtual pools of block storage.
- 19. The method of claim 15, wherein the third determination is performed by comparing the second networking option of the plurality of networking options to the third networking option of the plurality of networking options and a fourth networking option of the plurality of networking options based on each networking option's performance details with respect to user-defined parameters specified in the information.
- 20. The method of claim 19, wherein the user-defined parameters specify at least one selected from a group consisting of cost of implementing a particular networking option using a related plugin, a number of VPCs that the particular networking option supports, data latency performance of the particular networking option, data throughput performance of the particular networking option, and availability of the particular networking option in the user's geographic location.
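The dependent claims (5-6, 12-13, and 19-20) describe comparing the networking options against user-defined parameters: cost, number of supported VPCs, data latency, data throughput, and regional availability. A minimal Python sketch of such a comparison follows; the field names, weights, and option data are invented for illustration, since the patent does not specify a concrete scoring formula.

```python
# Illustrative only: field names, weights, and option data are invented;
# the claims name the comparison parameters but not a scoring formula.

def choose_option(options, params):
    """Return the name of the best available networking option.

    An option is filtered out if it is unavailable in the user's region
    or supports too few VPCs; the remaining options are ranked so that
    lower cost and latency, and higher throughput, score better.
    """
    candidates = [
        o for o in options
        if params["region"] in o["regions"] and o["max_vpcs"] >= params["num_vpcs"]
    ]
    if not candidates:
        return None  # no networking option satisfies the hard constraints

    def score(option):
        # Weighted sum: cost and latency penalize, throughput rewards.
        return (params.get("cost_weight", 1.0) * option["cost"]
                + params.get("latency_weight", 1.0) * option["latency_ms"]
                - params.get("throughput_weight", 1.0) * option["throughput_gbps"])

    return min(candidates, key=score)["name"]


# Usage: in "eu-west" the ENI option is unavailable, so the comparison
# selects the private link option instead.
options = [
    {"name": "multi_vpc_eni", "regions": {"us-east"}, "max_vpcs": 10,
     "cost": 1.0, "latency_ms": 0.5, "throughput_gbps": 25.0},
    {"name": "private_link", "regions": {"us-east", "eu-west"}, "max_vpcs": 100,
     "cost": 3.0, "latency_ms": 0.8, "throughput_gbps": 10.0},
]
best = choose_option(options, {"region": "eu-west", "num_vpcs": 5})
```

Treating region and VPC-count as hard filters while cost, latency, and throughput trade off in a weighted score is one plausible reading of "comparing ... based on each networking option's performance details with respect to user-defined parameters"; other rankings would fit the claim language equally well.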
Description
BACKGROUND
Devices are often capable of performing certain functionalities that other devices are not configured to perform, or are not capable of performing. In such scenarios, it may be desirable to adapt one or more systems to enhance the functionalities of devices that cannot perform those functionalities.
BRIEF DESCRIPTION OF DRAWINGS
Certain embodiments disclosed herein will be described with reference to the accompanying drawings. However, the accompanying drawings illustrate only certain aspects or implementations of one or more embodiments disclosed herein by way of example, and are not meant to limit the scope of the claims.
FIG. 1.1 shows a diagram of a system in accordance with one or more embodiments disclosed herein.
FIG. 1.2 shows an example networking established among VPCs using a multi-VPC elastic network interface (ENI) plugin in accordance with one or more embodiments disclosed herein.
FIG. 1.3 shows an example networking established among VPCs using a private link plugin in accordance with one or more embodiments disclosed herein.
FIG. 1.4 shows an example networking established among VPCs using a private link plugin in accordance with one or more embodiments disclosed herein.
FIG. 1.5 shows an example networking established among VPCs using a network address translation (NAT) plugin in accordance with one or more embodiments disclosed herein.
FIGS. 2.1-2.6 show a method for managing networking among VPCs in accordance with one or more embodiments disclosed herein.
FIG. 3 shows a diagram of a computing device in accordance with one or more embodiments disclosed herein.
DETAILED DESCRIPTION
Specific embodiments disclosed herein will now be described in detail with reference to the accompanying figures. In the following detailed description of the embodiments disclosed herein, numerous specific details are set forth in order to provide a more thorough understanding of one or more embodiments disclosed herein.
However, it will be apparent to one of ordinary skill in the art that the one or more embodiments disclosed herein may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.

In the following description of the figures, any component described with regard to a figure, in various embodiments disclosed herein, may be equivalent to one or more like-named components described with regard to any other figure. For brevity, descriptions of these components will not be repeated with regard to each figure. Thus, each and every embodiment of the components of each figure is incorporated by reference and assumed to be optionally present within every other figure having one or more like-named components. Additionally, in accordance with various embodiments disclosed herein, any description of the components of a figure is to be interpreted as an optional embodiment, which may be implemented in addition to, in conjunction with, or in place of the embodiments described with regard to a corresponding like-named component in any other figure.

Throughout this application, elements of figures may be labeled as A to N. As used herein, the aforementioned labeling means that the element may include any number of items, and does not require that the element include the same number of elements as any other item labeled as A to N. For example, a data structure may include a first element labeled as A and a second element labeled as N. This labeling convention means that the data structure may include any number of the elements. A second data structure, also labeled as A to N, may also include any number of elements. The number of elements of the first data structure, and the number of elements of the second data structure, may be the same or different.

Throughout the application, ordinal numbers (e.g., first, second, third, etc.) may be used as an adjective for an element (i.e., any noun in the application). The use of ordinal numbers is not to imply or create any particular ordering of the elements nor to limit any element to being only a single element unless expressly disclosed, such as by the use of the terms “before”, “after”, “single”, and other such terminology. Rather, the use of ordinal numbers is to distinguish between the elements. By way of an example, a first element is distinct from a second element, and the first element may encompass more than one element and succeed (or precede) the second element in an ordering of elements.

As used herein, the phrase operatively connected, or operative connection, means that there exists between elements/components/devices a direct or indirect connection that allows the elements to interact with one another in some way. For example, the phrase “operatively connected” may refer to any direct connection (e.g., wired directly between two devices or components) or indirect connection (e.g., wired and/or wireless connections between any number of