EP-4738876-A1 - COLOR CODED CABLING FOR A.I. DRIVEN OPTICAL NETWORKS
Abstract
A structured cabling system has a plurality of leaf switch ports; a plurality of server node ports; and a plurality of patch cords, trunk cables, and patching components configured to follow a coherent color and labeling scheme to implement optical communication network topologies. Each port of the plurality of server node ports is grouped and assigned a color depending on a node order or location on the rack such that the trail observed at a patch panel for the plurality of leaf switch ports follows a uniform vertical color pattern. The vertical pattern matches a desired logical network topology and is easy to observe, so a risk of misplacement during installation is reduced.
Inventors
- HUANG, YU
- KELLY, BRIAN L.
- HIBNER, MAX W.
- WAGNER, ROBERT R.
- REID, ROBERT A.
- SEDOR, THOMAS M.
Assignees
- Panduit Corp.
Dates
- Publication Date
- 2026-05-06
- Application Date
- 2025-10-31
Claims (15)
- A structured cabling system comprising: a plurality of leaf switch ports; a plurality of server node ports; and a plurality of patch cords, trunk cables and patching components configured to follow a coherent color and labeling scheme to implement optical communication network topologies, wherein each port of the plurality of server node ports is grouped and assigned a color depending on a node order or location on the rack, and further wherein a trail observed at a patch panel for the plurality of leaf switch ports follows a uniform vertical color pattern, the vertical pattern matching a desired logical network topology, and wherein the vertical pattern is easy to observe so that a risk of misplacement during installation is reduced.
- The structured cabling system of claim 1, wherein a colored link organizer groups and maintains subunits or groups of the cables in a specific order to enable fast transposition and simpler deployment.
- The structured cabling system of any preceding claim, wherein the system can be used to scale optical networks from four to thousands of switches.
- The structured cabling system of any preceding claim, wherein the server node ports are grouped using colored link organizers to keep the order of the connections and facilitate an ordered interconnection transposition needed to implement rail-optimized topologies.
- The structured cabling system of any preceding claim, wherein the vertical pattern is made easy to observe by means of at least one of distinct colors, patterns, and codes.
- A structured cabling system comprising: a first computer rack (R1) for a plurality of node servers comprising: a first patching zone (P1) having a plurality of sets of color labels, each set having a respective color of a plurality of colors, and a first node (N1) having color labels having a first color of the plurality of colors, wherein a first plurality of patch cables (200-P1) connects the first node to a first group (P1g1) of ports of the first patching zone having a set of color labels having the first color; a network rack comprising: a first patching panel (P-NL) for connecting the plurality of node servers to a plurality of leaf switches, the first patching panel having a plurality of sets of color labels, each set having a respective color of a plurality of colors, wherein a first trunk cable (200-T1) connects the first group of ports of the first patching zone to a first group (S1) of ports of the first patching panel having a set of color labels having the first color.
- The structured cabling system of claim 6, wherein the first group of ports of the first patching panel is transposed with respect to the first group (P1g1) of ports of the first patching zone.
- The structured cabling system of any of claims 6 to 7, wherein at least one end (300a, 300b, 300c, 300d) of at least one of: the first plurality of patch cables; or the first trunk cable comprises link organizers (300) having the first color.
- The structured cabling system of any of claims 6 to 8, further comprising: a second computer rack (R8) for a plurality of node servers comprising: a second patching zone (P8) having a plurality of sets of color labels, each set having a respective color of the plurality of colors, and a second node (N4) having color labels having a second color of the plurality of colors, wherein a second plurality of patch cables (200-P2) connects the second node to a first group of ports of the second patching zone having a set of color labels having the second color; wherein a second trunk cable (200-T2) connects the first group of ports of the second patching zone to a second group (S32) of ports of the first patching panel having a set of color labels having the second color.
- A structured cabling system comprising: a network rack comprising at least one of: a plurality of leaf switches (L1, ..., L8) having respective first sets of color labels, each first set having a respective color of a plurality of colors, and a first patching panel (P-NL) for connecting a plurality of node servers to the plurality of leaf switches, the first patching panel having a first, optionally rear, side and a second, optionally front, side having a plurality of sets of color labels, each set having a respective color of the plurality of colors, wherein the first side of the first patching panel is configured to receive trunk cables (200-P1, 200P2) from one or more patching zones (R1, ..., R8) of respective one or more computer racks (R1, ..., R8) for the plurality of node servers, wherein a first patch cord (200-P3) connects a first group of ports of the second side of the first patching panel having a set of color labels having a first color to a first group of ports of the leaf switches having a first set of color labels having the first color; or a or the plurality of leaf switches (L1, ..., L8) having respective second sets of color labels, each second set having a respective color of the plurality of colors, and a second patching panel (P-LS) for connecting the plurality of leaf switches to a plurality of spine switches (S1, ..., S8), the second patching panel having a plurality of sets of color labels, each set having a respective color of the plurality of colors, wherein a second patch cord connects a second group of ports of the leaf switches having a second set of color labels having a first color to a first group of ports of the second patching panel having a set of color labels having the first color.
- The structured cabling system of claim 10, wherein the first group of ports of the second patching panel is transposed with respect to the second group of ports of the leaf switches.
- The structured cabling system of any of claims 10 to 11, wherein at least one end (300i, 300j) of at least one of: the first patch cord; or the second patch cord; comprises link organizers (300) having the first color.
- The structured cabling system of any of claims 6 to 9, wherein the network rack is the network rack of any of claims 10 to 12.
- The structured cabling system of any of claims 6 to 13, wherein the colors of the color labels/sets of labels depend on a location of the labels/sets of labels within the structured cabling system.
- The structured cabling system of any of claims 6 to 13, wherein a 'color' comprises at least one of: one or more colors; one or more patterns; or one or more codes, suitable for uniquely identifying interconnections.
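The color assignment and transposition recited in claims 1 and 6 to 9 can be illustrated with a short sketch. The color names, rack count, and node count below are hypothetical and chosen only for illustration; this is a minimal Python model assuming nodes are colored by their position in the rack and trunk cables are transposed at the node-leaf patch panel:

```python
# Hypothetical model of the claimed scheme: each node position within a rack
# is assigned one color, and the transposition at the node-leaf patch panel
# yields a uniform vertical color pattern per leaf-switch column.

COLORS = ["red", "orange", "yellow", "green", "blue", "violet", "brown", "gray"]

def node_color(node_index: int) -> str:
    """Color assigned by node order within the rack (as in claim 1)."""
    return COLORS[node_index % len(COLORS)]

def patch_panel_columns(num_racks: int, nodes_per_rack: int) -> list[list[str]]:
    """Transpose rack-major wiring into leaf-major patch-panel columns.

    Column j of the node-leaf patch panel collects node j of every rack,
    so each column shows a single color that is easy to verify by eye.
    """
    return [
        [node_color(node) for _rack in range(num_racks)]
        for node in range(nodes_per_rack)
    ]

cols = patch_panel_columns(num_racks=4, nodes_per_rack=8)
# Every entry in a column shares one color -> uniform vertical pattern.
assert all(len(set(col)) == 1 for col in cols)
print(cols[0])  # column for the first leaf: one "red" entry per rack
```

A miswired trunk would appear as a single off-color entry in an otherwise uniform column, which is the visual check the claims rely on.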
Description
FIELD

The present disclosure relates to passive elements used in the deployment of data center optical networks and, in particular, to methods and apparatus for efficient and scalable organization of optical fabrics for Artificial Intelligence (AI) data center networks.

BACKGROUND

An optical interconnection assembly and method are disclosed for the deployment and scaling of optical networks employing CLOS topologies, introduced by Charles Clos around 1952. Spine-and-Leaf, a type of CLOS topology, is extensively used in data centers. A variant of the CLOS topology used in AI networks, named rail-optimized networks, can further improve network performance by leveraging the high bandwidth and low latency of the internal scale-up networks of the compute nodes, such as NVLINK, to minimize hops and optical-to-electrical conversions across the switches of the network. Both topologies can become complex to deploy for large networks. Specifically, the rail-optimized topology requires a specific interconnection mapping to optimize network performance. The apparatus and methods disclosed here facilitate deployment of the mentioned fabric topologies, enabling simpler installation, maintenance, and future scaling.

AI/Machine Learning (AI/ML) systems can necessitate immense processing capacity, bandwidth, low latency, and especially low "tail latency" to handle the processing of large foundational models during training or inference. AI/ML systems use specialized networks. Typically, internal scale-up networks such as NVLINK connect a relatively small number of GPUs, typically 8 to 72, with very high bandwidth electrical links. To expand out to a larger number of GPUs, the optical backend network, or scale-out network, is used. The backend network typically utilizes InfiniBand (IB) or Ethernet protocols. When the latter is used, the Ethernet protocol can include additional traffic management enhancements for optimizing network performance.
Currently, the back end of most AI/ML systems relies on a large number of short-distance connections, typically using MPO multifiber connectors/adapters. The common network topologies for these systems are Spine/Leaf or rail-optimized fabrics, used for node-to-switch and switch-to-switch interconnections. Although traditional HPC topologies like Torus, Hypercube, Dragonfly, and Slim Fly are being explored, they are not yet widely adopted in AI/ML networks. Rail-optimized fabrics improve network performance by leveraging the high-speed internal links within nodes (scale-up) to reduce the number of hops in the scale-out network. Figs. 1A and 1B show an example where GPU 0 in Node A connects to GPU 7 in Node B through Leaf and Spine switches. Without rail optimization, communication between GPUs, such as GPU 1 in Node A and GPU 7 in Node B, would require multiple hops through various switches, as shown in Fig. 1A: one hop at Leaf 0 to connect paths Pa-1 to Pa-2, one more hop at a Spine switch to connect Pa-2 to Pa-3, and another at Leaf 7 to connect Pa-3 to Pa-4. Each hop requires electrical-to-optical and optical-to-electrical conversion, FEC encoding/decoding, and switch queuing, all of which add latency. Rail-optimized networks can reduce this latency when GPUs with the same number are connected to the same leaf switch. For example, GPU 0 can send data directly to GPU 7 within the same node using its internal high-bandwidth link, while communication between GPU 7 in Node A and GPU 7 in Node B requires fewer hops, as shown in Fig. 1B. Deploying a rail-optimized network requires precise connection and cable mapping. Structured cabling using the labeling and coloring apparatus and methods described in this disclosure provides an efficient way to organize the necessary connections and deploy the AI network for optimal network performance.
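The hop-count advantage described above can be sketched in a few lines. The function names and the leaf/spine hop counts below are illustrative assumptions modeling the Fig. 1A/1B discussion, not part of the disclosure itself:

```python
# Hypothetical sketch of the rail-optimized mapping discussed above:
# GPU i ("rail" i) in every node uplinks to leaf switch i, so same-rail
# GPUs in different nodes communicate through a single shared leaf.

def rail_leaf(gpu_index: int) -> int:
    """In a rail-optimized fabric, GPU i of every node attaches to leaf i."""
    return gpu_index

def hops_between(node_a: str, gpu_a: int, node_b: str, gpu_b: int) -> int:
    """Count scale-out switch hops under the rail-optimized scheme.

    Same node          -> 0 hops (internal scale-up link, e.g. NVLINK)
    Same rail (GPU #)  -> 1 hop  (shared leaf switch)
    Different rails    -> 3 hops (leaf -> spine -> leaf, as in Fig. 1A)
    """
    if node_a == node_b:
        return 0
    if rail_leaf(gpu_a) == rail_leaf(gpu_b):
        return 1
    return 3

# GPU 7 in Node A to GPU 7 in Node B: one hop through leaf 7
print(hops_between("A", 7, "B", 7))  # -> 1
# GPU 1 in Node A to GPU 7 in Node B: leaf -> spine -> leaf
print(hops_between("A", 1, "B", 7))  # -> 3
```

In practice the sender would first forward traffic over the intra-node scale-up link to the GPU on the matching rail, reducing the cross-rail case to the one-hop case.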
Additionally, the disclosed methods and apparatuses simplify the maintenance of the network and facilitate future scaling of the AI system.

SUMMARY

A structured cabling system has a plurality of leaf switch ports; a plurality of server node ports; and a plurality of patch cords, trunk cables, and patching components configured to follow a coherent color and labeling scheme to implement optical communication network topologies. Each port of the plurality of server node ports is grouped and assigned a color depending on a node order or location on the rack such that the trail observed at a patch panel for the plurality of leaf switch ports follows a uniform vertical color pattern. The vertical pattern matches a desired logical network topology and is easy to observe, so a risk of misplacement during installation is reduced.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1A illustrates a rail-optimized topology to reduce communication latency in AI networks.
Fig. 1B also illustrates a rail-optimized topology to reduce communication latency in AI networks.
Fig. 2A shows a logical topology example of an AI system with 32 nodes.
Fig. 2B shows a phy