US-12621963-B2 - Server air cross-transfer blanking for datacenter cooling systems
Abstract
Systems and methods for cooling a datacenter are disclosed. In at least one embodiment, one or more blanks may be associated with a motorized subsystem and can be used with one or more server openings on a rack, so that the motorized subsystem can cause the one or more blanks to close or open an individual server opening based in part on a change within the individual server opening.
Inventors
- Ali Heydari
Assignees
- NVIDIA CORPORATION
Dates
- Publication Date: 2026-05-05
- Application Date: 2021-07-06
Claims (20)
- 1 . A datacenter cooling system, comprising: one or more blanks, wherein the one or more blanks are sized to block an individual server opening; a motorized subsystem, wherein the one or more blanks are movable by the motorized subsystem with respect to the individual server opening; and one or more sensors mounted proximate the individual server opening and configured to detect an absence of a server tray in the individual server opening or a presence of the server tray in the individual server opening, wherein the motorized subsystem is to cause the one or more blanks to block airflow through a cross-section of the individual server opening based on first sensor data from the one or more sensors indicating the absence of the server tray from the individual server opening or to unblock airflow through the cross-section of the individual server opening based on second sensor data from the one or more sensors indicating the presence of the server tray in the individual server opening.
- 2 . The datacenter cooling system of claim 1 , further comprising: stowing areas to stow the one or more blanks when opened with respect to the individual server opening.
- 3 . The datacenter cooling system of claim 1 , further comprising: at least one sensor associated with the individual server opening, the at least one sensor to enable determination of the presence or the absence of the server tray in the individual server opening.
- 4 . The datacenter cooling system of claim 3 , wherein the at least one sensor is further to enable determination of a change in air pressure or temperature associated with the individual server opening.
- 5 . The datacenter cooling system of claim 1 , further comprising: at least one processor to receive sensor inputs from at least one sensor, the at least one processor to activate or deactivate the motorized subsystem to cause the one or more blanks to close or to open the individual server opening based in part on the sensor inputs.
- 6 . The datacenter cooling system of claim 5 , further comprising: one or more neural networks to receive the sensor inputs from at least one sensor associated with the individual server opening, the one or more neural networks to infer the presence or the absence of the server tray in the individual server opening.
- 7 . The datacenter cooling system of claim 1 , further comprising: at least one physical connector coupled with the motorized subsystem to enable movement of the one or more blanks based in part on the motorized subsystem activated in one or more directions.
- 8 . The datacenter cooling system of claim 1 , further comprising: a default configuration associated with the motorized subsystem or the one or more blanks, the default configuration to enable the one or more blanks to be in an open or a closed configuration with respect to the individual server opening and with the motorized subsystem being in a deactivated configuration.
- 9 . The datacenter cooling system of claim 1 , further comprising: an override configuration associated with the one or more blanks to enable override of the motorized subsystem.
- 10 . The datacenter cooling system of claim 9 , further comprising: at least one processor to cause a change in an air cooling subsystem to reduce an air pressure difference within the individual server opening during opening or closing of the individual server opening by the motorized subsystem.
- 11 . A processor comprising one or more circuits, the one or more circuits to receive, from one or more sensors mounted proximate an individual server opening in a rack and configured to detect an absence of a server tray in the individual server opening or a presence of the server tray in the individual server opening, a first sensor input indicative of the presence of the server tray in the individual server opening or a second sensor input indicative of the absence of the server tray from the individual server opening, the processor to cause a motorized subsystem to cause one or more blanks to block airflow through a cross-section of the individual server opening based on the second sensor input indicating the absence of the server tray from the individual server opening or to unblock airflow through the cross-section of the individual server opening based on the first sensor input indicating the presence of the server tray in the individual server opening.
- 12 . The processor of claim 11 , further comprising: an input to receive at least one of the first sensor input or the second sensor input, the first sensor input or the second sensor input associated with the presence or the absence of the server tray in the individual server opening.
- 13 . The processor of claim 12 , wherein at least one of the first sensor input or the second sensor input is further associated with a change in air pressure or temperature associated with the individual server opening.
- 14 . The processor of claim 11 , further comprising: an output to the motorized subsystem, the output to indicate a change for the one or more blanks or to indicate a default position for the one or more blanks.
- 15 . The processor of claim 11 , further comprising: one or more neural networks to receive at least one of the first sensor input or the second sensor input and to infer the presence or the absence of the server tray in the individual server opening.
- 16 . A method for a datacenter cooling system, comprising: providing one or more blanks actuatable by a motorized subsystem; determining, using one or more sensors mounted proximate an individual server opening in a rack, a presence or an absence of a server tray in the individual server opening, wherein the one or more blanks are sized to block the individual server opening; and enabling the motorized subsystem to cause the one or more blanks to block airflow through a cross-section of the individual server opening based on first sensor data from the one or more sensors indicating the absence of the server tray from the individual server opening or to unblock airflow through the cross-section of the individual server opening based on second sensor data from the one or more sensors indicative of the presence of the server tray in the individual server opening.
- 17 . The method of claim 16 , further comprising: enabling stowing areas to stow the one or more blanks when opened with respect to the individual server opening.
- 18 . The method of claim 16 , further comprising: receiving, in at least one processor, at least one sensor input associated with at least one sensor that is associated with the individual server opening; determining, using the at least one processor, the presence or the absence of the server tray in the individual server opening; and causing activation or deactivation of the motorized subsystem.
- 19 . The method of claim 18 , further comprising: enabling one or more neural networks to receive the at least one sensor input from the at least one sensor; and enabling the one or more neural networks to infer the presence or the absence of the server tray in the individual server opening.
- 20 . The method of claim 16 , further comprising: enabling a default configuration or an override configuration to be associated with the motorized subsystem or the one or more blanks, the default configuration to enable the one or more blanks to be in an open or a closed configuration with respect to the individual server opening and with the motorized subsystem being in a deactivated configuration, and the override configuration associated with the one or more blanks to enable override of the motorized subsystem.
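The control behavior recited in claims 1, 5, 8, and 9 (close the blank when the tray is absent, open it when present, with a manual override that leaves the motorized subsystem idle) can be sketched as a small state function. This is a minimal illustrative model, not the patented implementation; all names and types here are assumptions introduced for the example.

```python
from dataclasses import dataclass
from enum import Enum


class BlankPosition(Enum):
    OPEN = "open"      # airflow unblocked through the server opening
    CLOSED = "closed"  # airflow blocked through the server opening


@dataclass
class ServerOpening:
    """Hypothetical state of one server opening in a rack."""
    tray_present: bool                          # latest tray-presence sensor reading
    blank: BlankPosition = BlankPosition.OPEN   # current blank position
    override: bool = False                      # override configuration (claim 9)


def control_blank(opening: ServerOpening) -> BlankPosition:
    """Decide the blank position from tray-presence sensor data.

    Mirrors the claimed logic: an absent tray closes the blank to stop
    cross-transfer airflow; a present tray opens it so the server receives
    cooling air. With the override engaged, the motor stays deactivated
    and the blank keeps its current (default) position.
    """
    if opening.override:
        return opening.blank  # motorized subsystem not activated
    opening.blank = (
        BlankPosition.CLOSED if not opening.tray_present else BlankPosition.OPEN
    )
    return opening.blank
```

For example, `control_blank(ServerOpening(tray_present=False))` yields `BlankPosition.CLOSED`, while an opening with `override=True` retains whatever position it already held.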
Description
FIELD
At least one embodiment pertains to cooling systems, including systems and methods for operating those cooling systems. In at least one embodiment, such a cooling system can be utilized in a datacenter containing one or more racks or computing servers.
BACKGROUND
Datacenter cooling systems use fans to circulate air through server components. Certain supercomputers or other high-capacity computers may use water or other cooling systems instead of air-cooling systems to draw heat away from the server components or racks of the datacenter to an area external to the datacenter. The cooling systems may include a chiller within the datacenter area, which may include an area external to the datacenter itself. Further, the area external to the datacenter may include a cooling tower or other external heat exchanger that receives heated coolant from the datacenter and disperses the heat by forced air or other means to the environment (or an external cooling medium). The cooled coolant is recirculated back into the datacenter. The chiller and the cooling tower together form a chilling facility.
BRIEF DESCRIPTION OF THE DRAWINGS
- FIG. 1 illustrates an exemplary datacenter cooling system subject to improvements described in at least one embodiment;
- FIG. 2 illustrates server-level features and cold plate details associated with server air cross-transfer blanking for a datacenter cooling system, according to at least one embodiment;
- FIG. 3 illustrates rack-level features and cold plate details associated with server air cross-transfer blanking for a datacenter cooling system, according to at least one embodiment;
- FIG. 4 illustrates datacenter-level features associated with server air cross-transfer blanking for a datacenter cooling system, according to at least one embodiment;
- FIG. 5 illustrates a method associated with the datacenter cooling system of FIGS. 2-4, according to at least one embodiment;
- FIG. 6 illustrates a distributed system, in accordance with at least one embodiment;
- FIG. 7 illustrates an exemplary datacenter, in accordance with at least one embodiment;
- FIG. 8 illustrates a client-server network, in accordance with at least one embodiment;
- FIG. 9 illustrates a computer network, in accordance with at least one embodiment;
- FIG. 10A illustrates a networked computer system, in accordance with at least one embodiment;
- FIG. 10B illustrates a networked computer system, in accordance with at least one embodiment;
- FIG. 10C illustrates a networked computer system, in accordance with at least one embodiment;
- FIG. 11 illustrates one or more components of a system environment in which services may be offered as third-party network services, in accordance with at least one embodiment;
- FIG. 12 illustrates a cloud computing environment, in accordance with at least one embodiment;
- FIG. 13 illustrates a set of functional abstraction layers provided by a cloud computing environment, in accordance with at least one embodiment;
- FIG. 14 illustrates a supercomputer at a chip level, in accordance with at least one embodiment;
- FIG. 15 illustrates a supercomputer at a rack module level, in accordance with at least one embodiment;
- FIG. 16 illustrates a supercomputer at a rack level, in accordance with at least one embodiment;
- FIG. 17 illustrates a supercomputer at a whole system level, in accordance with at least one embodiment;
- FIG. 18A illustrates inference and/or training logic, in accordance with at least one embodiment;
- FIG. 18B illustrates inference and/or training logic, in accordance with at least one embodiment;
- FIG. 19 illustrates training and deployment of a neural network, in accordance with at least one embodiment;
- FIG. 20 illustrates an architecture of a system of a network, in accordance with at least one embodiment;
- FIG. 21 illustrates an architecture of a system of a network, in accordance with at least one embodiment;
- FIG. 22 illustrates a control plane protocol stack, in accordance with at least one embodiment;
- FIG. 23 illustrates a user plane protocol stack, in accordance with at least one embodiment;
- FIG. 24 illustrates components of a core network, in accordance with at least one embodiment;
- FIG. 25 illustrates components of a system to support network function virtualization (NFV), in accordance with at least one embodiment;
- FIG. 26 illustrates a processing system, in accordance with at least one embodiment;
- FIG. 27 illustrates a computer system, in accordance with at least one embodiment;
- FIG. 28 illustrates a system, in accordance with at least one embodiment;
- FIG. 29 illustrates an exemplary integrated circuit, in accordance with at least one embodiment;
- FIG. 30 illustrates a computing system, according to at least one embodiment;
- FIG. 31 illustrates an APU, in accordance with at least one embodiment;
- FIG. 32 illustrates a CPU, in accordance with at least one embodiment;
- FIG. 33 illustrates an exemplary accelerator integration slice, in accordance with at least one embodiment;
- FIGS. 34A-34B illu
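The coolant loop in the BACKGROUND (heated coolant leaves the datacenter, a cooling tower or heat exchanger rejects the heat, cooled coolant recirculates) follows the standard sensible-heat relation Q = ṁ · c_p · ΔT. The sketch below computes the heat rejected at the cooling tower from mass flow and temperature drop; the water properties and figures are illustrative assumptions, not values from the patent.

```python
def heat_rejected_kw(flow_kg_s: float, delta_t_k: float,
                     cp_kj_per_kg_k: float = 4.186) -> float:
    """Sensible heat carried by the coolant stream, in kW.

    Q = m_dot * c_p * delta_T, with m_dot in kg/s, c_p in kJ/(kg*K),
    and delta_T in K. The default c_p assumes a water coolant
    (~4.186 kJ/(kg*K)); a glycol mix would use a different value.
    """
    return flow_kg_s * cp_kj_per_kg_k * delta_t_k
```

For instance, 10 kg/s of water returning 12 K warmer than it was supplied carries roughly 502 kW to the chilling facility.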