US-12628649-B1 - Integrated cooling assembly with upper and lower channels and co-packaged optics
Abstract
Embodiments herein provide for fluidic cooling assemblies embedded within a device package and related manufacturing methods. In one embodiment, an integrated cooling assembly comprises a semiconductor device and a cold plate attached to a backside of the semiconductor device. The cold plate comprises an upper portion disposed vertically adjacent to the backside of the semiconductor device. The cold plate further comprises a lower portion disposed between the upper portion of the cold plate and the backside of the semiconductor device. The upper portion comprises upper coolant channels defined by upper cavity sidewalls. The lower portion comprises lower coolant channels defined by lower cavity sidewalls.
Inventors
- Belgacem Haba
- Ron Zhang
- Rasit Onur Topaloglu
Assignees
- ADEIA SEMICONDUCTOR BONDING TECHNOLOGIES INC.
Dates
- Publication Date
- May 12, 2026
- Application Date
- Apr. 30, 2025
Claims (16)
- 1 . An integrated cooling assembly comprising: a first semiconductor device; a second semiconductor device; a third semiconductor device adjacent to the second semiconductor device; and a cold plate attached to a backside of the first semiconductor device, wherein: the cold plate comprises an upper portion disposed vertically adjacent to the backside of the first semiconductor device; the cold plate comprises a lower portion disposed between the upper portion of the cold plate and the backside of the first semiconductor device; the upper portion comprises upper coolant channels defined by upper cavity sidewalls; the lower portion comprises lower coolant channels defined by lower cavity sidewalls; the cold plate is disposed between the first semiconductor device and the second semiconductor device; the cold plate comprises a channel cover disposed on the upper portion of the cold plate to fluidly seal the upper coolant channels; the second and third semiconductor devices are attached to an upper surface of the channel cover; and the second semiconductor device is communicatively connected to the third semiconductor device by an interconnect disposed in the channel cover.
- 2 . The integrated cooling assembly of claim 1 , wherein the upper coolant channels are separated from the lower coolant channels.
- 3 . The integrated cooling assembly of claim 1 , wherein: the cold plate comprises a lower side facing the backside of the first semiconductor device and an upper side opposite the lower side; the lower side is directly bonded to the backside of the first semiconductor device; and the upper side is directly bonded to a backside of the second semiconductor device.
- 4 . The integrated cooling assembly of claim 1 , wherein the upper coolant channels are separated from the lower coolant channels by the upper cavity sidewalls and the lower cavity sidewalls.
- 5 . The integrated cooling assembly of claim 1 , wherein the upper coolant channels are separated from the lower coolant channels by a dielectric material disposed between the upper portion and the lower portion.
- 6 . The integrated cooling assembly of claim 5 , wherein a thickness of the dielectric material between the upper portion and the lower portion is 1 μm-10 μm.
- 7 . The integrated cooling assembly of claim 5 , wherein the upper portion is attached to the lower portion by the dielectric material using direct dielectric bonds or direct hybrid bonds.
- 8 . The integrated cooling assembly of claim 1 , wherein the first semiconductor device comprises at least one of a central processing unit (CPU), a graphical processing unit (GPU), a neural processing unit (NPU), a tensor processing unit (TPU), and high-bandwidth memory (HBM).
- 9 . The integrated cooling assembly of claim 1 , wherein the second semiconductor device comprises at least one of a central processing unit (CPU), a graphical processing unit (GPU), a neural processing unit (NPU), a tensor processing unit (TPU), and high-bandwidth memory (HBM).
- 10 . The integrated cooling assembly of claim 1 , wherein the first semiconductor device is different than the second semiconductor device.
- 11 . The integrated cooling assembly of claim 1 , wherein the second semiconductor device is a photonic integrated circuit (PIC).
- 12 . The integrated cooling assembly of claim 1 , wherein the channel cover is attached to the upper portion by direct dielectric bonds, direct hybrid bonds, or adhesive.
- 13 . The integrated cooling assembly of claim 1 , further comprising a fourth semiconductor device, wherein the fourth semiconductor device is disposed on the second semiconductor device.
- 14 . The integrated cooling assembly of claim 1 , wherein: the second semiconductor device is disposed vertically adjacent to first upper coolant channels; the third semiconductor device is disposed vertically adjacent to second upper coolant channels; and the first upper coolant channels are separated from the second upper coolant channels by dielectric material disposed therebetween.
- 15 . The integrated cooling assembly of claim 14 , wherein a thickness of the dielectric material between the first upper coolant channels and the second upper coolant channels is 1 μm-10 μm.
- 16 . The integrated cooling assembly of claim 1 , wherein the third semiconductor device is an electronic integrated circuit (EIC).
Description
CROSS REFERENCE TO RELATED APPLICATION
This application claims the benefit of U.S. Provisional Patent Application No. 63/768,379, filed Mar. 7, 2025, which is hereby incorporated by reference herein in its entirety.
FIELD
The present disclosure relates to advanced packaging for microelectronic devices, and in particular, to cooling systems for device packages and methods of manufacturing the same.
BACKGROUND
Energy consumption poses a critical challenge for the future of large-scale computing, as the world's computing energy requirements are rising at a rate that most would consider unsustainable. Some models predict that the information and communications technology (ICT) ecosystem could exceed 20% of global electricity use by 2030, with direct electrical consumption by large-scale computing centers accounting for more than one-third of that energy usage. A significant portion of the energy used by such large-scale computing centers is devoted to cooling, since even small increases in operating temperature can negatively impact the performance of microprocessors, memory devices, and other electronic components. While some of this energy is expended to operate the cooling systems that directly cool the chips (e.g., heat spreaders, heat pipes, etc.), the energy costs of indirect cooling can also be staggering. Indirect cooling energy costs include, for example, cooling or air conditioning of data center buildings. Data center buildings can house thousands to tens of thousands or more high-performance chips in server racks, and each of those chips is a heat source. An uncontrolled ambient temperature in a data center will adversely affect the performance of the individual chips, and of the data center system as a whole.
Thermal dissipation in high-power-density chips (semiconductor devices/dies) is also a critical challenge, as improvements in chip performance, e.g., through increased gate or transistor density at advanced process nodes, the evolution of multi-core microprocessors, etc., have resulted in increased power density and a corresponding increase in thermal flux, contributing to elevated chip temperatures. Higher transistor density also increases the length of metal wiring on the chips, which generates additional thermal flux through Joule heating of these wires at higher currents. These elevated temperatures are undesirable, as they can degrade a chip's operating performance, efficiency, reliability, and remaining lifetime. Cooling systems used to maintain a chip at a desired operating temperature typically remove heat using one or more heat dissipation devices, e.g., thermal spreaders, heat pipes, cold plates, liquid-cooled heat pipe systems, thermoelectric coolers, heat sinks, etc. One or more thermal interface materials (TIMs), such as, for example, thermal paste, thermal adhesive, or thermal gap filler, may be used to facilitate heat transfer between the surfaces of a chip and the heat dissipation device(s). A thermal interface material is any material inserted between two components to enhance the thermal coupling between them. Unfortunately, the combined thermal resistance of (i) the interfacial boundary regions between one or more TIMs and the chip and/or the heat dissipation device(s) and (ii) the thermal interface material itself can inhibit heat transfer from the chip to the heat dissipation devices, undesirably reducing the cooling efficiency of the cooling system.
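The effect of TIM and interface resistances on chip temperature can be sketched with a simple series thermal-resistance model, in which junction temperature rises above ambient by the dissipated power times the sum of the resistances along the heat path. The resistance values and power level below are illustrative assumptions, not figures from this disclosure:

```python
# Hedged sketch: series thermal-resistance model of a chip-to-cooler stack.
# All numeric values are assumed for illustration only.

def junction_temperature(power_w, ambient_c, resistances_c_per_w):
    """Junction temperature for heat flowing through thermal
    resistances in series: T_j = T_ambient + P * sum(R_i)."""
    return ambient_c + power_w * sum(resistances_c_per_w)

# Example stack: die-to-TIM interface, TIM bulk, TIM-to-cold-plate
# interface, and cold-plate-to-coolant resistance (assumed values, in °C/W).
stack = [0.02, 0.05, 0.02, 0.08]
tj = junction_temperature(power_w=300, ambient_c=25, resistances_c_per_w=stack)
print(f"Junction temperature with TIM stack: {tj:.1f} C")

# Eliminating the TIM and its boundary regions (e.g., by attaching the cold
# plate directly to the die backside) removes those series terms:
tj_direct = junction_temperature(300, 25, [0.02, 0.08])
print(f"Junction temperature without TIM terms: {tj_direct:.1f} C")
```

In this toy model, removing the TIM bulk and one interfacial term drops the junction temperature from about 76 °C to about 55 °C at 300 W, which illustrates why the directly attached cold plate described in this disclosure targets the TIM-related portion of the thermal budget.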
Generally speaking, there are multiple components between the heat-dissipating sources (i.e., the active circuitry) in the chips and the heat dissipation devices, each of which contributes cumulatively to the system thermal resistance along the heat transfer paths and raises chip junction temperatures above ambient. Such cooling systems can suffer from reduced cooling efficiency due to the design and manufacture of system components. Additionally, communication between electronic components on a server rack, and between server racks themselves, is generally provided by copper wires. Unfortunately, these copper wires suffer from problems such as heat dissipation (due to their intrinsic resistance to current), communication signal attenuation, and bandwidth loss. As data demands grow, copper-based Serializer/Deserializer (SerDes) circuitry, which connects switching application-specific integrated circuits (ASICs) to pluggable transceivers, may be used to enable faster transmission, but faster ASICs require improved copper connections through more channels or higher speeds. However, as link density and bandwidth increase, a significant portion of system power and cost is consumed by driving signals from the ASICs to optical interconnects at the edge of the rack. The size limitations of ASIC ball grid array (BGA) packages (e.g., due to warpage concerns) require higher SerDes speeds to support