SK Hynix and Nvidia Collaborate on Innovative GPU Redesign Integrating HBM4 Memory Directly on Processors

SK Hynix and Nvidia are working together to redesign GPU architecture by integrating HBM4 memory directly with processing cores. The effort could change how logic and memory devices are interconnected, and with it the semiconductor industry's manufacturing flows.

SK Hynix's recruitment of design engineers with logic-semiconductor expertise, covering CPUs and GPUs, signals its commitment to stacking HBM4 memory directly onto processors. The approach departs from traditional interconnect and packaging practices and could change the foundry industry's standard procedures at a fundamental level.

Today's HBM stacks combine multiple memory devices, typically eight to sixteen, with a base logic die that serves as a central hub. The stacks sit on an interposer and connect to CPUs or GPUs over a 1024-bit interface. SK Hynix's plan is to place HBM4 stacks directly on processor dies, eliminating the interposer entirely. The concept parallels AMD's 3D V-Cache in its direct die-on-die integration, though HBM4 is expected to offer far higher capacity than stacked SRAM while being slower and cheaper. SK Hynix is collaborating with fabless companies, including Nvidia, on design methodologies for this kind of HBM4 integration. Using TSMC's wafer bonding technology, SK Hynix aims to unite its HBM4 memory with logic chips in a single chip design, which requires intricate coordination of memory and logic semiconductors within one die.
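
For a rough sense of the capacity gap between the two approaches, the sketch below compares one HBM stack with AMD's SRAM-based 3D V-Cache. The per-die figures (a 12-high stack of 24 Gb DRAM dies, 64 MB of V-Cache SRAM per stacked die) are illustrative values drawn from current-generation parts, not HBM4 specifications.

```python
# Rough capacity comparison: one HBM stack vs. one AMD 3D V-Cache die.
# Figures are illustrative, current-generation values, not HBM4 specifications.

HBM_DIES_PER_STACK = 12      # HBM stacks typically use 8-16 DRAM dies
HBM_GBIT_PER_DIE = 24        # HBM3E-class density per DRAM die
VCACHE_MB_PER_DIE = 64       # SRAM added by one V-Cache die on a Zen CCD

hbm_stack_gb = HBM_DIES_PER_STACK * HBM_GBIT_PER_DIE / 8   # gigabits -> gigabytes
vcache_gb = VCACHE_MB_PER_DIE / 1024                        # megabytes -> gigabytes

print(f"One HBM stack:   {hbm_stack_gb:.0f} GB")
print(f"One V-Cache die: {vcache_gb:.3f} GB")
print(f"Capacity ratio:  ~{hbm_stack_gb / vcache_gb:.0f}x in favor of HBM")
```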

The proposed HBM4 memory interface is 2048 bits wide, which makes interposer design more complex and more expensive. A further challenge in connecting memory and logic directly is thermal management: logic processors such as Nvidia's H100 and HBM memory both draw significant power and produce substantial heat, which may require advanced cooling solutions such as liquid or immersion cooling.
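
To see why doubling the interface width matters, a quick back-of-the-envelope bandwidth calculation helps. The per-pin data rate below is an assumption for illustration (6.4 Gb/s is typical of HBM3; final HBM4 speeds are not public), and the same rate is applied to both widths to isolate the effect of the wider bus.

```python
# Per-stack bandwidth = bus width (bits) * per-pin data rate (Gb/s) / 8 bits per byte.
# Pin rates are illustrative assumptions, not confirmed HBM4 specifications.

def stack_bandwidth_gbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Return theoretical per-stack bandwidth in GB/s."""
    return bus_width_bits * pin_rate_gbps / 8

hbm3 = stack_bandwidth_gbps(1024, 6.4)   # current 1024-bit interface
hbm4 = stack_bandwidth_gbps(2048, 6.4)   # proposed 2048-bit interface, same assumed pin rate

print(f"1024-bit stack: ~{hbm3:.0f} GB/s")
print(f"2048-bit stack: ~{hbm4:.0f} GB/s")
# Doubling the width doubles bandwidth at a given pin rate, but it also doubles the
# number of signal connections per stack, which is what drives up interposer routing cost.
```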

Professor Kim Jung-ho of KAIST's Department of Electrical and Electronics Engineering notes that overcoming these thermal issues could take several product generations, and doing so is crucial to letting HBM and GPUs operate efficiently without interposers.

Direct integration affects not only chip design but also manufacturing. Producing DRAM and logic with the same process technology in a single facility could improve performance, but it would also raise memory production costs, which makes the approach impractical today. The trend nonetheless points toward closer integration of memory and logic semiconductors. One industry expert expects the semiconductor industry to undergo a significant transformation within ten years, potentially blurring the clear line between memory and logic semiconductors.

Source: joongang
