AMD ROCm 6.0 Released - Adds Support for Radeon PRO W7800 & RX 7900 GRE GPUs

Building on our previously announced support for the AMD Radeon RX 7900 XT, XTX and Radeon PRO W7900 GPUs with AMD ROCm 5.7 and PyTorch, we are now expanding our client-based ML development offering on both the hardware and software side with AMD ROCm 6.0. AI researchers and ML engineers can now also develop on Radeon PRO W7800 and Radeon RX 7900 GRE GPUs. With support for such a broad product portfolio, AMD is helping the AI community access desktop graphics cards at even more price points and performance levels.

With ROCm 6.0, AI researchers and ML engineers can now develop on AMD Radeon PRO W7800 and Radeon RX 7900 GRE desktop GPUs, in addition to previously added support for other GPUs based on the advanced AMD RDNA 3 architecture — AMD Radeon PRO W7900, Radeon RX 7900 XTX and Radeon RX 7900 XT GPUs.

With support for a broad portfolio of hardware offerings, AMD is empowering the AI community to access powerful GPUs at a variety of price points and performance levels to accelerate their AI workloads.
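For readers setting up a local environment, the sketch below shows one way to verify that a ROCm-enabled PyTorch build can see a Radeon GPU. The install index URL in the comment is an assumption based on the usual PyTorch-on-ROCm workflow, not something stated in this announcement.

```python
# Minimal sketch: checking that a ROCm build of PyTorch detects a Radeon GPU.
# Assumes PyTorch was installed from a ROCm wheel index (for example:
#   pip install torch --index-url https://download.pytorch.org/whl/rocm6.0
# the exact URL is an assumption, not part of this announcement).
import torch

if torch.cuda.is_available():  # ROCm builds expose HIP devices through the CUDA API surface
    print(f"Detected {torch.cuda.device_count()} GPU(s): {torch.cuda.get_device_name(0)}")
    x = torch.randn(1024, 1024, device="cuda")
    y = x @ x.T  # small matmul to confirm kernels actually run on the GPU
    print(y.sum().item())
else:
    print("No ROCm-capable GPU visible to PyTorch")
```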

AMD also announced that ROCm 6.0 now supports ONNX Runtime, allowing users to perform inference on a wider range of source data on local AMD hardware, and adds INT8 via MIGraphX, AMD's own graph inference engine, to the available data types (alongside FP32 and FP16). AMD will continue its ongoing effort to make AI more accessible to developers and researchers, with expanded hardware and capability support to be announced over time.
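To illustrate what that looks like in practice, here is a minimal sketch of running an ONNX model through ONNX Runtime, preferring AMD's MIGraphX execution provider when it is available. The model file name and input handling are placeholder assumptions, and the MIGraphX/ROCm providers are only present in onnxruntime builds compiled with those backends enabled.

```python
# Minimal sketch: ONNX Runtime inference on a Radeon GPU via the MIGraphX provider.
# "model.onnx" is a placeholder; provider availability depends on the onnxruntime build.
import numpy as np
import onnxruntime as ort

providers = [
    "MIGraphXExecutionProvider",  # AMD's graph inference engine, if available
    "ROCMExecutionProvider",
    "CPUExecutionProvider",       # fallback so the script still runs without a GPU build
]
session = ort.InferenceSession("model.onnx", providers=providers)

# Build a dummy input matching the model's first input; dynamic dims are set to 1.
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.random.rand(*shape).astype(np.float32)

outputs = session.run(None, {inp.name: dummy})
print("Active providers:", session.get_providers())
print("Output shapes:", [o.shape for o in outputs])
```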

To learn more about AMD ROCm 6.0, check out AMD’s blog here.
