AMD EPYC CPUs, AMD Radeon Instinct GPUs to power Cray Supercomputer

https://forums.guru3d.com/data/avatars/m/248/248994.jpg
Interesting, first with Intel, now with AMD. It's a good thing the US government is doing its part to make sure some competition remains, at least.
https://forums.guru3d.com/data/avatars/m/259/259804.jpg
Aye, but can you/it play Crysis 3 in 4K......?? πŸ˜‰
https://forums.guru3d.com/data/avatars/m/277/277212.jpg
No CUDA support. I'll pass.
https://forums.guru3d.com/data/avatars/m/270/270008.jpg
Gomez Addams:

No CUDA support. I'll pass.
LOL, I would have thought the $600M needed to build your own would be a bigger issue than the lack of CUDA support. Gamers commenting on server/supercomputer news is always funny to me. There's got to be the one guy making a Crysis joke (it hasn't been funny for a decade), and then many others comment as if they're somehow going to own a supercomputer themselves. πŸ˜‰ Anyhow, 1.5 exaflops is an insane amount of compute power; I realize it's like saying someone is worth $100B, it just doesn't register for most of us. Having supercomputers with these kinds of capabilities will certainly push science further along.
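To put 1.5 exaflops in perspective, here's a rough back-of-envelope sketch. The ~10 TFLOPS per consumer GPU figure is my own ballpark assumption, not from the article:

```python
# Back-of-envelope: how many ~10 TFLOPS consumer GPUs (assumed
# figure) would it take to match Frontier's 1.5 exaflops?
frontier_flops = 1.5e18       # 1.5 exaflops, from the article
consumer_gpu_flops = 10e12    # ~10 TFLOPS per card, rough assumption

gpus_needed = frontier_flops / consumer_gpu_flops
print(f"{gpus_needed:,.0f} consumer GPUs")  # prints "150,000 consumer GPUs"
```

So on the order of 150,000 gaming cards, ignoring interconnect, memory, and precision differences entirely.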
https://forums.guru3d.com/data/avatars/m/277/277212.jpg
My post was not made in the context of gaming or being a gamer.
https://forums.guru3d.com/data/avatars/m/268/268248.jpg
Gomez Addams:

My post was not made in the context of gaming or being a gamer.
do you care to enlighten us about your reasoning then?!
https://forums.guru3d.com/data/avatars/m/106/106401.jpg
Look how the tables have turned: the two most powerful upcoming supercomputers are based on AMD (CPU/GPU) and Intel (CPU/GPU). Don't you wonder why there are no Nvidia GPUs? IMO, he's going to have to sell his leather jacket soon. https://i.postimg.cc/Yq4dwc5b/frontier.jpg *Image from Anandtech.
https://forums.guru3d.com/data/avatars/m/248/248994.jpg
HWgeek:

Don't you wonder why there are no Nvidia GPUs?
Perhaps it would be interesting to see an all-Nvidia one as well. But can Tegra handle running multiple powerful GPUs?
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
As much as I'm happy for AMD's success and repeatedly remind people that AMD's GPUs are actually very good for compute workloads, it seems people here are getting a little bit carried away with their success in this situation. Supercomputers are a game of leapfrog. There's always one country, university, or corporation that releases the next best thing, using some company's latest-gen hardware, and then a few months later someone else does the same thing on a competing platform. Trust me, soon enough there will be a server with Nvidia hardware ranking #1. And rest assured, they won't retain that position.
https://forums.guru3d.com/data/avatars/m/202/202673.jpg
Ah, but a short decade ago... [URL='https://www.nvidia.com/object/pr_oakridge_093009.html']the good ole' days[/URL]. Mind you, Nvidia's Fermi supercomputer at ORNL already had AMD CPUs in it... :)
https://forums.guru3d.com/data/avatars/m/197/197287.jpg
HWgeek:

Look how the tables have turned: the two most powerful upcoming supercomputers are based on AMD (CPU/GPU) and Intel (CPU/GPU). Don't you wonder why there are no Nvidia GPUs? IMO, he's going to have to sell his leather jacket soon. https://i.postimg.cc/Yq4dwc5b/frontier.jpg *Image from Anandtech.
You're comparing unreleased supercomputers that aren't due out for at least another two years to Nvidia's latest and greatest (by Nvidia, I mean a supercomputer that uses Nvidia hardware) from 2018, which is currently the fastest supercomputer out there, with no regard for the fact that the two others aren't released yet, could face competition in 2021 from other, unannounced supercomputers, could still utilize Nvidia, etc. Sorry, but your point seems to be a lack of a point. Come back in 2021/22 when those supercomputers are up and running, and then see whether nobody is using or planning to use Nvidia beyond the Summit supercomputer. Until then, it's a premature "victory" based on a lack of evidence rather than actual historical evidence.

Just for clarification, I am happy that supercomputers are starting to use AMD products, even both CPU and GPU in the same system, very happy. But this notion that because the only two KNOWN upcoming supercomputers don't have Nvidia in them, Nvidia somehow has a problem, financially, process-wise, performance-wise, or otherwise, is just nonsense.

We know that EuroHPC JU is planning a 2022/23 exascale supercomputer, and that they have at least $1.12 billion to do so (whereas the one in this article is at $600 million), and we don't know what it'll be using. We know there's at least one more exascale supercomputer planned in the USA for the same 2021/22 slot, El Capitan, and we don't know what it'll be using either. There are articles out there claiming there will be 10 exascale supercomputers by 2023, and given what we know, that doesn't sound terribly unlikely.

So again, this whole "well, 2 of the 10 don't use Nvidia, therefore Nvidia has a problem!" is pure and utter nonsense. If none of the 10 had Nvidia, then you could say there's an issue; then you could claim Nvidia lost out big time.
https://forums.guru3d.com/data/avatars/m/106/106401.jpg
You didn't get my point. For a long time, Nvidia's GPUs were the obvious choice for these supercomputers, and now you see AMD and Intel starting to take their place. In the self-driving vehicle sector, everyone saw that there is an option to develop better ASICs for their own needs with "little" money and 2~3 years, so it seems Nvidia is going to have a harder time in the near future.
https://forums.guru3d.com/data/avatars/m/273/273678.jpg
The Cray supercomputer is built for a specific type of task that AMD GPUs are better suited for. It's only the most powerful in terms of that task.
https://forums.guru3d.com/data/avatars/m/165/165326.jpg
Very interesting build, nice to see AMD chosen for such an "Epyc" (pun intended) supercomputer πŸ˜‰
https://forums.guru3d.com/data/avatars/m/106/106401.jpg
[youtube=3IPlhjv8NiE] After watching this about Milan and the possibility that it will include 15 chiplets and maybe SMT4, I think (in my imagination) I have an idea of where AMD is going with its future HPC EPYC design on 7nm+ (maybe a custom design?):

1) Each CPU chiplet will be 6C/24T to save space/power while giving similar or better performance than 8C/16T.
2) Add 4 custom Instinct GPU chiplets.
3) Add 2 custom AI accelerator (ASIC) chiplets.
4) 1 I/O chiplet with an HBM memory stack.

So the final EPYC Milan(?) could be an HPC beast with:

48C/192T Zen CPU cores.
4 custom Instinct GPUs.
2 AI accelerator ASICs.
1 I/O chiplet with HBM 3D stacking.

https://i.postimg.cc/NFRJKbYM/Possible-Future-AMD-EPYC.png

EDIT: I see there was already a great article on such an HPC APU design: https://www.overclock.net/forum/225-...lops-200w.html So after reading some of it, I changed my illustration: https://i.postimg.cc/Qtt93ZWk/Possible-Future-AMD-EPYC-New.png

And 8 Milans could be installed in Cray's Shasta 1U with direct liquid cooling: https://i.imgur.com/O1PvgiI.jpg https://www.anandtech.com/show/13616/managing-16-rome-cpus-in-1u-crays-shasta-direct-liquid-cooling

Do you think such a chiplet design could be beneficial for HPC clients?
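The arithmetic behind that speculative layout checks out, for what it's worth. A quick sketch (all counts are the post's hypothetical numbers, not anything AMD has announced):

```python
# Sanity-checking the hypothetical Milan chiplet layout speculated above.
cpu_chiplets = 8        # implied by 48 cores total at 6 cores per chiplet
cores_per_chiplet = 6   # 6C per chiplet, as proposed
threads_per_core = 4    # SMT4, purely speculative

gpu_chiplets = 4        # custom Instinct GPUs
ai_chiplets = 2         # AI accelerator ASICs
io_chiplets = 1         # I/O die with HBM stack

total_cores = cpu_chiplets * cores_per_chiplet      # 48
total_threads = total_cores * threads_per_core      # 192
total_chiplets = cpu_chiplets + gpu_chiplets + ai_chiplets + io_chiplets  # 15

print(total_cores, total_threads, total_chiplets)   # prints "48 192 15"
```

So eight 6-core SMT4 chiplets give exactly the 48C/192T figure, and the chiplet count lands on the 15 mentioned in the video.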
data/avatar/default/avatar16.webp
HWgeek:

You didn't get my point, for long time Nvidia's GPU's were the obvious choice for those Supercomputers and you see that AMD and Intel's started to get their place, on Self-driving vehicles sector all saw that there is an option to develop better Asics for their own needs with "little" money and 2~3 years, so it seems that NV gonna have harder life for near future.
No, bro, it doesn't work like that. According to the USA's exascale project strategy, the DOE has put in writing the requirement that the project MUST NOT rely on a single best-and-greatest piece of hardware; instead, it has to be built on several different architectures. That's why you're seeing AMD GPUs there, heavily subsidized by what seems to be a significant budget ramp-up for exascale, undoubtedly fueled by China's rising aspirations in the field and the whole brouhaha between the two countries. That's why all of them (Nvidia, Intel, and AMD) are getting a piece of the cake, regardless of who has the best hardware. Intel already screwed up once on delivery; that's why there's a need for redundancy and several contractors delivering different architectures.