AMD ROCm Solution Enables Native Execution of NVIDIA CUDA Binaries on Radeon GPUs

This is really, really cool! It could open up HUGE doors if it is pursued further. It will be interesting to see how AMD's MI300X compares with Nvidia's Hopper chips on the same code. It could also allow developers to utilize AMD's cards if it really works well. One thing that makes no sense to me is this statement: "Using the ZLUDA library, developers can effectively bypass CUDA." Developers still have to generate CUDA binaries (see the first sentence above), so CUDA is not being bypassed at all since the SDK is required for that. If anything, CUDA is being embraced even further, and I think that is a very good thing.
if you can't beat them, join them
The CUDA compiler basically compiles to an intermediate output format called PTX. This format is then converted to an assembly format called SASS. It is this SASS that the CUDA driver converts into the actual microcode that the GPU can execute natively. I assume that ZLUDA takes the SASS and/or the PTX code from the CUDA executable and passes it through a front end pretending to be the CUDA driver, which does the translation into the AMD native microcode that can be executed on AMD GPUs. The performance loss shown against some of the OpenCL benchmarks probably comes from those PTX/SASS instructions that don't have a matching instruction in AMD's microcode and thus have to be emulated with small subroutines that require more instructions to perform the same operation.

AMD had already embraced CUDA in a sense when they defined HIP, since HIP source code looks like CUDA source code with the cuda prefix replaced by the hip prefix. Their CUDA-to-HIP converter can simply translate from CUDA to HIP for those functions that HIP supports, making the conversion process simple and even allowing AMD to compile HIP code for NVIDIA GPUs (by simply doing the inverse translation and feeding the result to NVIDIA's NVCC compiler).

Of course, not having to convert source code but doing this at the driver level makes life a lot easier for programmers: you can maintain a single code base (whereas with HIP you'd have to convince CUDA programmers to use HIP instead of CUDA, or maintain both CUDA and HIP code bases) and let the low-level driver take care of the heavy lifting. There will of course always be parts of CUDA that will be hard to convert by a tool like ZLUDA, especially the hardware-level control stuff that is specific to NVIDIA GPUs and has no equivalent on AMD's GPUs, but for most applications that use CUDA simply to implement high-performance parallel algorithms without this deep and specific hardware control, this should work without issues (albeit with some performance differences).
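To make that concrete, here is a minimal sketch of the kind of CUDA program being discussed. The saxpy kernel and the buffer sizes are purely illustrative (not from the article or ZLUDA itself): nvcc turns the kernel into PTX and then SASS, while the runtime calls around it (cudaMalloc, cudaMemset, the launch) are exactly the sort of thing a driver-level shim would have to intercept and retarget to an AMD GPU.

```
// Illustrative CUDA sketch: a hypothetical saxpy kernel plus the host-side
// runtime calls. nvcc compiles the kernel to PTX/SASS; a translation layer
// like ZLUDA would need to intercept the runtime API calls and map the
// compiled GPU code onto AMD's own ISA.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one element per thread
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x = nullptr, *y = nullptr;
    cudaMalloc((void **)&x, n * sizeof(float));      // device allocations
    cudaMalloc((void **)&y, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float));             // dummy input data
    cudaMemset(y, 0, n * sizeof(float));
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // this launch is what ends
                                                     // up as PTX/SASS on the GPU
    cudaDeviceSynchronize();
    printf("saxpy finished: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```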
It seems that AMD has actually abandoned the project, which means that development of this tool might not continue from this point onward.
Crazy Joe:

It seems that AMD has actually abandoned the project, which means that development of this tool might not continue from this point onward.
They have stopped further development but made it all open source. That means development can continue if someone wants to. I suspect some customer with deep pockets and an interest in buying AMD hardware instead of Nvidia will pick up the ball and run with it. The thing is, Nvidia's product list of computation-oriented GPUs is much more extensive than AMD's. (ETA: Nvidia instead of AMD.)
Gomez Addams:

I suspect some customer with deep pockets and an interest in buying AMD hardware instead of Nvidia will pick up the ball and run with it.
I wouldn't be surprised if there is more than one company that wants to avoid dealing with Nvidia.
Gomez Addams:

The thing is, AMD's product list of computation-oriented GPUs is much more extensive than AMD's.
Internal competition?
This has potential. As mentioned before, some people will be willing to develop it further and avoid Nvidia. Those doing heavy computational biology need a cheaper platform to work on their projects. Some universities have their HPC clusters running R with RStudio, leveraging CUDA for heavyweight tasks. Nothing stops some students from working on an alternative way to create a potential game changer for bioinformatics or the like.
Funny thing is, the best open source solutions often come from a "random" guy with no real skin in the game who just decided to try it for the hell of it ... But man, Nvidia locking down CUDA really paid dividends for them ... PS: I am against locking such things down, but companies are companies, I guess ... When they are ahead they lock things down; when they are trying to catch up they make things open source to incentivise people to use their tools.
Venix:

Funny thing is, the best open source solutions often come from a "random" guy with no real skin in the game who just decided to try it for the hell of it ... But man, Nvidia locking down CUDA really paid dividends for them ... PS: I am against locking such things down, but companies are companies, I guess ... When they are ahead they lock things down; when they are trying to catch up they make things open source to incentivise people to use their tools.
Yes, it definitely did. However, the recent decision in the lawsuit between Oracle and Google indicates that one cannot copyright an API. That means AMD could make their own identical version of the CUDA API if they wanted to. Given the shock waves AI has sent through the industry, it seems to me that AMD should pursue that. They could be a lot more compelling if they had a code-compatible solution with comparable performance at a lower price. A binary-compatible solution would be even better. I am somewhat surprised that companies like Amazon or Tesla don't prod AMD to get into that more.
Gomez Addams:

Yes, it definitely did. However, the recent decision in the lawsuit between Oracle and Google indicates that one cannot copyright an API. That means AMD could make their own identical version of the CUDA API if they wanted to. Given the shock waves AI has sent through the industry, it seems to me that AMD should pursue that. They could be a lot more compelling if they had a code-compatible solution with comparable performance at a lower price. A binary-compatible solution would be even better. I am somewhat surprised that companies like Amazon or Tesla don't prod AMD to get into that more.
This is what AMD did with HIP (which is part of ROCm). HIP uses the same names for functions that CUDA does, except it replaces the cuda prefix in those functions with hip. This means they can easily convert CUDA-based source code to HIP-based source code through an automated conversion tool. Of course there are parts of CUDA that are device specific and have no equivalent HIP function (and device-level code, which NVIDIA prefixes with cu instead of cuda, won't convert either), so there is still a need for hand conversion of some CUDA source code. But compared to recoding things in OpenCL or even oneAPI, this makes converting code to AMD GPUs at the source code level almost trivial. See the sketch below for what the rename looks like in practice.
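As a rough illustration of how mechanical that rename is, here is the HIP version of the hypothetical saxpy example from earlier in the thread. The kernel and sizes are still illustrative, not anything from the article; the point is that only the header and the cuda-to-hip prefixes on the runtime calls change, which is what the hipify tooling automates.

```
// Illustrative HIP counterpart of the earlier CUDA sketch. The kernel body is
// identical; only the runtime API prefixes change from cuda* to hip*.
#include <hip/hip_runtime.h>
#include <cstdio>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // identical kernel code
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x = nullptr, *y = nullptr;
    hipMalloc((void **)&x, n * sizeof(float));       // was cudaMalloc
    hipMalloc((void **)&y, n * sizeof(float));
    hipMemset(x, 0, n * sizeof(float));              // was cudaMemset
    hipMemset(y, 0, n * sizeof(float));
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // same launch syntax
    hipDeviceSynchronize();                          // was cudaDeviceSynchronize
    printf("saxpy finished: %s\n", hipGetErrorString(hipGetLastError()));
    hipFree(x);
    hipFree(y);
    return 0;
}
```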
Venix:

Funny thing is, the best open source solutions often come from a "random" guy with no real skin in the game who just decided to try it for the hell of it ... But man, Nvidia locking down CUDA really paid dividends for them ... PS: I am against locking such things down, but companies are companies, I guess ... When they are ahead they lock things down; when they are trying to catch up they make things open source to incentivise people to use their tools.
All public companies have to make money (it's what the shareholders demand), and they do that by making things and selling them. Making CUDA cost Nvidia a lot of money, so it's not wrong that they want to sell it and make money off it. With a few exceptions, if you open source something you still need an angle to make money off it. E.g. Google gives a fair amount away, but it's not really free: if you use their stuff, they get to sell your data. Something like FreeSync required the monitor manufacturers to spend money developing it as opposed to using an Nvidia chip, but they could make money selling FreeSync monitors. Hence for something like this you have to ask who can make money off it, and the answer is AMD, by selling cards to run it, but they aren't willing to spend money developing it, which is why they open sourced it in the first place. Hence it'll die as a serious product. A few guys fiddling with it for fun won't save it.