AMD irritated by how NVIDIA used AMD MI300X benchmarks in its charts - releases its own

AMD: how dare Nvidia use cherry picked benchmarks against our own cherry picked benchmarks LOL
Krizby:

AMD: how dare Nvidia use cherry picked benchmarks against our own cherry picked benchmarks LOL
:):p - The thief shouts β€œCatch the thief!” louder than everyone else...
Spider-Men pointing at each other meme. Now be good technical marketing teams and go back to your desks at AMD and Nvidia and try to do something useful. Break is over and you need to get back to class.
We need a cage match between MI300X and H100, to settle the score.
Horus-Anhur:

We need a cage match between MI300X and H100, to settle the score.
Winner is determined by cryptocurrency mining speed.
aufkrawall2:

Winner is determined by cryptocurrency mining speed.
The prize is 100000 bitchcoins. πŸ˜€
AMD and Nvidia race for the bigger dick πŸ˜€
The next Wrestlemania GPU main event!!!
I don't quite understand why Nvidia optimises its hardware for FP8 if the industry prefers FP16, as the article says. If they seemingly offer the wrong product for the job, how has that propelled Nvidia to its vast success in the AI market?
Kaarme:

I don't quite understand why Nvidia optimises its hardware for FP8 if the industry prefers FP16, as the article says. If they seemingly offer the wrong product for the job, how has that propelled Nvidia to its vast success in the AI market?
It might be cheaper to invent a new standard in order to beat the competition than to deliver performance that would actually beat the competition in the already established standard. Intel's AVX512 and AVX10 are another example of changing a standard to win rather than competing on AVX512 performance. A Vega 64 is faster at FP16 half-precision calculations than a 3070, but because AMD is not big enough to force FP16 calculations as a new standard, it was never used for anything in gaming.
AI d*ck measuring contest. Meanwhile, gamers get screwed. Oh well, such is life.
Horus-Anhur:

We need a cage match between MI300X and H100, to settle the score.
aufkrawall2:

Winner is determined by cryptocurrency mining speed.
Horus-Anhur:

The prize is 100000 bitchcoins. πŸ˜€
moab600:

The next Wrestlemania GPU main event!!!
Hahahaha, I love you guys!
k3vst3r:

After reading this, isn't Nvidia's method inherently flawed? It's not even 90% accurate for FP32-to-INT8 conversion, so why would anyone want to use an Nvidia H100 card for precise scientific work?
INT8 offers a big speedup, so if you can sacrifice some precision it's worth it, and for stuff like image processing the tradeoff often pays off. The PTQ/QAT optimizations just let you get away with less precision, opening up more applicable workloads; INT8 isn't really meant to replace everything. The H100 is still very good at FP16/FP32/FP64, after all.
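To make the PTQ tradeoff above concrete, here is a minimal sketch of symmetric post-training INT8 quantization. The function names, the per-tensor scale choice, and the NumPy usage are illustrative assumptions for this example, not any vendor's actual implementation:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization: pick a scale so the
    largest magnitude in the tensor maps to 127."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Map the int8 codes back to approximate float32 values.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
x = rng.standard_normal(1024).astype(np.float32)

q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)

# Round-trip error stays small for well-behaved tensors, which is
# why INT8 inference often loses little accuracy in practice.
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"relative round-trip error: {rel_err:.4f}")
```

For a bell-shaped weight distribution like this, the relative error lands well under a percent or two, which is the intuition behind "you can get away with less precision" for many inference workloads.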
k3vst3r:

After reading this, isn't Nvidia's method inherently flawed? It's not even 90% accurate for FP32-to-INT8 conversion, so why would anyone want to use an Nvidia H100 card for precise scientific work?
Depends on the workload. There are tasks where you need a bigger sample size and accuracy is not that important. How much would you even notice if shadows in a game were randomly +-5 pixels off?
September 2022: Intel, NVIDIA and Arm Team Up on an FP8 Format for AI (servethehome.com)
NVIDIA is already using the FP8 in its H100 Transformer Engine and that helped power its latest MLPerf Inference v2.1 results. Intel for its part says it plans to support FP8 in not just Habana Gaudi products, as we covered in Intel Habana Gaudi2 Launched for Lower-Cost AI Training. It also says it will support the format in future CPUs and GPUs. Arm says it expects to add FP8 support to the Armv9 ISA as part of Armv9.5-A in 2023.
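For context on what that shared FP8 format looks like, the E4M3 variant packs 1 sign bit, 4 exponent bits (bias 7) and 3 mantissa bits into a byte, with no infinities and the all-ones pattern reserved for NaN. A small decoder sketch (the helper name is made up for illustration):

```python
def fp8_e4m3_to_float(byte):
    """Decode one FP8 E4M3 byte: 1 sign, 4 exponent (bias 7),
    3 mantissa bits. No infinities; S.1111.111 encodes NaN."""
    s = (byte >> 7) & 1
    e = (byte >> 3) & 0xF
    m = byte & 0x7
    sign = -1.0 if s else 1.0
    if e == 0xF and m == 0x7:         # all-ones mantissa+exponent: NaN
        return float("nan")
    if e == 0:                        # subnormal: no implicit leading 1
        return sign * (m / 8.0) * 2.0 ** -6
    return sign * (1.0 + m / 8.0) * 2.0 ** (e - 7)

# Largest finite E4M3 value: 1.75 * 2^8 = 448
print(fp8_e4m3_to_float(0b0_1111_110))
print(fp8_e4m3_to_float(0b0_0111_000))  # exponent at bias, mantissa 0 -> 1.0
```

The tiny dynamic range (max 448) is why FP8 training relies on per-tensor scaling, as in the H100 Transformer Engine mentioned above.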
Horus-Anhur:

We need a cage match between MI300X and H100, to settle the score.
Your cage match may have just concluded with this week's MLPerf training/inferencing benchmark results. Not sure I'm surprised given AMD's marketing claims last year.
Size matters