AMD Announces 7nm Radeon Vega Instinct with 32 GB HBM2

AMD will bring 7nm Vega towards gaming as well, no timeframe is mentioned though, the 7nm
Is there something missing at the end of this line, @Hilbert Hagedoorn? Not sure. So this is going to be a very, very boring year GPU-wise. 🙁
Sad. AMD also wants a slice of the AI cake, and we lose, as there is no competition in the market. It's basically dead, with two-year-old architectures being sold for higher-than-MSRP prices...
HardwareCaps:

Sad. AMD also wants a slice of the AI cake, and we lose, as there is no competition in the market. It's basically dead, with two-year-old architectures being sold for higher-than-MSRP prices...
A bit. Mining allowed a delay in new releases, but it did not cause a slowdown in AMD's actual development. That slowness comes from some other source. The good thing is that even though the timeline slide is pretty generic, it shows "Next-Gen" before 2020 and after. Navi looks like 6-8 months away; that's our gaming refresh. And those AI optimizations and the fast interconnect may help in the future too.
Fox2232:

A bit. Mining allowed a delay in new releases, but it did not cause a slowdown in AMD's actual development. That slowness comes from some other source. The good thing is that even though the timeline slide is pretty generic, it shows "Next-Gen" before 2020 and after. Navi looks like 6-8 months away; that's our gaming refresh. And those AI optimizations and the fast interconnect may help in the future too.
there's no "slowness" Vega simply failed to deliver, HBM was too expensive and too late, performance was not enough, availability was ridiculous. if Vega was to threaten even a little the Nvidia market share, we would have seen Nvidia geforce launch NOW. exactly what happened in the CPU market, Ryzen proved to be really good and intel preemptively released Coffee lake at MSRP prices to try and stop the bleeding
HardwareCaps:

there's no "slowness" Vega simply failed to deliver, HBM was too expensive and too late, performance was not enough, availability was ridiculous. if Vega was to threaten even a little the Nvidia market share, we would have seen Nvidia geforce launch NOW. exactly what happened in the CPU market, Ryzen proved to be really good and intel preemptively released Coffee lake at MSRP prices to try and stop the bleeding
Vega was an unfinished, half-dead GPU, with something like 20% of its transistors just sitting there. AMD could just as well have taken Polaris and scaled it to the same transistor count. That would give 96 CUs instead of 64, and 14nm would take care of the achievable clock. It would be clocked lower to fit in the same TDP, but it would definitely deliver quite a bit more graphics and compute horsepower than Vega 64. A big reason the 1080 Ti runs circles around Vega 64 is that every part of the architecture which did not get a per-clock performance improvement since Fury X scaled up only via clock, and Nvidia's clock jump from the 980 Ti to the 1080 Ti was quite a bit better, plus the GPU was beefed up in its number of functional blocks.
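For what it's worth, the wider-but-slower trade-off described above can be sketched with back-of-envelope math. The clock figure and the cubic power-vs-frequency model below are illustrative assumptions, not measured data:

```python
# Back-of-envelope sketch of the "wider but slower" argument.
# Assumptions (not measurements): throughput ~ CUs * clock, and
# dynamic power ~ CUs * f^3 (voltage scaling roughly with frequency).

def iso_power_clock(base_cus, base_clock_mhz, new_cus):
    """Clock a wider part can run at in the same power budget,
    under the cubic power-vs-frequency assumption."""
    return base_clock_mhz * (base_cus / new_cus) ** (1 / 3)

base_cus, base_clock = 64, 1546    # Vega 64 boost clock (approx.)
wide_cus = 96                      # hypothetical scaled-up Polaris

wide_clock = iso_power_clock(base_cus, base_clock, wide_cus)
speedup = (wide_cus * wide_clock) / (base_cus * base_clock)

print(f"96-CU part at iso-power: ~{wide_clock:.0f} MHz")
print(f"Relative raw throughput: {speedup:.2f}x")
```

Under those assumptions, the 96-CU part drops to roughly 1350 MHz but still lands around 1.3x the raw throughput in the same power envelope.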
Fox2232:

Vega was an unfinished, half-dead GPU, with something like 20% of its transistors just sitting there. AMD could just as well have taken Polaris and scaled it to the same transistor count. That would give 96 CUs instead of 64, and 14nm would take care of the achievable clock. It would be clocked lower to fit in the same TDP, but it would definitely deliver quite a bit more graphics and compute horsepower than Vega 64. A big reason the 1080 Ti runs circles around Vega 64 is that every part of the architecture which did not get a per-clock performance improvement since Fury X scaled up only via clock, and Nvidia's clock jump from the 980 Ti to the 1080 Ti was quite a bit better, plus the GPU was beefed up in its number of functional blocks.
Not really. Polaris was optimized for a certain die size; you can't just add more transistors like that to scale performance. Also, AMD has huge issues with bandwidth, which is why they went for HBM even though it was still premature and expensive: at Vega's performance level you need more bandwidth than GDDR5/X can provide. Nvidia did a much smarter thing and solved that issue with smart hardware and software.
HardwareCaps:

Not really. Polaris was optimized for a certain die size; you can't just add more transistors like that to scale performance. Also, AMD has huge issues with bandwidth, which is why they went for HBM even though it was still premature and expensive: at Vega's performance level you need more bandwidth than GDDR5/X can provide. Nvidia did a much smarter thing and solved that issue with smart hardware and software.
Memory bandwidth was not exactly Fury X's issue. HBM enabled Fury X to compete through TDP: if Fury X had GDDR5, the core would have been clocked around 950 MHz, as there would be no spare power (heat) left in the 300 W budget. And even then, the best performance upgrade for Fury X is not overclocking, but modding the BIOS to raise the GPU power limit from 270 W to 330 W, or undervolting a bit.
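To make the TDP-budget point concrete, here is a minimal sketch. The figures are assumptions taken from the AnandTech estimates quoted later in this thread (HBM around 15-20 W, saving roughly 20-30 W versus GDDR5):

```python
# Sketch of the TDP-budget argument: with a fixed board power budget,
# every watt the memory doesn't spend is a watt the core can.
# All figures below are assumptions, not measurements.

board_budget_w = 300               # Fury X board power (approx.)
hbm_power_w = 17.5                 # midpoint of the 15-20 W estimate
gddr5_power_w = hbm_power_w + 25   # assumed ~25 W penalty for GDDR5

core_budget_hbm = board_budget_w - hbm_power_w
core_budget_gddr5 = board_budget_w - gddr5_power_w

print(f"Core budget with HBM:   {core_budget_hbm:.0f} W")
print(f"Core budget with GDDR5: {core_budget_gddr5:.0f} W")
print(f"Core headroom lost to GDDR5: "
      f"{100 * (1 - core_budget_gddr5 / core_budget_hbm):.1f}%")
```

Roughly 25 W less for the core, about 9% of its budget, which on a power-limited card translates fairly directly into lower sustained clocks.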
Miners gonna eat this one up. Current Vega is already the most profitable GPU to mine with.
Vega was also made on a fab process designed for mobile use. That may have been its biggest problem.
Fox2232:

Memory bandwidth was not exactly Fury X's issue. HBM enabled Fury X to compete through TDP: if Fury X had GDDR5, the core would have been clocked around 950 MHz, as there would be no spare power (heat) left in the 300 W budget. And even then, the best performance upgrade for Fury X is not overclocking, but modding the BIOS to raise the GPU power limit from 270 W to 330 W, or undervolting a bit.
The power savings are there but small; it is about bandwidth. Nvidia doesn't really need HBM, while AMD has to use it for high resolutions.
HardwareCaps:

Not really. Polaris was optimized for a certain die size; you can't just add more transistors like that to scale performance.
... Uh, you definitely can? That's literally already what Vega is - the graphics core is basically Polaris just scaled up: 64 CUs over 4 compute engines compared to 36 over 4. It just had a bunch of other stuff tacked on, for example HBCC, which takes up a bunch of die space but doesn't necessarily translate into direct performance gains in video games. In fact, in the AnandTech review they ask the AMD engineers about it:
At a high level, Vega 10’s compute core is configured almost exactly like Fiji. This means we’re looking at 64 CUs spread out over 4 shader engines. Or as AMD is now calling them, compute engines. Each compute engine in turn is further allocated a portion of Vega 10’s graphics resources, amounting to one geometry engine and rasterizer bundle at the front end, and 16 ROPs (or rather 4 actual ROP units with a 4 pix/clock throughput rate) at the back end. Not assigned to any compute engine, but closely aligned with the compute engines is the command processor frontend, which like Fiji before it, is a single command processor paired with 4 ACEs and another 2 Hardware Schedulers. Talking to AMD’s engineers about the matter, they haven’t taken any steps with Vega to change this. They have made it clear that 4 compute engines is not a fundamental limitation – they know how to build a design with more engines – however to do so would require additional work. In other words, the usual engineering trade-offs apply, with AMD’s engineers focusing on addressing things like HBCC and rasterization as opposed to doing the replumbing necessary for additional compute engines in Vega 10.

They basically did nothing on the shader/compute-engine side to help scaling - they simply added more of what they already had with Polaris and then chose to focus on other features, mostly ones that added die size but no additional gaming performance. Hence Fox2232's 20% dormant-transistor remark. AMD's main problem is that with both Fury and Vega they had to have one architecture fill the roles of both gaming and compute/workstation. Nvidia has been dividing its architectures with slight variances to optimize for each particular role. If AMD had the money to spin multiple SKUs for each role, their gaming architectures probably wouldn't use HBM, because it just eats into margins. Nvidia's memory compression isn't that much better than what's found in Vega/Polaris and could easily be made up for with higher clock speeds or just a bigger bus - AMD had no problem shoving a 512-bit bus on the 290X. As for the power:

https://www.extremetech.com/wp-content/uploads/2016/02/NV-HB.png
https://images.anandtech.com/doci/9390/HBM_10_Energy.png
By AMD’s own metrics, HBM delivers better than 3x the bandwidth per watt of GDDR5 thanks to the simpler bus and lower operating voltage of 1.3v. Given that AMD opted to spend some of their gains on increasing memory bandwidth as opposed to just power savings, the final power savings aren’t 3X, but by AMD’s estimates the amount of power they’re spending on HBM is around 15-20W, which has saved R9 Fury X around 20-30W of power relative to R9 290X. These are savings that AMD can simply keep, or as in the case of R9 Fury X, spend some of them on giving the card more power headroom for higher performance.
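As a sanity check on the quoted "better than 3x the bandwidth per watt" claim, here is a quick calculation with rough public figures; the memory power numbers are assumptions consistent with the estimates above:

```python
# Rough check of the "3x bandwidth per watt" claim.
# Assumed figures: Fury X HBM at 512 GB/s for ~15-20 W versus
# 290X GDDR5 at 320 GB/s for ~40-50 W (assumption, not measured).

hbm_bw, hbm_w = 512, 17.5        # GB/s, midpoint of 15-20 W
gddr5_bw, gddr5_w = 320, 45.0    # GB/s, assumed GDDR5 draw on 290X

hbm_eff = hbm_bw / hbm_w         # GB/s per watt
gddr5_eff = gddr5_bw / gddr5_w

print(f"HBM:   {hbm_eff:.1f} GB/s per W")
print(f"GDDR5: {gddr5_eff:.1f} GB/s per W")
print(f"Ratio: {hbm_eff / gddr5_eff:.1f}x")   # comfortably above 3x
```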
HardwareCaps:

The power savings are there but small; it is about bandwidth. Nvidia doesn't really need HBM, while AMD has to use it for high resolutions.
The difference between GDDR and HBM isn't about bandwidth, yet. AMD uses HBM for its much lower power consumption: because the GPU is pushed to the brim, they need the power HBM frees up. Vega is awesome if you undervolt it slightly, though.
Silva:

The difference between GDDR and HBM isn't about bandwidth, yet. AMD uses HBM for its much lower power consumption: because the GPU is pushed to the brim, they need the power HBM frees up. Vega is awesome if you undervolt it slightly, though.
The power savings are not big; HBM is far harder to get and more expensive, and in the end Vega still pulls more power than its Pascal counterpart. Vega has failed.
Denial:

... Uh, you definitely can? That's literally already what Vega is - the graphics core is basically Polaris just scaled up: 64 CUs over 4 compute engines compared to 36 over 4. It just had a bunch of other stuff tacked on, for example HBCC, which takes up a bunch of die space but doesn't necessarily translate into direct performance gains in video games. In fact, in the AnandTech review they ask the AMD engineers about it: They basically did nothing on the shader/compute-engine side to help scaling - they simply added more of what they already had with Polaris and then chose to focus on other features, mostly ones that added die size but no additional gaming performance. Hence Fox2232's 20% dormant-transistor remark. AMD's main problem is that with both Fury and Vega they had to have one architecture fill the roles of both gaming and compute/workstation. Nvidia has been dividing its architectures with slight variances to optimize for each particular role. If AMD had the money to spin multiple SKUs for each role, their gaming architectures probably wouldn't use HBM, because it just eats into margins. Nvidia's memory compression isn't that much better than what's found in Vega/Polaris and could easily be made up for with higher clock speeds or just a bigger bus - AMD had no problem shoving a 512-bit bus on the 290X. As for the power: https://www.extremetech.com/wp-content/uploads/2016/02/NV-HB.png https://images.anandtech.com/doci/9390/HBM_10_Energy.png
Bus width != bandwidth. Of course they are related, but clocks are the issue.
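For reference, the relationship both sides are invoking: peak bandwidth is bus width times per-pin data rate, so either knob can get you there. A small sketch with rough public figures:

```python
# Peak bandwidth = bus width (bits) * per-pin data rate (Gbps) / 8.
# Figures below are the commonly cited specs for each card.

def bandwidth_gbs(bus_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s."""
    return bus_bits * data_rate_gbps / 8

# 290X: wide 512-bit bus, slower 5 Gbps GDDR5
print(f"290X:     {bandwidth_gbs(512, 5.0):.0f} GB/s")
# GTX 1080: narrow 256-bit bus, fast 10 Gbps GDDR5X
print(f"GTX 1080: {bandwidth_gbs(256, 10.0):.0f} GB/s")
# Fury X: very wide 4096-bit HBM at only 1 Gbps per pin
print(f"Fury X:   {bandwidth_gbs(4096, 1.0):.0f} GB/s")
```

The 290X and GTX 1080 land on the same 320 GB/s from opposite directions, which is the sense in which neither knob alone "is" the bandwidth.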
HardwareCaps:

The power savings are not big; HBM is far harder to get and more expensive, and in the end Vega still pulls more power than its Pascal counterpart. Vega has failed.
Lol, yeah - 3x the bandwidth per watt is "not big"... On the Fury X it's a 10% power reduction and a 50%+ increase in bandwidth over the 290X. "Not big." I don't even know why I bother responding to your posts.
I pray that these GPUs will be immensely fast, much faster than Nvidia anticipates, forcing Nvidia to actually make an effort and use their big GV100 chip (rather than the midrange GV104 chip) in the 1180 model at a reasonable price, as it was with the GTX 580 and earlier gens.
Denial:

Lol, yeah - 3x the bandwidth per watt is "not big"... On the Fury X it's a 10% power reduction and a 50%+ increase in bandwidth over the 290X. "Not big." I don't even know why I bother responding to your posts.
You said bandwidth is not why AMD went for HBM.... Anyway, a 10% saving, considering that HBM costs almost double, has very slim availability, and went into mass production way after GDDR5X, makes no sense to me. AMD already lost the efficiency war; they should have gone for GDDR5X designs with more robust cooling and tuned-down clocks.
HardwareCaps:

Sad. AMD also wants a slice of the AI cake, and we lose, as there is no competition in the market. It's basically dead, with two-year-old architectures being sold for higher-than-MSRP prices...
HardwareCaps:

there's no "slowness" Vega simply failed to deliver, HBM was too expensive and too late, performance was not enough, availability was ridiculous. if Vega was to threaten even a little the Nvidia market share, we would have seen Nvidia geforce launch NOW. exactly what happened in the CPU market, Ryzen proved to be really good and intel preemptively released Coffee lake at MSRP prices to try and stop the bleeding
Do you not see the hypocrisy in your statement? AMD wants a slice of the AI cake because that's pretty much the only thing Vega is actually good at. Vega is only disappointing for 3D purposes (like gaming and workstations), but it's good for everyone else. So when it comes to AI, Vega did not fail to deliver: it is competitive, with plenty of performance, and the only market-related struggle is its incompatibility with CUDA and tensor-core-specific software. I understand gaming is the #1 focus on Guru3D, but seriously, sometimes people are so blinded by gaming results that they treat them as the only thing that dictates actual performance. If that were the case, architectures like ARM and PPC shouldn't exist.
schmidtbag:

Do you not see the hypocrisy in your statement? AMD wants a slice of the AI cake because that's pretty much the only thing Vega is actually good at. Vega is only disappointing for 3D purposes (like gaming and workstations), but it's good for everyone else. So when it comes to AI, Vega did not fail to deliver: it is competitive, with plenty of performance, and the only market-related struggle is its incompatibility with CUDA and tensor-core-specific software. I understand gaming is the #1 focus on Guru3D, but seriously, sometimes people are so blinded by gaming results that they treat them as the only thing that dictates actual performance. If that were the case, architectures like ARM and PPC shouldn't exist.
WHAT? I talked about us consumers; as you said, the vast majority of users here are into gaming/workstation use, not AI. I don't care about Vega finding another market to shine in; I care about having competition and advancement in the gaming market.
HardwareCaps:

WHAT? I talked about us consumers; as you said, the vast majority of users here are into gaming/workstation use, not AI.
Right... and do you not realize this 32 GB Vega has nothing to do with us consumers?