AMD Vega To get 20K Units Released at launch and new Zen+ chatter


data/avatar/default/avatar12.webp
The same happened during the Fury X launch, IIRC, so it's not surprising. Given the anticipation and the hype surrounding this launch, I expect all Vega units to be sold out or booked within six hours of launch/pre-order. I guess reviewers may need to share review samples like they did during the Fury X launch.
https://forums.guru3d.com/data/avatars/m/248/248627.jpg
Looks like I'll have to start saving for the refresh; hopefully memory prices will have come down by then and compatibility will be better. As for Vega, I expect the launch to be a lot like the Fury X one was.
https://forums.guru3d.com/data/avatars/m/164/164033.jpg
Looks like I'll have to start saving for the refresh; hopefully memory prices will have come down by then and compatibility will be better. As for Vega, I expect the launch to be a lot like the Fury X one was.
I just might upgrade my 1800X come the Ryzen refresh XD, just because the platform supports it and I can. If they get +15% IPC and higher clocks, for sure.
https://forums.guru3d.com/data/avatars/m/248/248994.jpg
I don't understand why HBM2 would be such a bottleneck. It's not new technology anymore, per se; it's just a development of the original HBM. Didn't they learn anything from the first one? If this is indeed true, it's far too easy to say that Nvidia once again made a much more sensible decision by using GDDR5X, which apparently has no supply problems despite being nothing but a further development of GDDR5, just like HBM -> HBM2.
data/avatar/default/avatar26.webp
Can anyone explain why AMD insists on HBM? Is, say, 512-bit GDDR5X not enough? Fury did not seem to benefit much from HBM.
https://forums.guru3d.com/data/avatars/m/128/128096.jpg
Can anyone explain why AMD insists on HBM? Is, say, 512-bit GDDR5X not enough? Fury did not seem to benefit much from HBM.
I have a feeling it slots into their long-term heterogeneous computing concept somewhere down the line, and for now it's the introduction phase on the high end, which can bear the costs in the first place. The question is whether AMD will last that long...
https://forums.guru3d.com/data/avatars/m/259/259067.jpg
Can anyone explain why AMD insists on HBM? Is, say, 512-bit GDDR5X not enough? Fury did not seem to benefit much from HBM.
Fury X's problem was its low ROP count (only 64), not the 4GB of HBM. HBM reads/refreshes data faster than GDDR5/X. Vega again needs a lot of ROPs, over 96-128, and it needs the high-bandwidth memory (and obviously the chip needs plenty of power).
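For a rough sense of the bandwidth numbers being argued about here, peak theoretical memory bandwidth is just bus width times data rate. A back-of-the-envelope sketch in Python; the bus widths and data rates below are illustrative spec-sheet assumptions, not measurements:

```python
# Peak theoretical memory bandwidth: bus_width_bits / 8 * data_rate_gbps -> GB/s.
# Figures below are illustrative spec-sheet numbers, not benchmarks.
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak theoretical bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

configs = {
    "Fury X HBM (4096-bit @ 1.0 Gbps)":      (4096, 1.0),
    "GTX 1080 GDDR5X (256-bit @ 10 Gbps)":   (256, 10.0),
    "Hypothetical 512-bit GDDR5X @ 10 Gbps": (512, 10.0),
}

for name, (width, rate) in configs.items():
    print(f"{name}: {bandwidth_gbs(width, rate):.0f} GB/s")
```

By this math a hypothetical 512-bit GDDR5X bus would actually out-run Fury X's HBM on raw bandwidth (640 vs. 512 GB/s), which suggests HBM's real selling points are power draw and package/board area rather than peak numbers alone.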
https://forums.guru3d.com/data/avatars/m/115/115462.jpg
I hope this is not true, because if it is, then 20K is basically nothing...
https://forums.guru3d.com/data/avatars/m/262/262564.jpg
I hope this is not true, because if it is, then 20K is basically nothing...
You've gotta start somewhere when you innovate. Without AMD taking risks, NVIDIA and Intel would still be bending everyone over. If they can deliver 1080 Ti performance without a water cooler for less than $650, I'll pre-order one of the first 20K. I don't even need it; there are no games that take advantage of it now, but I'm tired of the NVIDIA/Intel monopoly, the inflated prices, and the built-in obsolescence. Their greed has stifled innovation for years.
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
You've gotta start somewhere when you innovate. Without AMD taking risks, NVIDIA and Intel would still be bending everyone over. If they can deliver 1080 Ti performance without a water cooler for less than $650, I'll pre-order one of the first 20K. I don't even need it; there are no games that take advantage of it now, but I'm tired of the NVIDIA/Intel monopoly, the inflated prices, and the built-in obsolescence. Their greed has stifled innovation for years.
What exactly is AMD doing that's so innovative with Vega? And how has Nvidia stifled innovation for years?
https://forums.guru3d.com/data/avatars/m/156/156133.jpg
Moderator
This is cool and all, but I just want an AM4 APU.
https://forums.guru3d.com/data/avatars/m/260/260855.jpg
What exactly is AMD doing that's so innovative with Vega? And how has Nvidia stifled innovation for years?
Right? They've had the top-performing products for years now, but somehow that's stifling innovation? Meanwhile, AMD has only managed to tread water in a couple of specific product lines. But somehow AMD are the innovators? :wanker: Zen is pretty great, though. It seems like they're pushing Intel to step up its game. I just wish they could do the same on the GPU side.
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
Well, AMD does innovate; I just don't consider HBM2 "innovative" at this point. For starters, Nvidia has been shipping cards with it for six months. Putting it on consumer cards isn't innovative; it's arguably just a bad decision, but one AMD doesn't seem to have a choice in. None of the other innovative things on Vega would limit supply, so I'm not sure what the point of his first sentence is.

The idea that Nvidia is stifling innovation is just insane. Their innovation might be self-serving, but it pushes the entire industry forward. FreeSync is awesome, but AMD had no plan for it until G-Sync came. AMD's open software library is a response to GameWorks. Their ShadowPlay equivalent (I forget the name) is a response to ShadowPlay. The Vega cache controller is nice and might be faster, but Pascal already supports unified memory through CUDA. Nvidia already has packed math. They already have tiled rasterization. Then you have all the stuff they push with virtualization, AI, deep learning, ray tracing, etc. Granted, AMD has efforts in all of this too, but Nvidia is still developing things there and pushing the ball forward. Idk which company has more "impact" in terms of innovation, but saying that Nvidia stifles it is ridiculous.
https://forums.guru3d.com/data/avatars/m/248/248994.jpg
Fury X's problem was its low ROP count (only 64), not the 4GB of HBM. HBM reads/refreshes data faster than GDDR5/X. Vega again needs a lot of ROPs, over 96-128, and it needs the high-bandwidth memory (and obviously the chip needs plenty of power).
The 480 has half the ROPs of my card, but it seems to be doing just fine, regularly beating the 390. That being said, I actually do think it could use more.
data/avatar/default/avatar30.webp
Well, AMD does innovate; I just don't consider HBM2 "innovative" at this point. For starters, Nvidia has been shipping cards with it for six months. Putting it on consumer cards isn't innovative; it's arguably just a bad decision, but one AMD doesn't seem to have a choice in. None of the other innovative things on Vega would limit supply, so I'm not sure what the point of his first sentence is.

The idea that Nvidia is stifling innovation is just insane. Their innovation might be self-serving, but it pushes the entire industry forward. FreeSync is awesome, but AMD had no plan for it until G-Sync came. AMD's open software library is a response to GameWorks. Their ShadowPlay equivalent (I forget the name) is a response to ShadowPlay. The Vega cache controller is nice and might be faster, but Pascal already supports unified memory through CUDA. Nvidia already has packed math. They already have tiled rasterization. Then you have all the stuff they push with virtualization, AI, deep learning, ray tracing, etc. Granted, AMD has efforts in all of this too, but Nvidia is still developing things there and pushing the ball forward. Idk which company has more "impact" in terms of innovation, but saying that Nvidia stifles it is ridiculous.
Saying that Nvidia stifles innovation might seem like a bit much, but when all the things you list are put behind an inflated price tag, it doesn't sound that far-fetched. Another thing that might contribute to that uneasiness about Nvidia is the fact that they lagged behind in async, DX12, and Vulkan while being the "bigger" company, instead (ab)using that power to manually tune drivers for each game (which works, but is not innovative at all). I don't hate Nvidia, but I personally dislike it when they put effort into software or "secondary" things as a way to "lock" gamers/academia/etc. into their not-cheap-at-all hardware, instead of focusing on the GPU itself.
data/avatar/default/avatar15.webp
In low-shader but high-geometry loads, more ROPs are better for pushing raw pixels to the screen, especially at 1080p or 1440p. If we compare just Fury X vs. the 980 Ti, you can see those trends: in shader-heavy games Fury takes the lead; otherwise the 980 Ti kills it in pure pixel pushing. At 4K, if you lower shader performance or features, ROPs plus high frequency will help a lot. I have yet to see a lower-ROP card beat out a higher-ROP one (of the same performance segment) at the same resolution unless the game is shader (compute) heavy.
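The ROP comparison above can be sketched numerically: peak pixel fill rate is just ROP count times core clock. A minimal Python illustration, assuming approximate reference clocks for the two cards (illustrative figures, not measured throughput):

```python
# Peak pixel fill rate = ROP count * core clock (GHz) -> Gpixels/s.
# Clock figures are approximate reference clocks, for illustration only.
def fill_rate_gpix(rops, clock_ghz):
    """Theoretical peak pixel fill rate in Gpixels/s."""
    return rops * clock_ghz

cards = {
    "Fury X (64 ROPs @ ~1.05 GHz)": (64, 1.05),
    "980 Ti (96 ROPs @ ~1.0 GHz)":  (96, 1.0),
}
for name, (rops, clk) in cards.items():
    print(f"{name}: {fill_rate_gpix(rops, clk):.1f} Gpixels/s")
```

On this rough math the 980 Ti has roughly 40% more raw pixel throughput, which fits the observation that it pulls ahead whenever a game is fill-rate-bound rather than shader-bound.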
data/avatar/default/avatar18.webp
AMD is betting on the long term. HBM is the future because of packaging and power savings; the ability to have shareable memory on the die is huge. Imagine 4+ GPUs on one card. With Vulkan and DirectX 12 handling mGPU much better, AMD could have an advantage down the road. That is how they're innovating. The current Nvidia iteration (1080) isn't much different than Fermi (GTX 480) if you think about it.
https://forums.guru3d.com/data/avatars/m/262/262564.jpg
Saying that Nvidia stifles innovation might seem like a bit much, but when all the things you list are put behind an inflated price tag, it doesn't sound that far-fetched. I don't hate Nvidia, but I personally dislike it when they put effort into software or "secondary" things as a way to "lock" gamers/academia/etc. into their not-cheap-at-all hardware, instead of focusing on the GPU itself.
I definitely could have worded my post better, but... this. Specifically, gimping GPUs to an extreme extent, IMO to maximize profits due to a monopoly, keeps the lowest common denominator very low, which in turn prevents developers from targeting a more capable baseline. The same goes for Intel and CPU cores. To say that their monopolies stifle innovation does not necessarily mean they themselves are not innovative.