Radeon RX Vega Confirmed Launching at SIGGRAPH

Well, let's hope they polish the drivers before release.
LOL, a year too late and they're still trying to spin it as a superior gaming product. Sorry, but they lost this round a year ago. HBM2 was the wrong bet.
I hope this Vega is a success (I doubt it). Right now I'm on a 1440p monitor with a 970 ITX, so I need to upgrade ASAP. The reason I haven't upgraded is that I don't really have much time to play (I play/finish one game every 2-3 months). I'm also waiting for 4K 120 Hz monitors that can handle variable refresh rate (like VESA Adaptive-Sync) and good 1080p scaling for the heaviest games. That's why I'm not grabbing a 1080 Ti: I'm not getting a G-Sync monitor in the future, and I don't need Ti-level performance for 1440p today.
HBM was the worst thing AMD thought to bring to consumers. I hope the drivers can pull off a 1080 Ti miracle or they're ****ed.
HBM was the worst thing AMD thought to bring to consumers. I hope the drivers can pull off a 1080 Ti miracle or they're ****ed.
Funny how many consumers have been begging for it.
LOL, a year too late and they're still trying to spin it as a superior gaming product. Sorry, but they lost this round a year ago. HBM2 was the wrong bet.
Thanks for the informative input, technical expert of GPUs... LOL
Thanks for the informative input, technical expert of GPUs... LOL
The tech is certainly good and holds a number of advantages, although my own knowledge of it is admittedly not great either. Availability and price are a concern, though, and that could hamper things even now that the GPU is finally getting released. It will be interesting to see how this product matures after release. Going by the "work" Vega released earlier as the Frontier Edition, the drivers at least could have done with some additional polish, but with these models being aimed more at gaming, and probably cutting those mode switches and whatnot, they might end up in better shape. Overall performance in games might not differ much, but having the GPU driver not crash is a win too, ha ha. 😀 (Granted, while there were several such listed known issues, they were mostly situational or specific to certain conditions.)
LOL, a year too late and they're still trying to spin it as a superior gaming product. Sorry, but they lost this round a year ago. HBM2 was the wrong bet.
Agreed, this should have been out before Christmas. I'm not sure who is willing to buy a product that sits between the year-old 1070 and 1080, if we go by the FE results; the price difference between the two is not even that high anymore, unless they go and undercut the GTX 1070 price... HBM2 sounded lovely back when we still had plain GDDR5, but now we've got GDDR5X hitting 12 Gbps on some models, so the bandwidth gap is tiny.
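To put rough numbers on that bandwidth gap, here's a quick back-of-the-envelope sketch in Python. The bus widths and per-pin rates are the commonly published figures; the RX Vega entry assumes the Frontier Edition's 2048-bit HBM2 at roughly 1.89 Gbps, since the gaming cards' final memory clocks aren't confirmed yet.

```python
# Peak memory bandwidth (GB/s) ~= bus width (bits) / 8 * per-pin data rate (Gbps)
def peak_bw_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

cards = {
    "GTX 1080 (256-bit GDDR5X @ 10 Gbps)":    (256, 10.0),
    "GTX 1080 Ti (352-bit GDDR5X @ 11 Gbps)": (352, 11.0),
    "RX Vega (2048-bit HBM2 @ ~1.89 Gbps)":   (2048, 1.89),  # assumed from Vega FE
}

for name, (bits, rate) in cards.items():
    print(f"{name}: ~{peak_bw_gbs(bits, rate):.0f} GB/s")
```

Against a plain 1080 the gap is still real (~484 vs ~320 GB/s), but against a 1080 Ti it is essentially zero.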
Well, it would have been too much if AMD had also hit the jackpot in the GPU department; they certainly did with their CPUs. As said, HBM2 was the wrong bet at this point in time. It's the future, sure, but not yet.
Won't the HBM2 make this card a miner's dream? Also, if it's priced purely on MH/s vs. cost vs. power, won't it be out of stock consistently? If so, it's a success from a sales standpoint regardless of its gaming application.
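For anyone wondering what "priced purely on MH/s vs. cost vs. power" looks like in practice, here is a toy profitability sketch in Python. Every number below (hashrate, payout per MH/s, power draw, electricity price) is a placeholder assumption for illustration, not a Vega figure.

```python
# Toy mining profitability: daily hashing revenue minus daily electricity cost.
def daily_profit_usd(hashrate_mhs, usd_per_mhs_per_day, power_watts, usd_per_kwh):
    revenue = hashrate_mhs * usd_per_mhs_per_day           # $/day from hashing
    electricity = power_watts / 1000 * 24 * usd_per_kwh    # $/day for power
    return revenue - electricity

# Placeholder inputs purely for illustration:
profit = daily_profit_usd(hashrate_mhs=35, usd_per_mhs_per_day=0.10,
                          power_watts=250, usd_per_kwh=0.12)
print(f"~${profit:.2f}/day")   # ~$2.78/day with these made-up numbers
```

Divide the card's street price by that daily figure and you get the payback period miners actually shop on, which is why stock tends to evaporate whenever the ratio looks good.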
Well, no one knows the performance of these GPUs, so IDK what you people are talking about... Also, who cares if it doesn't beat the overpriced Nvidia stuff? I wouldn't call the 480/580 GPUs a flop.
Thanks for the informative input, technical expert of GPUs... LOL
I'll give a real one. There are a few things people are overlooking in their analysis of the current GPU landscape that I will point out.

First, there are only two current high-end GPUs with the features that GK110 and Tahiti PRO had, which are the features required by many of the HPC applications GPUs are used for. GP100 and V100 are the ONLY GPUs from AMD or Nvidia with full ECC support, meaning ECC from the L1 cache and registers all the way to memory. Vega, GP102 and GP104 all lack this feature. What's interesting is that Nvidia used to sell its most complex GPU with these features disabled as a GeForce (780 and 780 Ti), but it was otherwise an identical GPU. AMD also used to support full ECC in first-generation GCN and sold those as Radeons (the 7970, I think). All later GCN architectures (to my knowledge) have not had ECC for the L1 or other on-GPU structures, only for the RAM. This is true of Vega as well. If AMD is going to compete with GP100 or V100 in real HPC, and not just machine learning, they will need a chip with full ECC. There are applications that require full ECC, and currently the only options are V100, GP100, specialized accelerators like PEZY, or CPUs. No current GPUs except GP100 and V100 have real double-precision support either. So if you need double precision, your only options are GP100 or V100, or something very outdated like GK210, GK110 or first-generation GCN for DP with full ECC, or later GCNs for DP with partial ECC.

This current Vega is basically somewhere in between GP100 and GP104, which makes it kind of a weird chip. It's a relatively small FP32- and FP16-focused GPU without full ECC (but with the capability of ECC RAM). It has HBM2 and a focus on single and half precision for deep learning applications, like GP100 and V100, yet it lacks the full ECC support it would need to be a true HPC chip. What it looks like to me is that AMD knew they couldn't make huge chips like GP100 or V100; both of those are pushing the limits of current lithography, and they do EVERYTHING well. Let me elaborate: having FP64, FP32, FP16, tensor cores and full ECC means they can handle mission-critical pre-exascale HPC simulations along with machine learning and everything in between, including the newest simulation models that require AI to interpret the results afterward. Vega seems purpose-built for large, scalable AI and machine learning applications, although it lacks tensor cores. We will have to see how much advantage those give V100.

However, this doesn't make Vega useless like a lot of naysayers are claiming. Given the price of V100 at $13,000, even if the tensor cores are great, you can buy a few MI25s for that money. I would expect to see Vega take market share from the Tesla P40 in MI25 form, and from the Quadro P6000 and P5000 if they make a workstation Vega with certifications and ECC. If it ends up being cheaper than those, and it likely will, then it can make huge inroads into machine learning installations. That will be especially true when coupled with Epyc, since they both talk over Infinity Fabric.

As for Radeon Vegas, if they manage to optimize their software, the HBM should keep all the cores fed and it should compete well with GP102; in theory it should outperform it. They may need to revise certain aspects of the architecture for the next iteration if it's not making use of its theoretical FLOPS for some reason. AMD will have to make a bigger chip with the same TDP if they want to compete with Nvidia's current big chips. GP102 does not fit into that category, and I have pointed out before that GM200 was not really the successor to GK110 but a totally new place in Nvidia's lineup, which has become the 102 chip. AMD stopped making big GPUs at the same time, with Tahiti PRO.

What about games with current Vega? I think it's premature to say it's a bad or outdated architecture. If they allow the CPU and GPU to talk to each other over Infinity Fabric and optimize that process, along with good driver support that actually makes use of all the FLOPS and memory bandwidth from the HBM2, it should be very competitive.
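To put the "theoretical FLOPS" point in numbers, here's a minimal sketch (Python) of how peak throughput is usually estimated: shader count × clock × 2 FLOPs per FMA, doubled again for Vega's packed FP16. The clocks are approximate boost figures from announced specs, not measured gaming clocks.

```python
# Peak throughput ~= shader count * clock (GHz) * 2 FLOPs per FMA, in TFLOPS.
def peak_tflops(shaders, clock_ghz, flops_per_cycle=2):
    return shaders * clock_ghz * flops_per_cycle / 1000

vega10_fp32 = peak_tflops(4096, 1.50)   # ~12.3 TFLOPS (4096 SPs, ~1.5 GHz assumed)
vega10_fp16 = vega10_fp32 * 2           # packed FP16 runs at 2x the FP32 rate on Vega
gp102_fp32  = peak_tflops(3584, 1.58)   # ~11.3 TFLOPS (GTX 1080 Ti, ~1.58 GHz boost)

print(f"Vega 10: ~{vega10_fp32:.1f} TFLOPS FP32, ~{vega10_fp16:.1f} TFLOPS FP16")
print(f"GP102:   ~{gp102_fp32:.1f} TFLOPS FP32")
```

On paper that backs up the "should outperform GP102" argument; whether the drivers and game workloads actually extract those FLOPS is the open question.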
AMD Plays Catch-Up in Deep Learning with New GPUs and Open Source Strategy (June 28, 2017):

While all of these GPUs are focused on the same application set, they cut across multiple architectures. The MI25 is built on the new "Vega" architecture, while the MI8 and MI6 are based on the older "Fiji" and "Polaris" platforms, respectively. The top-of-the-line MI25 is built for large-scale training and inferencing applications, while the MI8 and MI6 devices are geared mostly for inferencing. AMD says they are also suitable for HPC workloads, but the lower precision limits the application set principally to some seismic and genomics codes. According to an unnamed source manning the AMD booth at ISC, they are planning to deliver 64-bit-capable Radeon GPUs in the next go-around, presumably to serve a broader array of HPC applications.

For comparison's sake, NVIDIA's P100 delivers 21.2 teraflops of FP16 and 10.6 teraflops of FP32. So from a raw flops perspective, the new MI25 compares rather favorably. However, once NVIDIA starts shipping the Volta-class V100 GPU later this year, its 120 teraflops delivered by the new Tensor Cores will blow that comparison out of the water.

A major difference is that AMD is apparently building specialized accelerators for deep learning inference and training, as well as HPC applications, while NVIDIA has abandoned this approach with the Volta generation. The V100 is an all-in-one device that can be used across these three application buckets. It remains to be seen which approach will be preferred by users.

The bigger difference is on the software side of GPU computing. AMD says it plans to keep everything in its deep learning/HPC stack open source. That starts with the Radeon Open Compute platform, aka ROCm. It includes things such as GPU drivers, C/C++ compilers for heterogeneous computing, and the HIP CUDA conversion tool. OpenCL and Python are also supported. New to ROCm is MIOpen, a GPU-accelerated library that encompasses a broad array of deep learning functions. AMD plans to add support for Caffe, TensorFlow and Torch in the near future. Although everything here is open source, the breadth of support and functionality is a fraction of what is currently available to CUDA users. As a consequence, the chipmaker has its work cut out for it to capture deep learning customers.
https://www.top500.org/news/amd-plays-catch-up-in-deep-learning-with-open-source-strategy/
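For context on where that "120 teraflops" Volta figure comes from, here is the usual derivation sketched in Python: V100 has 640 Tensor Cores, each performing a 4x4x4 FP16 matrix multiply-accumulate (64 FMAs = 128 FLOPs) per clock. The ~1.46 GHz clock is an assumption based on the announced boost spec.

```python
# Tensor-core peak ~= number of tensor cores * FLOPs per core per clock * clock (GHz).
def tensor_tflops(tensor_cores, flops_per_core_per_clock, clock_ghz):
    return tensor_cores * flops_per_core_per_clock * clock_ghz / 1000

# V100: 640 tensor cores, each a 4x4x4 FMA per clock = 64 FMAs = 128 FLOPs.
print(f"~{tensor_tflops(640, 128, 1.46):.0f} TFLOPS")   # ~120 TFLOPS FP16 via tensor cores
```

That peak only applies to matrix-multiply-heavy work like training dense neural nets, which is exactly why the article frames it as a deep learning comparison rather than a general HPC one.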
Hah, that's pretty much what I said! The article says that someone from AMD said they'll be doing a GPU with FP64, hmm? That's interesting news to me, and it implies that this iteration of Vega is their medium-sized GPU, not their largest. If they really plan to take on mission-critical workloads and compete with V100 or its successor, they will need lower TDPs and full ECC. It says "Radeon GPUs", not APUs specifically. I wonder if that's accurate, or if the anonymous source is referencing AMD's hypothetical exascale APU.
Yeah, what were they thinking with HBM? And traveling by aeroplane, nonsense; I'll stick to trains and boats. When is Volta? Six months after RX Vega crushes the GTX 1080 Ti.
I understand that Vega supports AMD's Infinity Fabric. What does that mean, exactly? I'm hearing it would allow multi-GPU setups to act as one card, with scalability like their CPUs. Could somebody clear this up? It seems like something nobody is talking about much.
I understand that Vega supports AMD's Infinity Fabric. What does that mean, exactly? I'm hearing it would allow multi-GPU setups to act as one card, with scalability like their CPUs. Could somebody clear this up? It seems like something nobody is talking about much.
Infinity Fabric would work like NVLink; I've been talking about it. Vega also has a built-in NVMe controller, so it can directly address a 2 TB SSD for video editing or a burst-buffer type setup.
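A rough way to see why even a fast coherent link doesn't simply make two GPUs behave as one card: compare the link bandwidth to what each GPU gets from its own local memory. The PCIe and NVLink numbers below are rounded published figures; the bandwidth of a hypothetical GPU-to-GPU Infinity Fabric link isn't public, so it's left out of the sketch.

```python
# Approximate per-direction bandwidths in GB/s (published figures, rounded).
local_hbm2 = 484                        # Vega 10's own HBM2
links = {
    "PCIe 3.0 x16": 16,                 # what CrossFire traffic rides over today
    "NVLink 1.0 (P100, 4 links)": 80,   # the class of link Infinity Fabric is compared to
}

for name, bw in links.items():
    print(f"{name}: {bw} GB/s -> remote memory ~{local_hbm2 / bw:.0f}x slower "
          f"than local HBM2 ({local_hbm2} GB/s)")
```

So a coherent fabric helps GPUs share data and synchronize, but software still has to keep most accesses local before two GPUs scale like one.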
Agreed, this should have been out before Christmas. I'm not sure who is willing to buy a product that sits between the year-old 1070 and 1080, if we go by the FE results; the price difference between the two is not even that high anymore, unless they go and undercut the GTX 1070 price... HBM2 sounded lovely back when we still had plain GDDR5, but now we've got GDDR5X hitting 12 Gbps on some models, so the bandwidth gap is tiny.
Dunno, someone who's not willing to pay the price of a 1070, which atm sits priced way too high for its performance target?
Doomsday for nVidia confirmed. Vega will dominate all!
HBM was the worst thing AMD thought to bring to consumers.
It's cool that you think better technology is a bad thing.