AMD Radeon Vega Frontier Edition vs Nvidia Titan Xp

AMD's Radeon Vega Frontier Edition has a 300 Watt TDP and is a workstation card priced at 1000+ EUR. On the other side, if we compare it with the GTX 1080 Ti, which has a 250 Watt TDP, that card is faster than the Vega FE and costs around 750 EUR at the moment. Who will buy this card when they can buy the much cheaper GTX 1080 Ti, which is faster than the Vega FE too?!
I think I am missing something here. Or? AMD's OpenGL performance is as bad as always due to the driver, plus Cinebench is really not a reliable OpenGL performance benchmark:[spoiler]http://www.fudzilla.com/images/stories/2017/Reviews/GeForce_GTX_1080_Ti/cinebench-r15-gtx-970-1070-1080-1080-ti.png[/spoiler] As for SPECviewperf... for anyone who went into their DB to compare it with AMD's previous best pro-grade cards, those 3 results are: +23% performance; +58% performance; +72.5% performance. I think this information has value as long as one does not try to compare it with nVidia's cards, as each architecture has different strengths and weaknesses. But comparing it to AMD's last generation can tell us that there are improvements, and can give approximate ranges.
As for SPECviewperf... for anyone who went into their DB to compare it with AMD's previous best pro-grade cards, those 3 results are: +23% performance; +58% performance; +72.5% performance. I think this information has value as long as one does not try to compare it with nVidia's cards, as each architecture has different strengths and weaknesses. But comparing it to AMD's last generation can tell us that there are improvements, and can give approximate ranges.
With the limited scope of benchmarks available, that's how it should be done, at least if we want to predict the new GPU's performance. It's AMD itself who deliberately adds to the confusion by benching Vega in pro apps against an NV gaming card running NV gaming drivers.
Out of curiosity I just ran the Cinebench OpenGL benchmark myself and to be honest, this benchmark is such bull****, I keep getting inconsistent results... The lowest result I've had is 86fps and the highest is 114fps. I ran it 9 times: 5 times I reran the benchmark while the program was open, the first result was 107fps, and each successive test resulted in lower fps. Then I closed the program and reopened it for the next 4 tests and got 113fps, then 114fps, then 110fps and then 114fps again... https://puu.sh/wuQqn/004f4df8ec.png Obviously I'm not using a pro/workstation card, but even so these results vary way too much to be worthwhile/credible/indicative of the actual performance of the card...
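For what it's worth, a quick Python sketch of how spread out those runs are (using only the FPS values explicitly quoted in the post above; two of the nine runs aren't listed) backs up just how inconsistent they look:
[code]
# Spread of the Cinebench OpenGL runs quoted above (only the values
# explicitly listed in the post; two of the nine runs are not given).
from statistics import mean, stdev

fps_runs = [107, 86, 113, 114, 110, 114]

avg = mean(fps_runs)
sd = stdev(fps_runs)
cv = sd / avg * 100  # coefficient of variation, in percent

print(f"min {min(fps_runs)} fps, max {max(fps_runs)} fps")
print(f"mean {avg:.1f} fps, stdev {sd:.1f} fps, CV {cv:.1f}%")
[/code]
A coefficient of variation around 10% on the same card and settings is far too much noise to draw conclusions from a single published number.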
Try setting power management to High Performance before any benchmarks. This one in particular doesn't use that much CPU while running, which makes the aggressive Intel clock-down kick in while the bench is in progress, resulting in inconsistencies. That said, this is my result: http://dl.wavetrex.eu/2017/cbr15ogl.png i7-6800K @ 4.1GHz, High-Perf power plan, Zotac GTX 1080 AMP @ BIOS factory OC (which makes the results in the article complete bs...)
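If you script your benchmark runs, switching the power plan first can be automated as well; a minimal Python sketch using the built-in Windows powercfg tool (Windows-only, may need an elevated prompt) could look like this:
[code]
# Minimal sketch: switch Windows to the High Performance power plan
# before a benchmark run, then print the active scheme to confirm.
# Uses the built-in powercfg tool; may require an elevated prompt.
import subprocess

# SCHEME_MIN is the built-in alias for the High Performance plan.
subprocess.run(["powercfg", "/setactive", "SCHEME_MIN"], check=True)
subprocess.run(["powercfg", "/getactivescheme"], check=True)
[/code]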
You expected AMD to leapfrog Nvidia? A year later they can win over the 1080 and that's good, considering the new tech developed for the card. She's still late af though.
I hoped it would, like we all did. AMD not being able to compete doesn't drive NV to push even better products or drive prices down. A 300W card that's only faster than a 180W card. You know it's not going to be cheap either, so we really need real competition.
The point is that you now have a choice: 1080 Ti or Vega with similar performance, so just buy the cheaper one. I have a 1080 Ti I'd swap for Vega if it had more VRAM though. I used 10GB in ROTR today. After 12 solid hours of DiRT 4 without a break I strangely also had about 10GB of VRAM used, but I think the unpatched game had a leak. Looks like AMD went with 8GB... urgh, I can already think of 8 games out now that I could overfill that with.
Being able to fill the memory and actually getting more performance from additional memory are two very different things. Take GTA5 for example, in regards to PC RAM (yes, I know it's not GPU RAM): it can fill up close to 10GB of RAM, and yet can do fine with under 4GB. https://mygaming.co.za/news/wp-content/uploads/2016/06/GTA-V-4K.jpg
Being able to fill the memory and actually getting more performance from additional memory are two very different things.
This. Beat me to it.
Doesn't say what driver version they're using. A big thing to keep in mind is that the Nvidia product is mature and so are the drivers. Vega is a heavily updated architecture with new features; it may take time for drivers to fully utilise its potential.
Well, the first working silicon was shown 6 months ago if I recall correctly; that's plenty of time (new GPUs are released almost every year), and AFAIK Vega is still based on the GCN ISA.
Being able to fill the memory and actually getting more performance from additional memory are two very different things. Take GTA5 for example, in regards to PC RAM (yes, I know it's not GPU RAM): it can fill up close to 10GB of RAM, and yet can do fine with under 4GB.
That's system RAM, not VRAM. Not really comparable. There are quite a few new games that have high VRAM requirements at max settings. Not enough VRAM or a poor streaming engine for textures will cause lots of stuttering/frame drops. Black Ops 3 comes to mind: on a 6GB 980 Ti the fps fluctuates constantly between ~80-142 fps due to paging out VRAM. With a 1080 Ti, usage sits at 10GB with zero FPS drops. Other games like the BF series stream in game assets better, with noticeably better draw distances for objects and textures. In other games where you can disable texture streaming, such as UE3/UE4 games, disabling it provides smoother gameplay and fewer visual streaming annoyances. VRAM usage in KF2 for example hits 8GB.
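If anyone wants to watch this on their own card, a rough Python sketch that polls VRAM usage once per second via nvidia-smi (assuming an Nvidia card with nvidia-smi on the PATH) looks like this; keep in mind it shows how much memory is allocated, not how much the game actually needs:
[code]
# Poll VRAM usage once per second via nvidia-smi.
# Assumes an Nvidia GPU and nvidia-smi available on the PATH;
# reported "used" memory is what is allocated, not what is required.
import subprocess
import time

while True:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    used, total = out.stdout.strip().splitlines()[0].split(", ")
    print(f"VRAM used: {used} MiB / {total} MiB")
    time.sleep(1)
[/code]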
Another hype bites the dust.
That's system RAM, not VRAM. Not really comparable.
It's exactly comparable, as they are used the same way: as RAM. I never said that additional memory would never bring performance improvements, which seems to be what you took out of what I stated. I simply stated that just because a game CAN fill up the space does not automatically mean that a smaller amount of RAM would bring a performance loss. No matter which way you want to spin that, it's true and a fact.
It's exactly comparable, as they are used the same way: as RAM. I never said that additional memory would never bring performance improvements, which seems to be what you took out of what I stated. I simply stated that just because a game CAN fill up the space does not automatically mean that a smaller amount of RAM would bring a performance loss. No matter which way you want to spin that, it's true and a fact.
You just said the polar opposite. In my example less VRAM brings a performance loss. I'm not making a blanket statement by saying all games, but it applies to quite a few new games as well. What I described does not give extra performance but ensures there is no loss of performance. System RAM loads game assets, not textures. That's the difference.
At this point I don't understand how Vega can't beat GP102, which was released a year ago. Professional Volta cards will show up in Q3-Q4 and Vega is still "coming".
So wait, they put a workstation card against a gaming card and it won in workstation tests and lost in gaming tests. okaaaayy.
So the Titan XP is a gaming card? okaaaayy.:banana:
What I would like to know is how fast and accurately it would crunch a SetiAtHome task, and does it have full double precision for MilkywayAtHome tasks?
What I would like to know is how fast and accurately it would crunch a SetiAtHome task, and does it have full double precision for MilkywayAtHome tasks?
Note that the upcoming AMD Vega GPUs do not support native double-precision floating point and Error Correcting Memory, which is a deal breaker for most (but not all) HPC workloads.
https://www.forbes.com/sites/moorinsights/2017/06/26/isc17-hpc-embraces-diversity-as-amd-arm-up-the-ante-vs-intel-ibm-and-nvidia/2/#118e62ad414a
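For anyone who wants to check this themselves once a card is in hand, here is a minimal Python sketch (assuming the pyopencl package and a working OpenCL runtime; MilkywayAtHome's GPU tasks lean heavily on double precision) that lists each device and whether it advertises the cl_khr_fp64 extension:
[code]
# List OpenCL devices and whether they advertise native double precision.
# Assumes the pyopencl package and a working OpenCL runtime are installed.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        has_fp64 = "cl_khr_fp64" in device.extensions
        print(f"{platform.name} / {device.name}: "
              f"FP64 {'supported' if has_fp64 else 'not supported'}")
[/code]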
In my example less VRAM brings a performance loss. I'm not making a blanket statement by saying all games, but it applies to quite a few new games as well. What I described does not give extra performance but ensures there is no loss of performance. System RAM loads game assets, not textures. That's the difference.
Technically, you're mostly right, but there are exceptions. DX12, Vulkan, and I think DX11 are all capable of accessing system RAM for VRAM purposes if necessary. There's also stuff like HSA, though that tends to be targeted more toward IGPs than dGPUs. Generally though, 4GB is enough for most 1080p gamers. I once played Paragon at 1080p with max detail, and despite the game running smoothly, it kept crashing due to VRAM errors. So, I turned down the texture settings and the game looked exactly the same, and it didn't crash anymore. I'm guessing the extra texture detail was for those who have 2K or 4K displays. That being said, I figure 8GB is sufficient for 2K gamers and 12GB is sufficient for 4K. EDIT: I should remind myself too that my GPU (to my knowledge) doesn't compress data like Nvidia and AMD cards newer than mine, which suggests 4GB is even more doable for 1080p on better models than mine. As a bit of trivia, in Linux you can actually use your GPU's VRAM as regular system RAM, albeit much slower (but it sure as hell beats writing to disk!).
Seriously, Cinebench? With those numbers I'm glad I got a 1080 eight months ago!
So Microsoft backpedaled and officially supports Ryzen on Win 8? What board was used? Asus does not offer any AM4 boards with Win 8 drivers. All I can say is it's odd. Why wouldn't they test it on Win 10? EDIT: the Cinebench run was on Win 8.