AMD Radeon Pro Duo Launches April 26th

Why would they still bother with this when two months later we get Polaris which will smash the old arch anyways..

> Why would they still bother with this when two months later we get Polaris which will smash the old arch anyways..

Hopefully it smashes the old arch.

Core clocks up to 1000MHz, I think, so it's basically 2x Nanos for $1,500. Having a giraffe. :P

I thought the Polaris (and Pascal) consumer cards being released this summer aren't the top-of-the-line models. In that case this card wouldn't have a direct competitor. Although I still don't know who would bother buying an expensive card from the previous generation anymore.

> Why would they still bother with this when two months later we get Polaris which will smash the old arch anyways..

Not sure about that, to be honest. Polaris 10 is the midrange chip in the upcoming lineup; it's Vega that is designed to be the highest-end part, with a GPU that might be in the same ballpark size as the current Fiji but made on the 14nm FinFET process, so way faster for the same power use. The Radeon Pro Duo still "cheats" by using two GPUs working together and relying on CrossFire scaling from one to two GPUs (which can be 80~90% in some titles), so I expect big Pascal GP100 or Vega to match this card with a single GPU.

It's already known that big Pascal GP100, released first in a Tesla variant for the HPC market (which needs a lot of double-precision math capability, ~5 teraflops), is a chip packing 15.3 billion transistors. The current Maxwell in the GeForce 980 Ti / Titan is a chip with 8 billion, while Fiji is 8.9 billion. So nothing using a single GPU will beat the Radeon Pro Duo this year, but once gaming versions of both GP100 and Vega are out, it's over for this card: the performance it achieves with two GPUs on one card will be matched by a single GPU that uses less power and costs less.

> I thought the Polaris (and Pascal) consumer cards being released this summer aren't the top-of-the-line models. ...

Given that there are so many (cough) DX12 games to choose from, I wouldn't call any current card outdated, to be honest..... 😀 A pair of these Radeon Pro Duos allows some nice quad-CrossFire insanity, which besides being extremely expensive ($3,000) is still four Fiji GPUs working together. Combined, that totals 2 TB/s of memory bandwidth, 32 teraflops of single-precision math (which is what games use), 1024 texture units, and 256 ROPs: a crazy amount of combined GPU power that makes any game at 4K a joke as far as GPU load is concerned. It simply isn't enough to stress the GPUs, period..... LOL. Add that this firepower only takes up two PCIe slots, leaving the rest free for a nice sound card or a PCIe SSD, and I don't think the owner will have to compromise their gaming at any setting anytime soon..... :P
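
A quick back-of-envelope check of those combined figures, assuming the commonly cited per-GPU Fiji specs (4096 shaders at ~1000 MHz, 2 FLOPs per shader per clock, 512 GB/s of HBM bandwidth, 256 texture units, 64 ROPs):

```latex
\begin{align*}
\text{FP32} &= 4 \times (4096 \times 2 \times 1.0\,\text{GHz}) \approx 32.8\ \text{TFLOPS}\\
\text{Bandwidth} &= 4 \times 512\ \text{GB/s} = 2048\ \text{GB/s} \approx 2\ \text{TB/s}\\
\text{Texture units} &= 4 \times 256 = 1024, \qquad \text{ROPs} = 4 \times 64 = 256
\end{align*}
```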

Nice. I swapped out my old HD 6990 (2x HD 6970s on one card) for my Fury X a little while back, and I never had any problems with CrossFire (except Watch Dogs). I'd have no hesitation getting another twin-GPU AMD card.

I see they still advertise these dual cards as having "8GB VRAM". Isn't it the same as with NVIDIA's SLI, where they only use the 4GB, or is it different with CrossFire? And if so, wouldn't all this horsepower be gimped by only having 4GB of VRAM, or is that not an issue with HBM? Either way, it's an impressive beast; I expected they would at least gimp the clocks a bit.

> I see they still advertise these dual cards as having "8GB VRAM". Isn't it the same as with NVIDIA's SLI, where they only use the 4GB, or is it different with CrossFire? ...

That's the hard thing to figure out. Standard AFR in DX11 needs exact copies of all data for both GPUs, so in practical terms the 8GB on this card acts like 4GB. But DX12 has a feature that lets specific GPUs process specific workloads, so exact copies of all data no longer have to be kept in each GPU's memory. We'll still be stuck in DX11 land for a long time though..... :(
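
For the curious, a minimal sketch of what that DX12 feature (explicit multi-adapter with node masks) looks like at the API level. The `CreateTextureOnNode` helper is hypothetical, not from any shipping engine, and assumes a device opened on a linked-node adapter like this card; the point is that each resource lives on one chosen GPU node instead of being mirrored the way DX11 AFR requires:

```cpp
#include <d3d12.h>

// Create a texture that physically lives on one GPU node of a linked-node
// adapter (NodeCount == 2 on a dual-GPU card). Error handling omitted.
ID3D12Resource* CreateTextureOnNode(ID3D12Device* device, UINT nodeIndex,
                                    const D3D12_RESOURCE_DESC& desc)
{
    D3D12_HEAP_PROPERTIES heap = {};
    heap.Type = D3D12_HEAP_TYPE_DEFAULT;
    heap.CreationNodeMask = 1u << nodeIndex;   // GPU that owns the memory
    heap.VisibleNodeMask  = 1u << nodeIndex;   // only that GPU needs to see it

    ID3D12Resource* tex = nullptr;
    device->CreateCommittedResource(&heap, D3D12_HEAP_FLAG_NONE, &desc,
                                    D3D12_RESOURCE_STATE_COMMON, nullptr,
                                    IID_PPV_ARGS(&tex));
    return tex;
}
```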

> That's the hard thing to figure out. Standard AFR in DX11 needs exact copies of all data for both GPUs, so in practical terms the 8GB on this card acts like 4GB. ...

This is true, but in most scenes I imagine both GPUs will hold largely the same data anyway, even when the work is split. It's still better, obviously, but lots of people think it will simply double the free memory, when memory in use will nearly double as well.

^ PCIe 3.0 x16 bandwidth isn't sufficient for one GPU to use the other GPU's VRAM as storage for per-frame textures anyway. By the time we have sufficient bandwidth (say, PCIe 5.0 x16), we'll also have stupidly high amounts of VRAM, like 16GB, and it will again be filled with overkill textures (which in turn kills the benefit of the higher PCIe bandwidth).
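
For a sense of scale, a rough per-frame budget at 60 fps, assuming ~15.75 GB/s of usable PCIe 3.0 x16 bandwidth versus Fiji's 512 GB/s of local HBM:

```latex
\begin{align*}
\text{PCIe 3.0 x16:} &\quad \frac{15.75\ \text{GB/s}}{60\ \text{fps}} \approx 262\ \text{MB per frame}\\[2pt]
\text{Local HBM:}    &\quad \frac{512\ \text{GB/s}}{60\ \text{fps}} \approx 8.5\ \text{GB per frame}
\end{align*}
```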

> This is true, but in most scenes I imagine both GPUs will hold largely the same data anyway, even when the work is split. ...

> ^ PCIe 3.0 x16 bandwidth isn't sufficient for one GPU to use the other GPU's VRAM as storage for per-frame textures anyway. ...

It's in either of these cases that I expect real-time procedural effects to take over, since shaders are reaching a crazy amount of single-precision floating-point power overall, and procedural generation also saves quite a lot of video memory since fewer textures need to be stored there. One example that comes to mind is the damage modeling system in Star Citizen: under the previous system, a regular fighter-sized craft needed about 100 MB of textures in local video memory, while the new procedural system dropped that to just 8 MB for the same ship. The savings are definitely there, and the damage effect looked a lot better as a bonus. But since we're mostly playing console ports with a few extra graphics features added for the PC release (as long as they don't take too long to add, since that costs extra money), most games will keep it simple and stick with what works.
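
A minimal illustration of that trade-off (generic value noise, not Star Citizen's actual damage system): a detail pattern computed at shading time costs a handful of FLOPs per sample but zero bytes of stored texture.

```cpp
#include <cmath>
#include <cstdint>

// Cheap integer hash -> pseudo-random value in [0,1) per lattice point.
static float Hash2D(int x, int y)
{
    uint32_t h = static_cast<uint32_t>(x) * 374761393u
               + static_cast<uint32_t>(y) * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return static_cast<float>(h ^ (h >> 16)) / 4294967296.0f;
}

// Bilinearly interpolated value noise: roughly a dozen FLOPs per sample
// instead of a fetch from a baked texture occupying VRAM.
float ValueNoise(float u, float v)
{
    int   x0 = static_cast<int>(std::floor(u));
    int   y0 = static_cast<int>(std::floor(v));
    float fx = u - x0, fy = v - y0;
    float a = Hash2D(x0, y0),     b = Hash2D(x0 + 1, y0);
    float c = Hash2D(x0, y0 + 1), d = Hash2D(x0 + 1, y0 + 1);
    float top    = a + (b - a) * fx;   // interpolate along x, top row
    float bottom = c + (d - c) * fx;   // interpolate along x, bottom row
    return top + (bottom - top) * fy;  // interpolate along y
}
```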

16 teraflops of single-precision floating-point math in this latest Radeon Pro Duo... that's 16 trillion operations per second, theoretical maximum. Think about it: a 4K screen totals almost 8.3 million pixels, and at 60 fps that comes to 498 million pixels per second. If an incredibly complex shader effect that depends entirely on that 16 teraflops of floating-point power were applied to all those pixels, with no other limitations in play (and there are some), then a shader (or multiple shaders) requiring 32,000 floating-point operations per pixel could run while still sustaining 60 fps. In short, we don't know how good we have it, really...... :)
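
The arithmetic, written out:

```latex
\begin{align*}
3840 \times 2160 &\approx 8.3 \times 10^{6}\ \text{pixels}\\
8.3 \times 10^{6} \times 60\ \text{fps} &\approx 4.98 \times 10^{8}\ \text{pixels/s}\\
\frac{16 \times 10^{12}\ \text{FLOP/s}}{4.98 \times 10^{8}\ \text{pixels/s}} &\approx 32{,}000\ \text{FLOPs per pixel}
\end{align*}
```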

I don't get it. Why would AMD bother with this, and why would anyone buy it? According to the graph it's only about 25% faster on average than the 295X2. Lump that in with the fact that it's not long until Polaris launches... well, I see it as pointless!

Nvidia had no answer for the 295X2, and now this.... overkill.

> I see they still advertise these dual cards as having "8GB VRAM". Isn't it the same as with NVIDIA's SLI, where they only use the 4GB, or is it different with CrossFire? ...

Yes and no. As with SLI, it will use 4GB per GPU, but with the rise of DX12 and some other tricks you can dispatch work across the two GPUs, so the card acts like a single big GPU with 8GB of VRAM. Sadly, the limit shows when one of the GPUs needs, say, 4.1GB: then the card, the driver, or the program has a problem trying to resolve it (and so doesn't do what you expect). In games that doesn't happen much, but in pro scenarios, sadly for this card, it happens a lot.
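
A minimal sketch of how an application could see that limit coming. The `NodeHasHeadroom` helper is hypothetical, but `IDXGIAdapter3::QueryVideoMemoryInfo` is the real DXGI call, and it reports budgets per GPU node:

```cpp
#include <dxgi1_4.h>

// Returns true if GPU node 'nodeIndex' can take 'neededBytes' more of local
// (on-card) memory without exceeding the OS-reported budget.
bool NodeHasHeadroom(IDXGIAdapter3* adapter, UINT nodeIndex, UINT64 neededBytes)
{
    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    if (FAILED(adapter->QueryVideoMemoryInfo(nodeIndex,
                                             DXGI_MEMORY_SEGMENT_GROUP_LOCAL,
                                             &info)))
        return false;

    // Budget and CurrentUsage are per node: on a dual-Fiji card each node
    // reports roughly 4GB here, not the 8GB printed on the box.
    return info.CurrentUsage + neededBytes <= info.Budget;
}
```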

> I don't get it. Why would AMD bother with this, and why would anyone buy it? ...

Why not? They might as well sell a few more Fijis and ride the headlines with this halo product. AMD has already incurred the R&D costs for the modest-volume Fiji and the never-to-be-used-again HBM1; now they just spend a few $M on cooling and packaging and they have a monster new graphics card. If all goes well, they might even cover the cooling and packaging. :P Seriously, it's a zero-risk operation: just tweak the latest few CrossFire profiles and launch.

> Why not? They might as well sell a few more Fijis and ride the headlines with this halo product. ...

Well, as you say, they get the halo product thing (for a while), but I think it's a bad purchase when Polaris & Pascal are coming out soon.

> Well, as you say, they get the halo product thing (for a while), but I think it's a bad purchase when Polaris & Pascal are coming out soon.

You'd have a point if those were the high-end versions of either one, but what we'll see first are the midrange parts, GP104 and Polaris 10. Current speculation is that they might match or slightly beat the current high-end cards while using a much smaller die (about half the size) and cutting power consumption in half (125~150 watt cards, basically). To beat a dual-GPU Fiji card it takes big-boy GP100 on NVIDIA's end or Vega on AMD's side, and it looks like neither will be with us in a gaming version until sometime early next year. These are chips with easily 15+ billion transistors, drawing near the maximum of the PCIe power specification (300 watts) and using dies as large as the current high-end chips (~600mm^2). The heavy artillery, in short.... :)

> You'd have a point if those were the high-end versions of either one, but what we'll see first are the midrange parts, GP104 and Polaris 10. ...

Although CrossFire or SLI with Polaris or Pascal would make more sense than buying this dual card, unless you really only want to use one PCIe slot, in which case it's worth it.