AMD Greenland Vega10 Silicon To Have 4096 Stream Processors?
Some interesting info surfaced on the web over the past day or so. As you know, AMD is set to release GPUs based on Polaris, but a recent AMD roadmap also shows the Vega and Navi architectures. Vega is the successor to Polaris, paired with HBM2 memory, and due in the 2017 time-frame. An AMD employee's LinkedIn page listed Vega as having 4,096 shader processors.
It seems that after Polaris, Vega will make an appearance in 2017. The name is tagged with HBM2, which suggests that HBM2 likely will not make it onto Polaris. Vega is the brightest star in the constellation Lyra, the fifth-brightest star in the night sky and the second-brightest star in the northern celestial hemisphere, after Arcturus. Next in line, in the 2018 time-frame, we see Navi, with the two keywords being scalability and next-gen memory.
Now the juicy part: the LinkedIn profile of Yu Zheng, an R&D manager at AMD (the info has since been removed, by the way), shows a "shader processor" (stream processor) count of 4,096 for Vega10. Have a peek:
Interestingly enough, that's the same shader processor count as Fiji (Radeon R9 Fury X). Since the info does not state which model GPU (mid-range/high-end/enthusiast) it will be used in, performance remains guesswork. We do know that performance per watt will increase significantly over Polaris and that these GPUs will be fitted with HBM2 graphics memory.
Since we are on the topic, "Polaris" architecture-based names are oozing out. The high-end chip will be called "Ellesmere" or Polaris10, and there will be a mid-range GPU called "Baffin" or Polaris11. "Ellesmere" is rumored to get 36 GCN 4.0 compute units, which works out to 2,304 stream processors, and a 256-bit wide memory bus, indicative of GDDR5/GDDR5X, with 8 GB of memory.
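The arithmetic behind these rumored shader counts can be sketched quickly: a GCN compute unit contains 64 stream processors, which is what turns the rumored 36 CUs into the 2,304 figure above (and implies 64 CUs for Vega10's 4,096). A minimal Python sketch of that mapping:

```python
# Each GCN compute unit (CU) contains 64 stream processors,
# so leaked CU counts map directly to shader counts.
SPS_PER_CU = 64

def stream_processors(compute_units: int) -> int:
    """Stream-processor count for a GCN GPU with the given CU count."""
    return compute_units * SPS_PER_CU

print(stream_processors(36))  # Ellesmere rumor: 2304
print(stream_processors(64))  # Vega10 / Fiji: 4096
```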
It's going to be an interesting year.
Senior Member
Posts: 8093
Joined: 2014-09-27
I still can't see where the 680 numbers are

Senior Member
Posts: 3490
Joined: 2007-01-27

There are three results tables on the last page of the review article. The first is nvidia only, second is amd only and the third is the comparison chart.
The 680 is the second from the left in the third and final table. The article is messy in terms of formatting, I agree, but it seems alright in terms of accuracy.
They also promised to do a review of the Kepler cards with large memory pools, which is exactly what I've been waiting for.
Senior Member
Posts: 3490
Joined: 2007-01-27
Good comparison, that one. The reason I like G3D reviews most is that Hilbert disables settings that favor one vendor or the other, like TressFX in the first Tomb Raider or Nvidia HairWorks in The Witcher 3. It gives a better picture of how the GPU might perform overall, imo.
And it would have been nice to see which model of 290X was used in that test, since the 970 was a Galax one with 1165 MHz clocks and a 1320 MHz boost out of the box, versus a stock 970 which boosts to 1170 from 1050. Edit: I found which model of 290X it was; it was the basic 1000 MHz one.
I like HardOCP too; their problem is that the IQ settings are inconsistent, and they frequently make arbitrary comparisons: one review compares the 390 to a stock 970, another compares it to both a stock 970 and an OC 970, and a third might compare the 390 to two other 390s, etc.
I also hate this 'highest playable settings' thing they have. I understand FCAT, and I understand why they do it; it just makes it impossible to use their data for comparisons.
I went through a couple G3D reviews and tried calculating the oc scaling factor for every game to try and get an idea of how it is on average.
Example: a 390X @ 1150 MHz nets 60 fps in a game. At stock, a 390X does 55 fps.
I want to know if 60/55 = 1150/1050. For Maxwell, performance tends to scale very linearly with clocks in most games. Kepler too, by the way, overclocks 30% over stock; I checked the 780 Ti review to make sure.
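The scaling check described in the post above can be sketched as a small Python function (the function name and the "near 1.0 means linear" interpretation are mine, not from the post): divide the relative fps gain by the relative clock gain, so a result near 1.0 means performance scales linearly with core clock.

```python
def scaling_efficiency(fps_stock, fps_oc, clock_stock, clock_oc):
    """Ratio of relative fps gain to relative clock gain.

    ~1.0 means performance scales linearly with core clock;
    below 1.0 means the overclock is partly wasted (e.g. another bottleneck).
    """
    fps_gain = fps_oc / fps_stock        # e.g. 60/55
    clock_gain = clock_oc / clock_stock  # e.g. 1150/1050
    return fps_gain / clock_gain

# Numbers from the example above: 390X, 55 fps @ 1050 MHz vs 60 fps @ 1150 MHz
eff = scaling_efficiency(55, 60, 1050, 1150)
print(round(eff, 3))  # -> 0.996, i.e. near-linear scaling
```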
Senior Member
Posts: 7412
Joined: 2006-09-24
I like HardOCP too; their problem is that the IQ settings are inconsistent, and they frequently make arbitrary comparisons: one review compares the 390 to a stock 970, another compares it to both a stock 970 and an OC 970, and a third might compare the 390 to two other 390s, etc.
I also hate this 'highest playable settings' thing they have. I understand FCAT, and I understand why they do it; it just makes it impossible to use their data for comparisons.
I went through a couple G3D reviews and tried calculating the oc scaling factor for every game to try and get an idea of how it is on average.
Example: a 390X @ 1150 MHz nets 60 fps in a game. At stock, a 390X does 55 fps.
I want to know if 60/55 = 1150/1050. For Maxwell, performance tends to scale very linearly with clocks in most games. Kepler too, by the way, overclocks 30% over stock; I checked the 780 Ti review to make sure.
In some reviews, HardOCP has started to include an apples-to-apples comparison at the end, with all cards on the same settings.
A near-stock 390X nets some 95 fps in Tomb Raider, and the PC Devil one clocks to 1225 MHz and gets 118 fps on the same settings, so it gains about 25%. The best-case scenario for a 390X would be about 1250 MHz on the core, imo, so it can clock rather high; 390s reach that 1200 too. But the stock clock for the 390X is 1050 MHz, and the 390 has a 1000 MHz stock clock. In Tomb Raider, Hilbert got an 11% rise in fps from a 13% overclock, from 1010 to 1150 MHz, on a 390.
I never got my 290X to clock up by much, only some 10%.
Senior Member
Posts: 7412
Joined: 2006-09-24
Good comparison, that one. The reason I like G3D reviews most is that Hilbert disables settings that favor one vendor or the other, like TressFX in the first Tomb Raider or Nvidia HairWorks in The Witcher 3. It gives a better picture of how the GPU might perform overall, imo.
And it would have been nice to see which model of 290X was used in that test, since the 970 was a Galax one with 1165 MHz clocks and a 1320 MHz boost out of the box, versus a stock 970 which boosts to 1170 from 1050. Edit: I found which model of 290X it was; it was the basic 1000 MHz one.