Nvidia GeForce X80 and X80 Ti Pascal Specs?

Bull****! Even the name, X80, is totally un-Nvidia. That's just not happening.
GP100 supports HBM2 as well as GDDR5??? Highly unlikely.
HBM would allow for something like this without having to make 2 different chips.
HBM would allow for something like this without having to make 2 different chips.
The memory controller for GDDR5 takes up a massive amount of die space; HBM's doesn't, and you would need both to support this configuration. It's wasted transistors on both chips (added cost), higher leakage on both chips, lower yields on both chips, and/or potentially lost performance on both chips. I don't think it makes sense to have both.
OMG 225 watts. It will be like a toaster oven.
The TDP is lower, yet there's twice the processing power: 6 TFLOPS vs. 12. And again, it's just TDP, not actual energy consumption. Let's wait for tests of production cards. So far AMD has much higher actual energy consumption.
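For what it's worth, a quick back-of-the-envelope efficiency check (a sketch only: 12 TFLOPS and 225W are the rumored figures from the table, and pairing them against roughly 6 TFLOPS at the Maxwell Titan X's 250W TDP is my assumption):

```python
# Theoretical single-precision efficiency: TFLOPS per watt of TDP.
# These are rumored/nominal figures, not measured power draw.
def tflops_per_watt(tflops: float, tdp_watts: float) -> float:
    return tflops / tdp_watts

maxwell = tflops_per_watt(6.0, 250.0)   # Titan X (Maxwell): ~0.024
pascal = tflops_per_watt(12.0, 225.0)   # rumored Pascal part: ~0.053
print(f"Improvement: {pascal / maxwell:.1f}x")  # ~2.2x perf per watt
```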
Obviously it's fake. High TDP, a 512-bit bus with GDDR5, on an Nvidia card??? Pure joke. Or it's just a PR stunt to let gamers simmer in their own juice. 😀 Just wait for the official statement.
You can't have the same GP100 chip with two different memory controllers; way too expensive.
Sign me up for an X80, as long as the price is similar to the 970, around $300.
In what universe is a 512-bit memory interface, clocked at 8000MHz (effective, I can only assume), capable of transferring 512GB of data per second? 512 * 4000 (actual clock speed) * 2 (double data rate) = 409.6GB/s transfer speed. To reach 512GB/s on a 512-bit interface, you would need 5GHz clock speeds (10GHz effective).

As for shader cores and single precision: assuming the Pascal chip is of equal size, it seems plausible, considering the reduction from 28nm to 16nm = 57.1% of the size. Compare the 900 series to the 700 series, more specifically the "Titan Black", the most powerful GK110 chip, against the Titan X, the most powerful GM200 chip. The Titan X has:

- 7.1% larger chip
- 12.9% more transistors
- 6.6% more shader cores (Nvidia dubbed them "CUDA cores")
- 25% fewer texture mapping units
- 100% more render output units

The 900 series is, in practice, a combination of: 1. a larger chip, 2. improved design, 3. reduction in some areas, and 4. increases in other areas. The production technique, the work done at TSMC, is still based on the same 28nm technology, though yield and quality may have contributed slightly.

Comparing the 900 series Titan X and the "Pascal Titan":

- Shader cores = 200%
- Texture units = 200%
- Texture mapping units / raster operators = 200%
- Transistor count = 204.9%

I call bull****. While the specs may come from Nvidia, I have extreme doubts that these numbers are anything but loosely written-down approximations meant for the designers and technicians. Why? I haven't the slightest idea, but I'm guessing it's tough to predict exact specifications months, or even weeks, in advance of large-scale production. Doubtless the guys making the chips, TSMC, cannot guarantee yield quality, so Nvidia has to work with approximations.
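As a side note, those Kepler-vs-Maxwell percentages can be reproduced from the public GK110/GM200 spec listings (a quick sketch; the figures below come from public spec sheets, and note the TMU change computes to 20% fewer, not 25%, when taken relative to the Titan Black):

```python
# Titan Black (GK110) vs Titan X (GM200), per public spec listings:
# die area (mm^2), transistor count, CUDA cores, TMUs, ROPs.
titan_black = {"die_mm2": 561, "transistors": 7.08e9, "cores": 2880, "tmus": 240, "rops": 48}
titan_x     = {"die_mm2": 601, "transistors": 8.00e9, "cores": 3072, "tmus": 192, "rops": 96}

for key in titan_black:
    change = (titan_x[key] / titan_black[key] - 1) * 100
    print(f"{key}: {change:+.1f}%")
# die_mm2 +7.1%, transistors +13.0%, cores +6.7%, tmus -20.0%, rops +100.0%
```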
Actually, going from 28nm down to 16nm is a huge decrease in size, even more than the 57% you talked about. It's because you have to think about silicon chip nm in terms of area (they're effectively 2D structures), so nm squared. This is the calculation showing theoretically how small 16nm is compared to 28nm: (16 x 16) / (28 x 28) ≈ 0.33. Therefore 16nm transistors only take up 33% of the space of their 28nm brothers. (Another way of saying it is that 28nm is three times the size (100/33) of 16nm.) They skipped a node; that's why they're so much smaller: they skipped the 20nm node. Anyway, I'm not sure I believe this table showing the X80, etc., as me & some others were speculating a couple of days ago about names for the next Pascal architecture, and you can see from Post #1516 on the following page (http://forum.notebookreview.com/threads/pascal-what-do-we-know-discussion-latest-news-updates-1000m-series-gpus.763032/page-152) that we came up with that naming scheme. I reckon someone nicked that idea & just fabbed a spreadsheet.
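A minimal sketch of that scaling math (idealized, density-only scaling; real processes never shrink perfectly, and the 57.1% figure quoted earlier is the linear ratio rather than the area ratio):

```python
# Ideal scaling between process nodes: feature size is linear,
# but transistors occupy area, so the shrink goes with the square.
def area_scale(new_nm: float, old_nm: float) -> float:
    return (new_nm / old_nm) ** 2

print(f"Linear ratio: {16 / 28:.1%}")             # ~57.1%, the figure above
print(f"Area ratio:   {area_scale(16, 28):.1%}")  # ~32.7%, roughly a third
```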
GDDR5 would make sense on an NVIDIA product releasing this year. There is no way GDDR5X production would ramp up fast enough to cover millions of cards in sales. The speed and width of the GDDR5 make sense too. NVIDIA has had a very effective memory controller on Maxwell; if they translate that to a card with an actual 400-500GB/sec of bandwidth, they will be fine. By the way, "X" is the latin numeral for "10", so the naming scheme does make a lot of sense. These might be fake, but they do make sense.
GDDR5 would make sense on an NVIDIA product releasing this year. There is no way GDDR5X production would ramp up fast enough to cover millions of cards in sales. The speed and width of the GDDR5 make sense too. NVIDIA has had a very effective memory controller on Maxwell; if they translate that to a card with an actual 400-500GB/sec of bandwidth, they will be fine. By the way, "X" is the latin numeral for "10", so the naming scheme does make a lot of sense. These might be fake, but they do make sense.
It's not Latin, it's a Roman numeral.
In what universe is a 512-bit memory interface, clocked at 8000MHz (effective, I can only assume), capable of transferring 512GB of data per second? 512 * 4000 (actual clock speed) * 2 (double data rate) = 409.6GB/s transfer speed. To reach 512GB/s on a 512-bit interface, you would need 5GHz clock speeds (10GHz effective).
Well, your math is all wrong, for one. You didn't convert bits to bytes (divide by 8), then convert megabytes to gigabytes. Let's take the 980 Ti for example: 384-bit bus, 3505MHz memory clock:

384 * (3505 * 2) / 8 / 1000 = 336.48GB/s

So here is your example:

512 * (4000 * 2) / 8 / 1000 = 512GB/s
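That correction as a short, general sketch (the 512-bit/8000MT/s pair is the rumored spec from the table; the 980 Ti figures are from the post above):

```python
# Theoretical memory bandwidth: bus width (bits) x effective data rate (MT/s),
# divided by 8 (bits -> bytes) and by 1000 (MB/s -> GB/s).
def bandwidth_gbs(bus_bits: int, effective_mts: float) -> float:
    return bus_bits * effective_mts / 8 / 1000

print(bandwidth_gbs(384, 3505 * 2))  # 980 Ti: 336.48 GB/s
print(bandwidth_gbs(512, 4000 * 2))  # rumored X80 Ti: 512.0 GB/s
```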
If anyone releases a consumer graphics card with a 600mm^2 die on 16/14nm FinFET this year, I'll eat my hat. Also, remind me to buy a hat.
Not buying it; I don't see why Nvidia would use GDDR5 instead of GDDR5X. Also, if the Titan specs are true, you can expect a $3K price.
It's not Latin, it's a Roman numeral.
Who were the Romans? Oh wait, the Latins. :infinity: Roman numerals use letters from the Latin alphabet, and they are alternatively known as Latin numerals. If the Titan specs are true, it means it will be released in 2017.
Who were the Romans? Oh wait, the Latins. :infinity: Roman numerals use letters from the Latin alphabet, and they are alternatively known as Latin numerals. If the Titan specs are true, it means it will be released in 2017.
The Romans spoke Latin; they were not the Latins.
The Romans spoke Latin; they were not the Latins.
This is neither the time nor the place 😀
If anyone releases a consumer graphics card with a 600mm^2 die on 16/14nm FinFET this year, I'll eat my hat. Also, remind me to buy one of those.
Fixed that for you, my friend 😀
Actually, going from 28nm down to 16nm is a huge decrease in size, even more than the 57% you talked about. It's because you have to think about silicon chip nm in terms of area (they're effectively 2D structures), so nm squared. This is the calculation showing theoretically how small 16nm is compared to 28nm: (16 x 16) / (28 x 28) ≈ 0.33. Therefore 16nm transistors only take up 33% of the space of their 28nm brothers. (Another way of saying it is that 28nm is three times the size (100/33) of 16nm.) They skipped a node; that's why they're so much smaller: they skipped the 20nm node. Anyway, I'm not sure I believe this table showing the X80, etc., as me & some others were speculating a couple of days ago about names for the next Pascal architecture, and you can see from Post #1516 on the following page (http://forum.notebookreview.com/threads/pascal-what-do-we-know-discussion-latest-news-updates-1000m-series-gpus.763032/page-152) that we came up with that naming scheme. I reckon someone nicked that idea & just fabbed a spreadsheet.
Also, 16nm uses a FinFET design compared to planar at 28nm, which has the added benefits of better performance and lower power draw in a direct transistor-for-transistor comparison. A lot of users reckon the next generation of cards can't be that much better than the 900 series; many will be surprised. Early test results from Samsung compare 28nm planar to 16/14nm FinFET:
"One of the earliest manufacturing providers at the 14nm process node, Samsung has been developing FinFET process technology for several years and is now ready for early adopter production. Samsung's 14nm LPE process is providing almost 150% better performance from a die half the size of the previous node and improving power consumption by around 150% when compared to its 28nm process technology." See more at: http://www.newelectronics.co.uk/electronics-technology/what-makes-finfets-so-compelling/56795/
Also, 16nm uses a FinFET design compared to planar at 28nm, which has the added benefits of better performance and lower power draw in a direct transistor-for-transistor comparison.
Well it's looking better & better for Pascal!
data/avatar/default/avatar40.webp
Well it's looking better & better for Pascal!
Read my updated post; the direct comparison shows 150% better performance and around 150% better power consumption when comparing 28nm transistors to 16/14nm FinFET transistors :banana: