Nvidia Ampere GA100 GPU would get 8192 cores and boost speeds of up to 2200 MHz
As GDC closes in, most people expect some NVIDIA Ampere announcements during the keynote; time will tell. Meanwhile, some specs and benchmarks (and we assume they are Ampere) have already leaked from HPC parts, and new info has now been spotted on the web.
With the earlier benchmark entries that were spotted, I already explained that most GPUs work in multiples of eight (bits/bytes), and that I would not be surprised to see 128 shader clusters (SMs) and thus a possible 8192 shader cores for a fully enabled GPU. While it is nothing more than a user posting some info on a forum, new speculation is drawing some attention. A poster on the Chinese forum Stage1 has apparently shared reliable info in the past, and this round he talks about a chip called 'GA100', and GA obviously would stand for GeForce Ampere. Here's what he posted and calls GA100:
- 128 SMs, 8192 CUDA cores, 24/48 GB HBM2e, boost frequency up to 2.2 GHz, double the Tensor cores, 300W TDP
GA100 would get far more compute performance than expected; the 7nm-fabbed GA100 would indeed get 8192 cores and thus 128 SMs, meaning NVIDIA is pretty much going monolithic and doubling up on its transistor budget, which, going from 12nm and 14nm to 7nm, is very possible. Very interesting is the mention of a boost clock of up to 2200 MHz, which seems high, especially with a 300W TDP.
All that, in combination with 24, 32 or even 48 GB of HBM2e graphics memory, could deliver some 32 teraflops of compute performance. Obviously we're talking data-center and supercomputer-centric products here, but architecture-wise Ampere paves the way for the consumer products as well. Apparently the number of Tensor cores would double up as well.
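For what it's worth, here is a minimal back-of-the-envelope sketch (Python) of how those figures hang together, assuming the Volta/Turing layout of 64 FP32 cores per SM and two FLOPs per core per clock via FMA, neither of which is confirmed by the leak:

# Back-of-the-envelope check of the leaked GA100 figures.
sms = 128                # leaked SM count
cores_per_sm = 64        # assumption, carried over from Volta/Turing
boost_clock_ghz = 2.2    # leaked boost clock
flops_per_core = 2       # one fused multiply-add per clock = 2 FLOPs

cuda_cores = sms * cores_per_sm                                    # 8192, matches the leak
fp32_tflops = cuda_cores * flops_per_core * boost_clock_ghz / 1000
print(cuda_cores, round(fp32_tflops, 1))                           # 8192, ~36.0

At the full 2.2 GHz boost that works out to roughly 36 TFLOPS; the quoted 32 TFLOPS corresponds to a clock closer to 2.0 GHz, so that figure presumably assumes a more typical sustained clock rather than the peak boost.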
Relativity can be a bitch - of course, remember, this info is based on a single forum post. Grab some salt, but with the number of leaks and GDC so close, I can't rule it out either.
Senior Member
Posts: 15007
Joined: 2018-03-21
Is that "market ignorant" a new word you like to toss around like your placebos?
lol, fail. xD
The failure is you.
https://www.guru3d.com/news-story/gddr6-significantly-more-expensive-than-gddr5.html
https://www.reddit.com/r/nvidia/comments/99r2x3/attempting-to-work-out-rtx-2080ti-die-cost-long/
And then this doesn't even include costings for PCB complexity.
I think you're making valid points but $115 is an immense difference for the die alone. The costs also don't end there because you need to factor in the cost of validating the die and ensuring that it runs error free at its rated clock speed. All those faulty dies still need to be checked and people are getting paid to do so.
Then you need to consider that the Ti card uses a 352-bit PCB vs a 256-bit PCB on the 1080. Those are more expensive, while you also need 11 memory chips instead of 8 to run on that kind of bus. A single 1GB GDDR6 chip costs ~$25 the last time I checked, so the final tally is now possibly a $250 difference on just these three components.
It all adds up and we're still only looking at the bill-of-materials side of it. Factor in R&D and the cost of software engineers and those costs quickly start rising. Only, this time, we're not just talking about rasterization but ray-tracing and AI as well.
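To put those numbers side by side, a quick sketch (Python) of the tally being described; all figures are the poster's estimates, and the PCB allowance is simply whatever remains of the claimed $250, not a confirmed cost:

# Rough 2080 Ti vs 1080 bill-of-materials delta as described above.
die_delta = 115                             # quoted die-cost difference
gddr6_chip_cost = 25                        # ~$25 per 1 GB GDDR6 chip (poster's figure)
memory_delta = (11 - 8) * gddr6_chip_cost   # 352-bit bus needs 11 chips vs 8 on 256-bit -> $75
claimed_total = 250                         # "possibly a $250 difference on just these three components"

pcb_allowance = claimed_total - (die_delta + memory_delta)   # what's left for the pricier 352-bit PCB
print(die_delta + memory_delta, pcb_allowance)               # 190, 60

In other words, die plus memory already account for about $190 of the claimed delta, leaving roughly $60 attributable to the more complex PCB.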
At the risk of seeming insensitive, if you cannot afford a $700 GPU then you also cannot afford a $600 GPU. My own utilities bill includes $110 to my ISP and Netflix every single month, which means I'll be spending $2640 over the next two years on just those two things. The RTX 2080 should easily last me the same amount of time, and costing $700 rather than $600 is completely meaningless to me. Someone complaining about a $100 increase just tells me that he isn't the person paying the utilities in his house and he shouldn't be spending $600 on a GPU in the first place.
nuff said

Turing is priced exactly where it should be for the people who were passing over pascal as an upgrade.
The people who won't buy turing weren't going to buy turing anyway regardless of the price.
Senior Member
Posts: 17811
Joined: 2012-05-18
So RAM was more expensive and now it made GPUs 50% more expensive... right... nice trolling.
AMD's HBM costs more, and in the end prices got slashed back down to where they were before the mining craze and all.
That said, the only price hike on NV's side was, yes, them taking advantage of the mining market. Greed has no limits... but it took a toll on them for sure.
I've seen some news a while ago that high-end GPUs won't be as expensive as they are now; lower-end parts like xx60 and xx70 will stay roughly the same, while the top tier will get price cuts.
Senior Member
Posts: 15007
Joined: 2018-03-21
You were given 2 links, and a third component to consider, and you only picked on the one whose cost directly scales with the amount used.
https://www.reddit.com/r/nvidia/comments/9o6256/2080ti-allocation-problems/
A good part of the freaking cost is the component shortage; another part is the price per wafer to buy TSMC fab time.
If Ampere is cheaper I'll be surprised, since EUV takes longer to fab and backdrilling is time and money.
Senior Member
Posts: 15007
Joined: 2018-03-21

102 = 3080 Ti, RTX Titan A, Quadro RTX A8000
103 = 3080 full / 3070
104 = 3060 Ti full / 3060
106 = 3050 Ti full / 3050 (160-bit interface?)
107 = 3030 (no NVENC?)
Senior Member
Posts: 11452
Joined: 2004-05-10
Even if it's "only" $1000 MSRP, it's absolutely hilarious that people here would think that's a reasonable or fair price, by any stretch of the imagination. In b4 someone comes in talking about how HBM2 costs more than an ocean of virgin blood, that their R&D costs more than 10 billion pure souls, then links to nVidia's ballsack's reddit page stating they pay everyone in their staff and homeless people outside $10K per minute therefore they need to charge insulting amounts.
I'll drop to 30 fps gaming or buy a console before I pay nVidia's mafia monopoly prices. Or better yet, bust out my backlog of old games. Too bad, I was looking forward to Cyberpunk on PC (no, I won't play it on a console). Guess I'll play that in 2026 or so. I can wait.
Yeah, $1000 GPUs are a horrible thing if that holds into next gen, but I doubt the massive dies used in them allow much room to come down in price to Pascal levels.