NVIDIA Tesla A100 with GA100 Ampere GPU: 7nm, 54 billion transistors and 6912 CUDA cores

https://forums.guru3d.com/data/avatars/m/250/250418.jpg
What a fucking beast this is for servers. I understand businesses can pay, but it's sad they pull this BS on consumers. The bigger problem is die size: at 826 mm² it's even bigger than last generation's. Add the extra cost of the expensive 7nm node and this will, without a doubt, be more expensive. We need AMD to come up with a way to disrupt the market again. We need multi-die tech like Ryzen's to come to GPUs: three 300 mm² dies would be more affordable and could beat this monolithic approach in performance.
https://forums.guru3d.com/data/avatars/m/108/108389.jpg
Wow, the maximum reticle limit is 858 mm² and Nvidia is pushing 826 mm² on a brand-new process node; that's some serious balls they have. Let's hope Titan Ampere and the 3080 Ti won't cost twice as much as their Turing counterparts...
https://forums.guru3d.com/data/avatars/m/273/273678.jpg
Silva:

The bigger problem is die size: at 826 mm² it's even bigger than last generation's. Add the extra cost of the expensive 7nm node and this will, without a doubt, be more expensive.
Krizby:

Wow, the maximum reticle limit is 858 mm² and Nvidia is pushing 826 mm² on a brand-new process node; that's some serious balls they have. Let's hope Titan Ampere and the 3080 Ti won't cost twice as much as their Turing counterparts...
The size includes the HBM2, same as GV100 at 815 mm²; the actual dies are under 800 mm².
https://forums.guru3d.com/data/avatars/m/263/263507.jpg
I can't wait to see a new GPU line crushing all the existing GPUs. My upgrade path is: can't remember > dual GeForce 6600 GT > ATI 4870 (300 USD) > GTX 560 Ti (MSRP: 250 USD) > GTX 970 (MSRP: 330 USD) > GTX 1080 (MSRP: 600 USD). The GTX 1080 is the only one I bought "used" (I think for 500€, when the GTX 1080 Ti was released), because it started to be in a price range I don't like to play in.
https://forums.guru3d.com/data/avatars/m/270/270041.jpg
So if the Titan is 6912 cores, hopefully the 3080 Ti will be around 6500 cores. Just hoping it's under the £1000 mark this time around. Gotta say, based on the table, this is clocked fairly low, maybe due to the insane core count; I wonder if it has a lot of OC room.
https://forums.guru3d.com/data/avatars/m/273/273678.jpg
Ricepudding:

So if the Titan is 6912 cores, hopefully the 3080 Ti will be around 6500 cores. Just hoping it's under the £1000 mark this time around. Gotta say, based on the table, this is clocked fairly low, maybe due to the insane core count; I wonder if it has a lot of OC room.
The Titan will not be 6912 cores. GA102 is projected to be 5376, and that's only the fully enabled package; the 3080 Ti will be 5120 at best.
https://forums.guru3d.com/data/avatars/m/270/270041.jpg
Astyanax:

The Titan will not be 6912 cores. GA102 is projected to be 5376, and that's only the fully enabled package; the 3080 Ti will be 5120 at best.
You're right, Tesla cards normally have more cores than Titan cards, my bad. Though I can't see it being an ~1800-core difference; looking at Turing, it's 4608 vs 5120 cores (a 512-core difference, or 768 cores vs the 2080 Ti). That just seems like far too big a gap, right? Unless they want a 3080 Ti Super a year down the line, and a Titan Super? Unsure where you got 5120 from; everywhere I look shows the 3080 Ti at 5376 cores, which still baffles me given the huge gap in between, unless the Titan is going to have a far bigger core-count difference this time around, maybe to allow for that 80 Ti Super.
https://forums.guru3d.com/data/avatars/m/273/273678.jpg
Ricepudding:

Unsure where you got 5120 from; everywhere I look shows the 3080 Ti at 5376 cores
Ti's are almost never fully enabled on a new process. That's why I'm going with the Ti having 4 SMs fused off and the Titan/Quadro 102 having the full 84.
GP102 - 3840, 1080 Ti - 3584, Titan Xp - 3840
TU102 - 4608, 2080 Ti - 4352, RTX Titan - 4608
And just to point out, even Nvidia is fusing parts of the A100, so defect levels must be significant: 6912 cores out of 8192.
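Those projected core counts are just SM counts multiplied by cores per SM. A minimal sketch of the arithmetic, assuming Ampere keeps the 64 FP32 cores per SM of Volta/Turing (which the A100's 6912-cores-from-108-SMs ratio suggests):

```python
# Back-of-envelope: CUDA cores = SMs x FP32 cores per SM.
CORES_PER_SM = 64  # Volta/Turing figure; assumed to hold for Ampere

def cuda_cores(sms: int) -> int:
    return sms * CORES_PER_SM

# Full GA100 has 128 SMs; the A100 ships with 108 enabled.
print(cuda_cores(128))  # 8192 (full die)
print(cuda_cores(108))  # 6912 (A100 as announced)
print(f"{cuda_cores(108) / cuda_cores(128):.1%} of the die enabled")  # 84.4%

# The projected GA102: 84 SMs fully enabled, 4 fused off for a Ti.
print(cuda_cores(84))      # 5376 (Titan/Quadro)
print(cuda_cores(84 - 4))  # 5120 (projected 3080 Ti)
```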
data/avatar/default/avatar40.webp
D1stRU3T0R:

Ahm, no? In MOST cases it's user error: bad XMP, bad PSU, using PCIe 4.0, Anti-Lag enabled, etc. Yeah, it should work even with these, but ffs, people are so dumb nowadays they can't troubleshoot literally anything. It SHOULD be plug and play, but most just buy whatever random XMP kit with a 400W PSU and then cry that x and y is crashing. Idk, was the DX9 downclock fixed? AMD should stop with these useless power-saving features, ffs
LOL, you blame "user error" and then list a bunch of things that aren't user related. Typical cognitive dissonance from AMD fanboys.
https://forums.guru3d.com/data/avatars/m/145/145154.jpg
fry178:

@Rx4speed/Legacy-ZA And? Do you also post "oh, I won't buy a Taycan because it's at least double the price of any other e-car..."?
Porsche at least has the good sense to name it an entirely different model if it's skipping 2 or 3 price brackets in a single generation.
https://forums.guru3d.com/data/avatars/m/196/196426.jpg
If their Tesla-class enterprise chip is so severely cut down (108 out of 128 SMs, barely 84%), I can't imagine how much lower a theoretical 3080 Ti could be with this defect rate... Yields on 7nm must be terrible if they have to fuse off so many SMs and completely drop a stack of HBM2 (5 out of 6, the one in the picture being just mechanical support).

It's still an impressive chip, but damn, not a good start for yields! Perhaps by the time they release the gaming ones they can produce them with fewer defects, so the professional line is upgraded to, let's say, 116/128, and these 108-SM models become the actual 3080 Ti chips. In any case, a non-Ti 3080 could be 4/6 of the full-fat Ampere, so 84 or 86 SMs as the full chip, or 80 when slightly cut down; that's still a massive 4800-CUDA-core monster, way better than the current Turing 2080, which has only 3072 CUDA cores. All else being equal (which it most certainly is not), the 3080 could still be 56% faster than the 2080!! That puts it way above the 2080 Ti... I truly expect the 3070 will be faster than the 2080 Ti too, even if just barely... What a monster!!
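That 56% figure is a straight core-count ratio. A quick check of the arithmetic, using the post's own speculative numbers (neither 4800 nor 3072 is a confirmed spec):

```python
# Naive uplift estimate from core counts alone; both figures are
# the poster's speculation, not confirmed specifications.
cores_3080_guess = 4800  # speculated cut-down Ampere 3080
cores_2080       = 3072  # the post's figure for the Turing 2080

uplift = cores_3080_guess / cores_2080 - 1
print(f"Core-count-only uplift: {uplift:.0%}")  # ~56%
```

Real scaling also depends on clocks, memory bandwidth and architectural changes, so this is at best an upper bound on what core counts alone would buy.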
https://forums.guru3d.com/data/avatars/m/180/180832.jpg
Moderator
Meanwhile.......
https://forums.guru3d.com/data/avatars/m/216/216349.jpg
Krizby:

Wow, the maximum reticle limit is 858 mm² and Nvidia is pushing 826 mm² on a brand-new process node; that's some serious balls they have. Let's hope Titan Ampere and the 3080 Ti won't cost twice as much as their Turing counterparts...
It's not a question of balls. Nvidia knows some customers are desperate for more performance and don't care how much the hardware costs, so Nvidia is catering to them with this kind of product. It's just a smart business decision, nothing else.
data/avatar/default/avatar28.webp
The TSMC 7nm node has been getting a lot of work from Apple and AMD in particular, especially with the upcoming consoles. It has likely matured enough to reach a decent yield rate.
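How hard an 826 mm² die leans on yields can be eyeballed with the classic Poisson yield model, Y = exp(-A * D0). A rough sketch; the defect density used here is an illustrative guess, not a published TSMC figure:

```python
import math

def poisson_yield(die_area_mm2: float, d0_per_cm2: float) -> float:
    """Poisson yield model: Y = exp(-A * D0), with A in cm^2."""
    return math.exp(-(die_area_mm2 / 100.0) * d0_per_cm2)

D0 = 0.1  # defects per cm^2 -- illustrative guess, not a TSMC number

for area_mm2 in (826, 754, 300):  # GA100, TU102, a Ryzen-style chiplet
    y = poisson_yield(area_mm2, D0)
    print(f"{area_mm2:>4} mm^2 die: ~{y:.0%} defect-free")
```

Even at a mature defect density, a near-reticle-limit die leaves relatively few perfect candidates per wafer, which is exactly why shipping with SMs fused off (and salvaging partial dies) makes economic sense.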
data/avatar/default/avatar11.webp
"Crysis Remastered is my biyatch." - Lord Ampere
https://forums.guru3d.com/data/avatars/m/270/270008.jpg
Rx4speed:

Whatever card uses that chip will be $3000. I'm still using my 1080 Ti on a 1440p 165Hz G-Sync monitor. I'll sell it and buy a good 1080 before I pay 3 grand for a video card. I was a regular buyer, at release, of every Ti released, FOREVER. I skipped the 2000 series, and I'll move to console before I pay even $1500 for a card. What was the best 1080 Ti, maybe the MSI Trio X, and it was $800-850 new? They doubled the price of the 2080 Ti with the worst generation-to-generation perf gain ever. Now I bet they want even more. I already dumped Intel for this greedy, lazy behavior; Nvidia, you are next. I've never seen price craziness like this, and I've owned PCs back to the TRS-80 and Apple II Plus.
The predecessor DGX-2 box on the market today is $399,000. This chip is intended to go in the DGX-2's replacement, the DGX A100, and eight of these chips will make up the DGX A100.
https://forums.guru3d.com/data/avatars/m/270/270008.jpg
Astyanax:

The size includes the HBM2, same as GV100 at 815 mm²; the actual dies are under 800 mm².
Agreed. I'm not surprised, knowing Nvidia will charge about $400,000 for the DGX A100, which will have 8 of these chips in it. Nvidia has plenty of margin on this beast.
data/avatar/default/avatar21.webp
angelgraves13:

I was hoping we'd see something of DLSS 3.0 or new gaming tech, but it looks like that's being kept top secret for now.
That was never expected; this is an HPC conference.
https://forums.guru3d.com/data/avatars/m/248/248994.jpg
Gonokhakus:

Well, I'm not 100% sure, but I think the DGX-2 had 2 Xeons.
Based on info elsewhere it seems like Nvidia decided to go with two 64-core EPYCs. Not bad, and certainly the right choice.
data/avatar/default/avatar24.webp
Mundosold:

LOL, you blame "user error" and then list a bunch of things that aren't user related. Typical cognitive dissonance from AMD fanboys.
What do you mean? Most of the problems were seen on the AMD platform with XMP enabled (and you know Ryzen isn't RAM friendly). The PSU isn't the user's fault? OK. AMD fanboy? Lmao, yeah sure, that's why I still refuse to buy Ryzen to this day.