NVIDIA GeForce RTX 4000 Ada Lovelace series design reportedly completed (5nm)
I_Eat_You_Alive
If Nvidia is smart (would be a first in over a decade) they will hard-code (talking non-writable ROM) a firmware/BIOS lockout for Bitcoin and all the other virtual currency mining on the 4000 series gaming cards. Put them on the market first, then introduce a dedicated 4000 series mining card at 2x the MSRP of the gaming card six months later. Nvidia screwed the pooch with the 3000 series cards; they are going to shelf-warm for a long time until they become obsolete and get clearance pricing at a fraction of what they are getting now. AMD and Intel have a double-wide door to walk through to deliver a serious ass-whooping to Nvidia.
DannyD
Yes, Nvidia should make cards for miners and charge twice the price; let the miners lap up these amazing cards while those of us with a brain buy up all the 20 series cards no-one wants.
Reddoguk
Mates, this hobby of ours is very fickle. I only just got a 3090 and now I'm told it will be spanked in only a year's time by a beast with nearly double the unified shaders, from 10496 to over 18000, and from 82 SMs to 144 SMs. Makes you kinda angry knowing that all that cash you spent will probably be beaten in a year by the 4080.
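For what it's worth, those shader numbers follow straight from the SM counts: Ampere packs 128 FP32 shaders per SM, and the Ada rumours assume the same ratio. A quick back-of-the-envelope check (the 144-SM figure is a rumour, not a confirmed spec):

```python
SHADERS_PER_SM = 128  # Ampere's FP32 shaders per SM; the rumours assume Ada keeps this ratio

rtx_3090 = 82 * SHADERS_PER_SM         # 10496 shaders (GA102 as shipped in the 3090)
ad102_rumoured = 144 * SHADERS_PER_SM  # 18432 shaders (rumoured full AD102)

print(rtx_3090, ad102_rumoured, f"{ad102_rumoured / rtx_3090:.2f}x")
# -> 10496 18432 1.76x, i.e. "nearly double"
```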
DannyD
6900xt no good then?
insp1re2600
Only worth getting the top end on day one to maximise use and spread out the cost. I'd have never bought a 3090 this late in the game.
JJJohan1
I bought a 3080 to replace my 970 in November last year, and I'm still only halfway through the order queue now. I think I'll wait it out so I can get my card at the original retail price; not overly optimistic about 4080 availability.
tunejunky
This is actually a very hard place for Nvidia to be in, despite the profits and appearances.
Nvidia has led in uArch for decades; they have lovingly embraced every square nanometer of die space and pushed the envelope on circuit density.
But MCM changes everything, and does it categorically.
AMD has taken a long-term view of competition since Dr. Su took the reins, forming long-term strategic partnerships that are now bearing a bumper crop of fruit.
Whatever you call the process node, AMD will have an advantage for the next ten years.
If you are the smaller company (like AMD vs. Nvidia) you have to play to your strengths. AMD's main strength was fabrication, but unloading GlobalFoundries was necessary to provide the R&D funding that Dr. Su knew both sides of the portfolio needed. So they sold the fab but kept the brains.
And those brains knew TSMC wanted a taste of x86, so together they created Ryzen and, more importantly, Threadripper as a way to introduce new technologies to a stagnant PC market while Intel was resting on its laurels and nibbling bon-bons.
Threadripper used an entirely new and revolutionary technology called Infinity Fabric to sew together (or, as Intel would say, glue) chiplets, creating unheard-of performance in the PC marketplace.
Not being complacent, AMD shook up the marketplace and became Wall St.'s darling, raising even more R&D dollars to reinvest in this new technology. But GPUs have entirely different internal requirements than CPUs, and their sensitivity to latency is on an entirely different level... chiplet GPUs were thought to be impossible.
But no: difficult, not impossible. Which is why Nvidia is in a spot until their technology (and plans for ARM) ripen.
And that spot is created by the fabrication experience of AMD and Intel. Even though Intel's GPU is mid-market (at best), they have the manufacturing ability to make MCMs.
Even if RDNA 3 as a uArch isn't as good as Nvidia's, it can outperform at any level by scale, and scale is drastically less expensive because you are making lots of very high-yield parts, which multiplies cost savings at every level (see the rough sketch below).
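To put rough numbers on that, here is a minimal sketch assuming a simple Poisson defect-yield model (yield = e^(-area x defect density)) and known-good-die testing, where bad chiplets are discarded before packaging. The defect density and die sizes are illustrative assumptions, not figures for any real node or product:

```python
import math

DEFECT_DENSITY = 0.1  # defects per cm^2 (assumed, purely illustrative)

def die_yield(area_mm2: float) -> float:
    """Fraction of dies that come out defect-free under the Poisson model."""
    return math.exp(-(area_mm2 / 100.0) * DEFECT_DENSITY)

def wafer_mm2_per_good_gpu(die_mm2: float, dies_per_gpu: int) -> float:
    """Wafer area consumed per working GPU when bad dies are binned out before packaging."""
    return dies_per_gpu * die_mm2 / die_yield(die_mm2)

mono = wafer_mm2_per_good_gpu(600, 1)  # hypothetical 600 mm^2 monolithic GPU
mcm = wafer_mm2_per_good_gpu(150, 4)   # four tested 150 mm^2 chiplets

print(f"monolithic: {die_yield(600):.0%} yield, {mono:.0f} mm^2 of wafer per good GPU")
print(f"4x chiplet: {die_yield(150):.0%} yield, {mcm:.0f} mm^2 of wafer per good GPU")
# With these assumed numbers the chiplet build needs ~36% less wafer area
# per working GPU - the "high-yield parts multiply cost savings" point.
```

The monolithic die and the four chiplets have the same raw Poisson yield for the same total silicon; the savings come from testing chiplets individually, so one defect kills 150 mm^2 instead of 600 mm^2.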
Indeed, if marketing's ugly head is nowhere to be seen, you could create a 3090 killer for under $800, with most of that cost coming from GDDR and VRMs. Best of all for enthusiasts, you could truly and comfortably choose the same build quality up and down the stack, plus or minus the MCM count (or disabled cores, or both), and even allow a very large socket (2x-4x) for the datacenter.
And I've got to tell you, enterprise loves the idea of more grunt in less space, especially as it replaces many other components with one "card". Basically better than NVLink could ever hope to perform, at lower cost and heat (i.e. one card replacing 3-4).
All a single MCM has to do is perform at today's entry level (though it will be better than that) and the economies of scale upset the marketplace.
And it couldn't happen fast enough.
Denial
https://research.nvidia.com/sites/default/files/pubs/2019-08_A-0.11-pJ/Op%2C//HotChips_RC18_final.pdf
Either way, I agree that MCM is 100% the future for both; I just think gaming might stay monolithic for another generation. I'm not entirely convinced AMD is doing MCM for gaming next round either. I personally think the MCM leaks are about their next-gen CDNA chip.
Idk, maybe - the research on that came out in 2017, but I think the point of their papers would still hold true. Gaming GPU requirements are significantly different than scientific/AI workloads, especially when it comes to scheduling - which is basically what they said would lead to issues.
More recently they did RC18 as an MCM for inferencing; while I'm sure they learn something from projects like this, it's probably much different than doing a traditional GPU as MCM.