GeForce RTX 2070 in November with 2304 shader cores

It would seem to me Nvidia would kill the chances of studios really bothering to put the HW ray tracing to much use if they removed the support from the 2070. Aren't that and the 2060 going to be the most popular GPUs among gamers at large, unlike the more expensive and more high-end 2080 and 2080 Ti? I don't know how easy or difficult the ray tracing is to implement in the engine, but considering games are typically released with glaring bugs, it's quite obvious few studios want to spend a single second of extra time on software. Would the game makers go through the trouble if only a small percentage of players could benefit from it, yet all game buyers would still need to pay for the feature, regardless of whether they can actually use it?
Kaarme:

I don't know how easy or difficult the ray tracing is to implement in the engine, but considering games are typically released with glaring bugs, it's quite obvious few studios want to spend a single second of extra time on software.
It will be part of Nvidia's GameWorks, which is designed to be easily implemented. In games, you will have an RT technology toggle on/off in the same way you can turn on/off PhysX. Will we see widespread implementation of it? Potentially yes, as it's the next logical advancement of PC graphics; however, in what form - that will be the main question here. I'm sure AMD will catch up. Personally, what I don't want to see is stupid "manufacturer only" type of technology where you will have RayTracing for Nvidia and some form of BobTracing for AMD, resulting in incompatibilities and one-sided titles.
Knowing Nvidia, a 2070 Ti will most likely follow, and perhaps a 2060 Ti will see the light of day this time around.
Lucifer:

Knowing Nvidia, a 2070 Ti will most likely follow, and perhaps a 2060 Ti will see the light of day this time around.
Wasn't the 1070 Ti only released because the Vega 56 outperformed the 1070? Considering AMD hasn't apparently got anything to release in the foreseeable future, Nvidia wouldn't necessarily see the need to complicate their GPU portfolio. They might, of course, do it a year from now just to have something fresh to offer to the market.
cryohellinc:

I'm sure AMD will catch up. Personally, what I don't want to see is stupid "manufacturer only" type of technology where you will have RayTracing for Nvidia and some form of BobTracing for AMD, resulting in incompatibilities and one-sided titles.
The raytracing is implemented via Microsoft's DXR - RTX accelerates DXR. AMD will have its own acceleration method.
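For reference, a minimal sketch of how an engine could probe for DXR support at runtime, assuming Windows 10 (October 2018 Update or later) with the matching SDK; the device-creation boilerplate is generic and nothing here is tied to a particular vendor or title.

```cpp
// Minimal DXR capability probe (sketch). Assumes a Windows 10 1809+ SDK;
// link against d3d12.lib. Nothing here is vendor-specific: the same check
// works whether the acceleration comes from Nvidia, AMD or a fallback layer.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    // Create a default D3D12 device on the primary adapter.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
    {
        std::printf("No D3D12 device available.\n");
        return 1;
    }

    // OPTIONS5 reports the raytracing tier exposed by the driver.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    const bool dxrSupported =
        SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &options5, sizeof(options5))) &&
        options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;

    // A game would typically use a check like this to decide whether to
    // show (or grey out) an in-game ray tracing toggle.
    std::printf("DXR supported: %s\n", dxrSupported ? "yes" : "no");
    return 0;
}
```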
Kaarme:

It would seem to me Nvidia would kill the chances of studios really bothering to put the HW ray tracing to much use if they removed the support from the 2070.
I doubt it will matter for the first generation of RTX cards, as even the 2080 & 2080 Ti will not be powerful enough to do real-time 'in game' ray tracing except for very limited amounts on small surfaces. By the time GPUs are fast enough to do real-time ray tracing much more widely in games, the technology will have gotten cheaper and Nvidia will probably expand the features to their lesser cards.
I am usually pretty positive on Nvidia, but doesn't this info seem a little lackluster? 2070 with 2304 shaders vs the 1070 with 1920? That's only about 20% more after 2.5 years. The higher memory bandwidth will help some but won't provide major gains with the same 8GB. This jump seems a lot more 1070>1170 (haha) than 1070>2070.
SSD_PRO:

I am usually pretty positive on Nvidia, but doesn't this info seem a little lackluster? 2070 with 2304 shaders vs the 1070 with 1920? That's only about 20% more after 2.5 years. The higher memory bandwidth will help some but won't provide major gains with the same 8GB. This jump seems a lot more 1070>1170 (haha) than 1070>2070.
The node reduction is not drastic, the die is also bigger, and there's not much margin to increase clocks without exceeding 250 W. Also, Intel has been doing this for years and people still buy their CPUs. 🙂
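To put rough numbers on that: a quick back-of-the-envelope comparison using the 2304 vs 1920 shader counts from the thread; the bus width and memory data rates in the bandwidth part (8 Gbps GDDR5 vs 14 Gbps GDDR6 on a 256-bit bus) are assumptions for illustration, not confirmed specs.

```cpp
// Back-of-the-envelope comparison of the rumored RTX 2070 vs the GTX 1070.
// Shader counts come from the article; memory figures are assumptions
// (256-bit bus for both, 8 Gbps GDDR5 vs 14 Gbps GDDR6) for illustration.
#include <cstdio>

// Peak memory bandwidth in GB/s: bus width (bits) / 8 * data rate (Gbps).
static double bandwidth_gbps(int bus_width_bits, double data_rate_gbps)
{
    return bus_width_bits / 8.0 * data_rate_gbps;
}

int main()
{
    const int shaders_1070 = 1920;
    const int shaders_2070 = 2304;
    const double shader_gain =
        100.0 * (shaders_2070 - shaders_1070) / shaders_1070;    // +20%

    const double bw_1070 = bandwidth_gbps(256, 8.0);   // 256 GB/s (GDDR5)
    const double bw_2070 = bandwidth_gbps(256, 14.0);  // 448 GB/s (assumed GDDR6)
    const double bw_gain = 100.0 * (bw_2070 - bw_1070) / bw_1070; // +75%

    std::printf("Shader cores: +%.0f%%\n", shader_gain);
    std::printf("Memory bandwidth: +%.0f%% (%.0f -> %.0f GB/s, assumed)\n",
                bw_gain, bw_1070, bw_2070);
    return 0;
}
```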
Anyone know if the RTX 2070 and especially the RTX 2060 will have lower power consumption than the GTX 1070 and GTX 1060 respectively?
Not even sure if that could be considered an upgrade from my 1070...
If you consider a 1080 an upgrade, maybe. Looks to be right around the level of a 1080... maybe a bit more.
NiColaoS:

Anyone know if the RTX 2070 and especially the RTX 2060 will have lower power consumption than the GTX 1070 and GTX 1060 respectively?
They won't. We're fked when it comes to power consumption because all perf/Watt savings will go toward higher performance. Luckily Nvidia knows their job, and since they are in a good place atm, instead of the MOAR CORES approach, which is bound to run into the power and scaling wall, they chose this release to revolutionize their entire GPU rendering strategy.
Kaarme:

Wasn't the 1070 Ti only released because the Vega 56 outperformed the 1070? Considering AMD hasn't apparently got anything to release in the foreseeable future, Nvidia wouldn't necessarily see the need to complicate their GPU portfolio. They might, of course, do it a year from now just to have something fresh to offer to the market.
Yes, the 1070 Ti was a stopgap, but one that became popular despite cannibalizing 1080 sales. It is extremely doubtful they will put anything between the 2070 and the 2080, as they started with the 2080 Ti at launch.
Noisiv:

They won't. We're fked when it comes to power consumption because all perf/Watt savings will go toward higher performance. Luckily Nvidia knows their job, and since they are in a good place atm, instead of the MOAR CORES approach, which is bound to run into the power and scaling wall, they chose this release to revolutionize their entire GPU rendering strategy.
Lolz if you think Nvidia doesn't have "moar cores" in your future. There is no problem with scaling, and power is reduced by the process. Nvidia has been working on this since AMD's original design with Vega... which is fully scalable. And don't get all fanboy on the last statement, it is fact.
tunejunky:

Lolz if you think Nvidia doesn't have "moar cores" in your future. There is no problem with scaling, and power is reduced by the process. Nvidia has been working on this since AMD's original design with Vega... which is fully scalable. And don't get all fanboy on the last statement, it is fact.
Actually, I know for a fact that Nvidia already has fewer traditional shader cores in their RTX lineup than it could have. And that is because they chose to add a ray-tracing-specific ASIC, which eats into both power and area, although it's an order of magnitude more efficient in RT-specific collision calculations than the general-purpose CUDA core. So yeah, a forward-looking, IQ-bettering, RT-specialized architecture combining both GP shader + tensor cores, instead of brute-forcing MOAR CORES, at a time when they can afford to dabble in new techniques. I call that smart. And it's exactly the opposite of what AMD did with Vega, investing in future techniques that either do not work, or are not supported, at a time when they had been lagging to begin with.
Noisiv:

Actually, I know for a fact that Nvidia already has fewer traditional shader cores in their RTX lineup than it could have. And that is because they chose to add a ray-tracing-specific ASIC, which eats into both power and area, although it's an order of magnitude more efficient in RT-specific collision calculations than the general-purpose CUDA core. So yeah, a forward-looking, IQ-bettering, RT-specialized architecture combining both GP shader + tensor cores, instead of brute-forcing MOAR CORES, at a time when they can afford to dabble in new techniques. I call that smart. And it's exactly the opposite of what AMD did with Vega, investing in future techniques that either do not work, or are not supported, at a time when they had been lagging to begin with.
All of which is true, but misses my point entirely. The name of the game is Yield, simply because that is directly related to profit, cost, and the ability to sell. Larger chips have lower yields - that is just the way manufacturing works, as the silicon wafers are all the same size to start with. And it's why jumping to a smaller process always increases yield. Nvidia is "tick-tocking" atm; RT (the 2080 and 2080 Ti Turing at least) is a new architecture on the most refined process (but not the smallest). And they're jumping the gun a little bit to quiet the waters and have their next gen available (in the market) by the time AMD releases Navi/whatever the hell they want to call it. And btw... Nvidia IS going for scalable modules, they're just far behind in that area.
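On the "larger chips have lower yields" point, a rough sketch of why die area matters so much, using the textbook gross-dies-per-wafer approximation and a simple Poisson yield model; the die areas and defect density below are made-up illustrative numbers, not foundry data.

```cpp
// Rough illustration of why bigger dies mean fewer good chips per wafer.
// Uses the standard gross-dies-per-wafer approximation and a Poisson yield
// model; the die areas and defect density are illustrative assumptions only.
#include <cmath>
#include <cstdio>

static const double kPi = 3.14159265358979323846;

// Approximate number of whole dies that fit on a circular wafer.
static double gross_dies(double wafer_diameter_mm, double die_area_mm2)
{
    const double r = wafer_diameter_mm / 2.0;
    return kPi * r * r / die_area_mm2
         - kPi * wafer_diameter_mm / std::sqrt(2.0 * die_area_mm2);
}

// Poisson yield model: fraction of dies with zero fatal defects.
static double poisson_yield(double die_area_cm2, double defects_per_cm2)
{
    return std::exp(-die_area_cm2 * defects_per_cm2);
}

int main()
{
    const double wafer_mm = 300.0; // standard 300 mm wafer
    const double d0 = 0.2;         // assumed defect density per cm^2 (illustrative)

    // Two hypothetical dies: a mid-size GPU vs a big GPU (areas are made up).
    const double die_areas_mm2[] = {300.0, 750.0};
    for (double area_mm2 : die_areas_mm2)
    {
        const double dies  = gross_dies(wafer_mm, area_mm2);
        const double yield = poisson_yield(area_mm2 / 100.0, d0);
        std::printf("%4.0f mm^2 die: ~%3.0f gross dies, ~%2.0f%% yield, ~%3.0f good dies\n",
                    area_mm2, dies, 100.0 * yield, dies * yield);
    }
    return 0;
}
```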
tunejunky:

And btw... Nvidia IS going for scalable modules, they're just far behind in that area.
How are they far behind?
@tunejunky The name of the game is not Yield. At all. Case in point: Volta's V100 - a GPU which is barely producible, yet has been raking in profits and lifting the company value ever since its creation. If the name of the game was Yield, AMD would be AT THE VERY LEAST equal to Nvidia. The name of the game is profit = addressing as wide a market as possible, with products as competitive as possible, at margins as high as possible. And margins are only partially influenced by yield, because the BoM is only a tiny fraction of Nvidia's spending, being dwarfed by R&D and salaries.
tunejunky:

Yes, the 1070 Ti was a stopgap, but one that became popular despite cannibalizing 1080 sales. It is extremely doubtful they will put anything between the 2070 and the 2080, as they started with the 2080 Ti at launch.
If you think about it, they didn't actually change much. They couldn't launch a new Titan because the current Titan V would still be more powerful and more expensive. So they launched the Ti at the OG Titan's launch price point. We will probably see a 2090 or some such launch in the slot where the Ti currently sits. I'll be very interested in the performance of these cards. Various reviewers were saying that the 2080 Ti was having issues hitting smooth frame rates at 1080p in the hands-on demos at the event, with noticeable dropped frames. Things will likely improve with drivers, but it's concerning. I'm also disappointed that they are relying on studios to implement Tensor-assisted anti-aliasing (the feature that's being implemented in the vast majority of "RTX Ready" titles) rather than putting it into the drivers. It seems like low-hanging fruit that could have given a universal boost to performance and image quality, but apparently it's not as easy as it seems.