Cyberpunk 2077 Patch Update: Introducing RT Overdrive & Path Tracing Technology Preview

SharkyUK:

I think that now is the right time to push RT/PT, as it will take some further generational changes and hardware evolution to "get there". Whether or not we choose to invest at this point is a different matter, but we are moving towards another way of rendering 3D worlds and I think that's really exciting. Right now, implementing RT/PT with current GPU tech (and for the foreseeable future) means compromises, as we are still very much in the age of rasterisation. That is no bad thing, but it does require an extraordinary amount of work, smoke and mirrors from developers to produce the sort of (non-RT) visuals we have become accustomed to over the years. As crazy as it may sound, a "purer" PT rendering pipeline is arguably much easier to implement, maintain, and scale. Or it will be, once the underlying hardware is a little more performant and less constrained by the issues faced when straddling the line between rasterised and RT/PT-generated visuals (or the marriage thereof). I've been pleasantly surprised by how far the technology has come in the last couple of generations; it has actually progressed further than I expected. Despite the negative feedback and derision from some corners of the gaming and technology world, I think this is quite an achievement, and it only fuels my excitement for what's to come. I can't wait to see what we can come up with over the next few years.
At 1440p, all maxed out, a 3090 is great with DLSS Performance. If you want 4K, you need a 4090, period. AMD... AMD... hahaha, sorry, but the 7900 series is sh*t at this right now.
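(For anyone curious what SharkyUK's "purer" path-traced pipeline boils down to conceptually, here is a minimal, hypothetical Python sketch: a Monte Carlo estimate of the rendering equation for a single Lambertian surface under a uniform sky. The scene and all constants are made up purely for illustration; this is not code from Cyberpunk 2077, RT Overdrive, or any real engine.)

[code]
# Minimal sketch of the core idea behind path tracing: estimate the rendering
# equation by Monte Carlo sampling. Scene assumption (made up for illustration):
# a Lambertian surface with albedo 0.6 lit by a uniform sky of radiance 1.0,
# so the exact reflected radiance is simply albedo * sky.
import math
import random

ALBEDO = 0.6          # diffuse reflectance of the surface (hypothetical value)
SKY_RADIANCE = 1.0    # constant incoming radiance from every sky direction

def sample_uniform_hemisphere_cos_theta():
    """Return cos(theta) of a direction drawn uniformly from the hemisphere."""
    # For uniform hemisphere sampling, cos(theta) is itself uniform in [0, 1].
    return random.random()

def estimate_outgoing_radiance(num_samples):
    """One-bounce Monte Carlo estimate of the reflected radiance."""
    total = 0.0
    for _ in range(num_samples):
        cos_theta = sample_uniform_hemisphere_cos_theta()
        pdf = 1.0 / (2.0 * math.pi)      # uniform hemisphere pdf
        brdf = ALBEDO / math.pi          # Lambertian BRDF
        total += SKY_RADIANCE * brdf * cos_theta / pdf
    return total / num_samples

if __name__ == "__main__":
    exact = ALBEDO * SKY_RADIANCE
    for n in (16, 256, 4096):
        print(f"{n:5d} samples: {estimate_outgoing_radiance(n):.4f} (exact {exact:.4f})")
[/code]

The estimate converges to albedo x sky radiance, and the noise at low sample counts is exactly why real-time path tracing leans so heavily on denoisers and upscalers.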
Digital Foundry has an excellent video showing the differences between maxed-out regular settings, Psycho RT and the new Overdrive RT. [youtube=1K8Br6jHkcs] I'm sure you'll agree that Overdrive definitely produces more realistic scenes.
XP-200:

^^^ 550 watts!!!... My oil radiator pulls that on half setting heating the room. Lol
After this realization, I hope you no longer use your radiator but game instead.
southamptonfc:

A rare glimpse of the future and I like it
With the ever-increasing power demands of GPUs, apparently little gain in power efficiency from generation to generation, and more and more emphasis being put on faking frames via denoising, upscaling and interpolation... I'm beginning to wonder if we'll actually get to see that future before silicon manufacturing hits its peak. And no other material seems to be ready anytime soon; carbon nanotube electronics seem like a pipe dream. It kind of seems that the technology to produce something like the Star Trek holodeck, with real-time, truly realistic rendering of everything around you, is impossible within the physics of the known universe. Once we hit 10-20-atom-sized circuitry, it's done; there's no more shrinking, it's impossible. And we are not that far away, with 3-4 nm nodes in research now.
wavetrex:

With the ever-increasing power demands of GPUs, apparently little gain in power efficiency from generation to generation, and more and more emphasis being put on faking frames via denoising, upscaling and interpolation... I'm beginning to wonder if we'll actually get to see that future before silicon manufacturing hits its peak. And no other material seems to be ready anytime soon; carbon nanotube electronics seem like a pipe dream. It kind of seems that the technology to produce something like the Star Trek holodeck, with real-time, truly realistic rendering of everything around you, is impossible within the physics of the known universe. Once we hit 10-20-atom-sized circuitry, it's done; there's no more shrinking, it's impossible. And we are not that far away, with 3-4 nm nodes in research now.
Nah, there's tons of space. It's going to be difficult and expensive, obviously, but we can build transistors one atom thick, and current transistors are something like ~1000 atoms across - then we'll start to stack them. There are definitely a lot of advancements that need to happen (X-ray lithography next) to get us there, but I don't think we are anywhere near the peak.
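(To put rough numbers on the "how many atoms wide?" exchange above, here is a quick back-of-the-envelope Python script. The ~0.25 nm atom spacing and the ~50 nm / ~25 nm pitch figures are approximate, commonly quoted values, not exact process data.)

[code]
# Back-of-the-envelope conversion between "atoms" and nanometres for the
# exchange above. Assumption: ~0.25 nm silicon atom-to-atom spacing (the Si
# lattice constant is ~0.543 nm, the nearest-neighbour bond ~0.235 nm).
ATOM_SPACING_NM = 0.25

def atoms_to_nm(n_atoms):
    """Approximate physical width of a row of n_atoms silicon atoms."""
    return n_atoms * ATOM_SPACING_NM

def nm_to_atoms(width_nm):
    """Approximate number of atoms spanning a feature of width_nm."""
    return width_nm / ATOM_SPACING_NM

# wavetrex's "10-20 atom circuitry" in physical width:
print(f"10 atoms ~ {atoms_to_nm(10):.1f} nm, 20 atoms ~ {atoms_to_nm(20):.1f} nm")

# Commonly quoted leading-edge pitches (approximate), expressed in atoms.
# Note that a "3 nm" node name is marketing, not a physical dimension on the chip.
for feature_nm in (50, 25):
    print(f"{feature_nm} nm pitch ~ {nm_to_atoms(feature_nm):.0f} atoms across")
[/code]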
Why don't they solve the disgusting pop-in, for eff's sake, though?
wavetrex:

With the ever-increasing power demands of GPUs, apparently little gain in power efficiency from generation to generation, and more and more emphasis being put on faking frames via denoising, upscaling and interpolation... I'm beginning to wonder if we'll actually get to see that future before silicon manufacturing hits its peak. And no other material seems to be ready anytime soon; carbon nanotube electronics seem like a pipe dream. It kind of seems that the technology to produce something like the Star Trek holodeck, with real-time, truly realistic rendering of everything around you, is impossible within the physics of the known universe. Once we hit 10-20-atom-sized circuitry, it's done; there's no more shrinking, it's impossible. And we are not that far away, with 3-4 nm nodes in research now.
SLI/Crossfire FTW???:D
H83:

SLI/Crossfire FTW???:D
New version: chiplet.
Does SLI/Crossfire/Chiplet even address the problem he's describing? If anything they just increase the size. They don't improve density.
Denial:

Does SLI/Crossfire/Chiplet even address the problem he's describing? If anything they just increase the size. They don't improve density.
It does, indirectly, because it increases the available performance, and because after a certain point it becomes impossible to increase a chip's density. So multiple chips are a possible solution, but not a miracle one that will allow the existence of a holodeck or similar. Someone will have to come up with a radical solution to improve the performance of GPUs by several times compared to current ones. Don't ask me what that solution might be, because I have no idea.
H83:

It does, indirectly, because it increases the available performance, and because after a certain point it becomes impossible to increase a chip's density. So multiple chips are a possible solution, but not a miracle one that will allow the existence of a holodeck or similar. Someone will have to come up with a radical solution to improve the performance of GPUs by several times compared to current ones. Don't ask me what that solution might be, because I have no idea.
Yeah, but if we are at the absolute density limit, you could just make the chip larger. Splitting the chip up just increases the area it takes up. The reason we do it is cost, not to get around theoretical maximums. Edit: before someone starts talking about electron mobility, the same thing would apply to mGPU/chiplet designs as well, so it's moot. If we're talking pure theory, without cost constraints and with density improvements exhausted, a monolithic, stacked design would be the optimal goal given the constraints of our current understanding of physics.
A larger chip or more chiplets means more power. Even when using chiplets, it doesn't matter if each individual chiplet consumes only 100 W; if you have 8 of them you still consume 800 W, which is approaching space-heater levels of power consumption. I'm not sure at this point which limit will be hit first:
- the power limit for a consumer device (which will probably be dictated by the ~1500 W limit of a US power socket) - unless that country rewires its entire electrical network across tens of millions of homes, we won't ever see gaming computers consuming more than that, and I don't really see people installing dedicated 230 V circuits just for their PCs;
- the shrinking limit of silicon, which is also not far away - it could be just 10 more years of new nodes and that's it, it becomes prohibitively expensive to shrink any further, or physically impossible;
- the money limit, where faster GPUs become ultra-luxury items, so expensive that only the "1%" can afford them.
Unless WW3 starts, I think I'll be around to witness this moment of "peak electronics".
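(The arithmetic behind that first bullet, for reference. The circuit figures are assumptions - a typical US 120 V / 15 A branch circuit and the common 80% rule of thumb for continuous loads - not a statement of any specific electrical code, and PSU efficiency is ignored.)

[code]
# Rough power-budget arithmetic for the post above; all circuit figures are
# hedged assumptions, not code requirements.
CHIPLET_POWER_W = 100      # per-chiplet draw used in the post
NUM_CHIPLETS = 8

US_VOLTAGE_V = 120
US_BREAKER_A = 15
CONTINUOUS_DERATE = 0.80   # common rule of thumb for continuous loads

gpu_power = CHIPLET_POWER_W * NUM_CHIPLETS             # 800 W
circuit_peak = US_VOLTAGE_V * US_BREAKER_A             # 1800 W
continuous_budget = circuit_peak * CONTINUOUS_DERATE   # ~1440 W

print(f"8 x 100 W chiplets    : {gpu_power} W")
print(f"120 V x 15 A circuit  : {circuit_peak} W peak")
print(f"80% continuous budget : {continuous_budget:.0f} W")
# ...and that budget also has to feed the CPU, monitor and anything else on
# the same circuit, which is roughly where the ~1500 W ceiling comes from.
[/code]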
wavetrex:

A larger chip or more chiplets means more power. Even when using chiplets, it doesn't matter if each individual chiplet consumes only 100 W; if you have 8 of them you still consume 800 W, which is approaching space-heater levels of power consumption. I'm not sure at this point which limit will be hit first:
- the power limit for a consumer device (which will probably be dictated by the ~1500 W limit of a US power socket) - unless that country rewires its entire electrical network across tens of millions of homes, we won't ever see gaming computers consuming more than that, and I don't really see people installing dedicated 230 V circuits just for their PCs;
- the shrinking limit of silicon, which is also not far away - it could be just 10 more years of new nodes and that's it, it becomes prohibitively expensive to shrink any further, or physically impossible;
- the money limit, where faster GPUs become ultra-luxury items, so expensive that only the "1%" can afford them.
Unless WW3 starts, I think I'll be around to witness this moment of "peak electronics".
Idk, I guess I envision this completely differently. First, there's still multiple generations' worth of shrink left even with current tech. X-ray lithography will get us much smaller - again, current transistors are around 1000 atoms wide and in theory we can get them down to one or two atoms wide and still have them function. Plus the distance between transistors can still be massively reduced as well. Then you have switching-speed improvements when we move to different materials. That will probably take us to around 2035-40.

But at some point the 1500 W stuff won't matter, because you won't have a GPU in your house anymore; you'll play games via a server. I know that sounds bad, and depending on where you live it might be further off, but in central NJ (USA) I can play GeForce Now in competitive FPS titles and the latency added by Now is less than 30 ms. For 99.99% of games/players it's already more than adequate. In a few generations that will probably be down to 10-15 ms, and it won't matter, because an RTX 8800 will be like $10,000, so you'll gladly take the 15 ms at $30 a month. Once it's in the cloud, they can build massive server farms that draw obscene amounts of power and scale with demand instantly. It will massively increase efficiency, because they'll only need to make one SKU and they can do all kinds of weird things with that design, since they control the cooling, power, etc.
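(A toy latency budget for the cloud-gaming argument above. Every number except the "less than 30 ms added by streaming" figure from the post is an assumption picked for illustration, not a measurement.)

[code]
# Illustrative click-to-photon budget; all figures are assumptions except the
# ~30 ms streaming overhead mentioned in the post above.
local_pipeline_ms = {
    "input sampling": 2,
    "render one 60 fps frame": 17,
    "display scanout": 8,
}
streaming_overhead_ms = {
    "encode on server": 5,
    "network round trip": 15,
    "decode on client": 5,
    "extra buffering": 5,
}

local_total = sum(local_pipeline_ms.values())
added = sum(streaming_overhead_ms.values())

print(f"local click-to-photon : ~{local_total} ms")
print(f"streaming overhead    : ~{added} ms")
print(f"cloud click-to-photon : ~{local_total + added} ms")
[/code]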
^ To all that server BS: "You'll own nothing and you'll be happy." No thanks.
wavetrex:

^ To all that server BS: "You'll own nothing and you'll be happy." No thanks.
I definitely agree, but it's inevitable. Most people's entire lives are like this now: they rent their apartment, lease their car, stream their media, etc. It's only a matter of time before hardware goes this way too, and it makes sense from an efficiency standpoint, and obviously from a business one.
Denial:

Yeah, but if we are at the absolute density limit, you could just make the chip larger. Splitting the chip up just increases the area it takes up. The reason we do it is cost, not to get around theoretical maximums. Edit: before someone starts talking about electron mobility, the same thing would apply to mGPU/chiplet designs as well, so it's moot. If we're talking pure theory, without cost constraints and with density improvements exhausted, a monolithic, stacked design would be the optimal goal given the constraints of our current understanding of physics.
All you said is true, but is there another option? There are clear limits to how big a chip can be and I think Nvidia has almost reached that limit with the 4090. Maybe they can make it a little bigger, but not by much. They can make it on a better node, but how many smaller nodes are left in the future? 2, 3, 4 or 5? I don't know the answer, but we are fast approaching the limits of how small nodes can be made. But just pairing a 4090 with another 4090 increases performance by 100%. And a third one increases it by 200%! I know performance doesn't scale perfectly; this is just an example. I just can't see another solution other than a multi-chip approach, but I could be completely wrong.
wavetrex:

A larger chip or more chiplets means more power. Even when using chiplets, it doesn't matter if each individual chiplet consumes only 100 W; if you have 8 of them you still consume 800 W, which is approaching space-heater levels of power consumption. I'm not sure at this point which limit will be hit first:
- the power limit for a consumer device (which will probably be dictated by the ~1500 W limit of a US power socket) - unless that country rewires its entire electrical network across tens of millions of homes, we won't ever see gaming computers consuming more than that, and I don't really see people installing dedicated 230 V circuits just for their PCs;
- the shrinking limit of silicon, which is also not far away - it could be just 10 more years of new nodes and that's it, it becomes prohibitively expensive to shrink any further, or physically impossible;
- the money limit, where faster GPUs become ultra-luxury items, so expensive that only the "1%" can afford them.
Unless WW3 starts, I think I'll be around to witness this moment of "peak electronics".
All true, but I'm not even considering those factors, otherwise there's simply no solution at all...
H83:

otherwise there's simply no solution at all...
That is precisely what I'm saying. We are rapidly approaching the point where there is no way left to increase performance. From that point forward, only software tricks will make new games more realistic: new psycho-visual algorithms that cheat the brain into thinking something looks better than it actually is (much like video encoding works), or perhaps foveated rendering will become a thing, with all monitors having iris-sensor cameras in them, sending back to the GPU the information about where the gamer is looking... maybe it could even work for 2 or 3 potential viewers... but then recordings of gameplay will look very weird to a viewer. I don't know how long it will be until that point, but between 10 and 100 years away, I think we're closer to 10 than to 100.
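(A rough sketch of why foveated rendering is tempting as one of those "software tricks". The foveal fraction and peripheral shading rate below are made-up illustrative values, not figures from any shipping implementation.)

[code]
# Rough sketch of foveated rendering savings: shade a small "foveal" region at
# full resolution and the periphery at a reduced rate. Ratios are illustrative
# assumptions, not measured figures.
TOTAL_PIXELS_4K = 3840 * 2160

FOVEA_FRACTION = 0.15          # assumed fraction of the screen being fixated
PERIPHERY_SHADING_RATE = 0.25  # assumed 1/4-rate shading outside the fovea

shaded = TOTAL_PIXELS_4K * (FOVEA_FRACTION
                            + (1 - FOVEA_FRACTION) * PERIPHERY_SHADING_RATE)

print(f"full-rate 4K shading : {TOTAL_PIXELS_4K / 1e6:.1f} M pixels/frame")
print(f"foveated shading     : {shaded / 1e6:.1f} M pixels/frame "
      f"({shaded / TOTAL_PIXELS_4K:.0%} of the work)")
[/code]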
H83:

There are clear limits to how big a chip can be
Not really - not in theory. Nvidia is close to the reticle limit, but that's just a tooling thing. There's no issue with building a bigger reticle; it just costs a lot, the only one who would really use it is Nvidia, and in practice the limit is what people will accept for TDP and cost - and no one is going to accept either from a bigger-than-reticle die. As I said, if you're talking about costs, then yes, obviously MCM specifically is the future. But if you're talking about "oh shit, it's 2050, we need a holodeck computer and our transistor design is completely maxed out at the density limit", then MCM isn't really going to solve anything. And it's going to be a long time before we get there. Talk to anyone who recently graduated in microelectronic engineering and they'll laugh about how people think shrinking is going to stop soon. It's basically the "there's no gravity in space" of the hardware/gaming community.
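(The numbers behind the reticle-limit point, using commonly quoted approximate figures - a 26 mm x 33 mm exposure field and a ~608 mm^2 AD102 die in the RTX 4090; treat both as ballpark values.)

[code]
# Reticle-limit arithmetic for the exchange above; figures are approximate,
# commonly quoted values, not official specifications.
RETICLE_FIELD_MM2 = 26 * 33    # ~858 mm^2 maximum single-exposure die area
AD102_DIE_MM2 = 608            # approximate die size of the RTX 4090's GPU

headroom = RETICLE_FIELD_MM2 - AD102_DIE_MM2
print(f"reticle field : {RETICLE_FIELD_MM2} mm^2")
print(f"AD102 die     : {AD102_DIE_MM2} mm^2 "
      f"({AD102_DIE_MM2 / RETICLE_FIELD_MM2:.0%} of the field)")
print(f"headroom      : {headroom} mm^2 before stitching or MCM is required")
[/code]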
Denial:

Not really - not in theory. Nvidia is close to the reticle limit, but that's just a tooling thing. There's no issue with building a bigger reticle; it just costs a lot, the only one who would really use it is Nvidia, and in practice the limit is what people will accept for TDP and cost - and no one is going to accept either from a bigger-than-reticle die. As I said, if you're talking about costs, then yes, obviously MCM specifically is the future. But if you're talking about "oh crap, it's 2050, we need a holodeck computer and our transistor design is completely maxed out at the density limit", then MCM isn't really going to solve anything. And it's going to be a long time before we get there. Talk to anyone who recently graduated in microelectronic engineering and they'll laugh about how people think shrinking is going to stop soon. It's basically the "there's no gravity in space" of the hardware/gaming community.
I was under the impression that foundries were reaching the limits of how much smaller nodes can get, but if that's not happening anytime soon, then no problem: companies can simply continue to produce better parts on better, smaller nodes.