Remedy Shows RTX Raytracing performance cost

Fox2232:

If they ever decide to make a separate raytracing accelerator, I may buy it.
Remember the Ageia PhysX add-on card?
A lot of the reasons I keep saying the 20xx line is a rush job from Nvidia are echoed by the game devs. They should've had ES hardware running just for this kind of research months ago, and/or Nvidia should've waited on a process shrink so the higher speeds would result in a lower frame-rate hit. If this finding from Remedy is consistent across devs, then expect to find RT in fewer battle royale games and in more strategy games where frames per second are not as vital. And it's not like AMD isn't involved in RT; they are, they just haven't released anything specifically for it, and they won't need RT cores. MS is the force behind RT, and MS doesn't pick sides (except Xbox, lol) between AMD and Nvidia.
You have to start somewhere when it comes to ray tracing, and I say well done to Nvidia for this. It may not be at the resolution that everyone wanted. AMD will definitely get into this; they have to in order to stay relevant in the market. There is no way they won't, as Microsoft has it enabled in the latest version of Windows 10.
Caesar:

Looks like "film grain" effect. The amount of time it takes to render a FULL FRAME = 9.2 milliseconds.... (Impressive) 1,000 Milliseconds = 1 Second For 60 Frames Per Second, a full frame will cost approx 16.67 Milliseconds. So,........ to make calculations on shadings and lighting effects, it "costs" less time! 😕
Time to render a full frame, as stated in the article, is 42.2ms... the raytracing effects added an ADDITIONAL 9.2ms to the time needed to render a frame.
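For anyone who wants those numbers in frames-per-second terms, here is a quick back-of-the-envelope sketch. It only uses the figures quoted in this thread (42.2 ms per frame plus an extra 9.2 ms for the raytraced effects) and assumes 42.2 ms is the frame time before the RT work is added; Remedy's actual breakdown may differ.

```python
# Back-of-the-envelope FPS math using the figures quoted above.
# Assumption: 42.2 ms is the frame time without the raytraced effects,
# and the RT effects add 9.2 ms on top of it.

BASE_FRAME_MS = 42.2   # full-frame render time quoted from the article
RT_EXTRA_MS = 9.2      # additional cost of the raytraced effects

def fps(frame_time_ms: float) -> float:
    """Convert a frame time in milliseconds to frames per second."""
    return 1000.0 / frame_time_ms

print(f"Without RT: {fps(BASE_FRAME_MS):.1f} fps")                # ~23.7 fps
print(f"With RT:    {fps(BASE_FRAME_MS + RT_EXTRA_MS):.1f} fps")  # ~19.5 fps
print(f"RT adds {RT_EXTRA_MS / BASE_FRAME_MS * 100:.0f}% frame time")  # ~22%
```

Put another way, at a 60 fps target (16.67 ms per frame) an extra 9.2 ms is more than half the entire frame budget, which is why the hit feels so large.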
HeavyHemi:

Max performance for what? Do you also turn down the rest of the settings to low for 'max performance'?
I don't enjoy playing games at lower FPS. For some games ~60 FPS is my minimum; for others it's 90 FPS. So yeah, of course we turn down or disable some graphics settings. That's one of the main points of PC gaming: you can configure graphics settings so the game runs better. If a game that uses RT runs poorly at 1440p or 4K and only runs OK at 1080p, and even there only achieves mediocre frame rates, why wouldn't I turn off RT?
I'm truly confident in the optimization potential of combining an AI-based denoiser with a lower-resolution RT workload (see the Star Wars demo from HH with DLSS running fairly well at high resolution despite extremely expensive post-process effects like convolution bloom, ultra-high-quality depth of field for a cinematic experience, etc.). We're still extremely early with this tech, and no one can deny Nvidia is taking a risk shipping these products. All the implementation demos we've seen so far are rough first passes after a couple of weeks with the cards, barebones drivers, and an early test version of Windows with DXR support (as it was just released last week). Only time will tell, but we can already see the enthusiasm of major studios like DICE/EA (and their R&D SEED division), Remedy, and Epic Games. I also see a lot of potential for indie studios using Unity/UE4 to get access to otherwise extremely expensive features like realtime GI, faster lightmap baking, etc.
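The "lower-resolution RT workload" point is easy to put numbers on: if you shoot a fixed number of rays per pixel, the ray count (and roughly the cost) scales with the pixel count, so tracing at half resolution in each axis cuts the ray budget to about a quarter. A rough sketch, assuming cost is proportional to rays traced and ignoring the denoiser/upscaler overhead that makes the trick viable:

```python
# Rough cost model: rays per frame ~ pixels * rays_per_pixel, so tracing
# the RT effects at a reduced resolution and reconstructing the result
# with a denoiser/upscaler shrinks the ray budget roughly quadratically.
# (The denoiser's own cost is ignored here; that is the trade-off.)

def rays_per_frame(width: int, height: int, rays_per_pixel: int = 1) -> int:
    return width * height * rays_per_pixel

native = rays_per_frame(1920, 1080)  # RT at native 1080p
reduced = rays_per_frame(960, 540)   # RT at half resolution per axis

print(native, reduced, native / reduced)  # 2073600 518400 4.0
```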
What is "Raytraing"?
Maddness:

I must be seeing something different here. I actually thought it looked pretty damn good. Seeing reflections on almost everything rendered on screen, even of things not in the picture, was amazing. I can't understand why there is so much negativity towards ray tracing. Does everyone want to stay in the dark ages? This is the future, people.
This is the future, of course, but a faraway future. Hardware development should be working in tandem with software devs. What happened is that Nvidia surprised everybody, threw the tech in the lap of devs out of nowhere, and is hyping up the imminent release of games using RTX tech. By the way, in order to get this logo on the box of your game, showing just a few scenes or locations using RT/DLSS is enough. Why did DICE have to tone down the feature in order to keep the framerate and to improve realism? For example, think about the surfaces in video games which are unrealistically flat and lack the roughness of real materials. Some objects would even be interpreted as almost perfect mirrors by the card, which, combined with the already proven unbalanced HDR approach by DICE in BF1 (never look at the sun with HDR on), can create Battlefield V: Blinding Company. Rotterdam already looks way too clean, especially for a city ravaged by war, and with RTX on, objects like bricks and wood look too shiny.
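On the "too shiny" point: in a ray tracer, roughness is typically modelled by scattering the reflected ray inside a lobe around the perfect mirror direction, so a material authored with near-zero roughness really does come out as a mirror. Below is a minimal sketch of that idea (the classic "fuzzy reflection" trick from introductory ray-tracing material, not Remedy's or DICE's actual shading code); the function names are made up for illustration.

```python
import random

def reflect(d, n):
    """Mirror direction d about surface normal n (both unit-length 3-vectors)."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

def rough_reflect(d, n, roughness):
    """Perturb the mirror direction by a random offset scaled by roughness.
    roughness = 0.0 gives a perfect mirror; higher values blur the
    reflection, which is what bricks and wood should be doing."""
    mirror = reflect(d, n)
    jitter = [random.uniform(-1.0, 1.0) * roughness for _ in range(3)]
    out = [m + j for m, j in zip(mirror, jitter)]
    length = sum(c * c for c in out) ** 0.5
    return tuple(c / length for c in out)
```

If material roughness was tuned with screen-space reflections in mind, a real ray tracer will happily turn those surfaces into the overly clean mirrors described above.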
IGN--------------------> Too much Ray Tracing.
DrKeo:

Isn't the whole idea of the RT and AI cores that they run in parallel with rasterization?
From what I understand, much of the silicon is shared between raster and RT. So when you do RT, you can't do raster at the same time. And this sounds logical, since you'd have parts of the GPU do absolutely nothing in non-RT games.
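Whether the RT work overlaps with rasterization matters a lot for the totals being thrown around in this thread. A toy frame-time model (the stage timings are made up purely for illustration): with shared silicon the stages queue up and add, while fully independent hardware would only cost you the longest stage.

```python
# Toy model of the serial vs. parallel question.
# The stage timings below are illustrative, not measured numbers.

raster_ms = 30.0   # hypothetical rasterization work per frame
rt_ms = 9.0        # hypothetical raytraced-effects work per frame

serial_frame = raster_ms + rt_ms        # shared units: RT cost is additive -> 39.0 ms
parallel_frame = max(raster_ms, rt_ms)  # independent units: full overlap   -> 30.0 ms

print(serial_frame, parallel_frame)
```

The "additional 9.2 ms" figure quoted earlier reads much more like the serial case than the parallel one.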
Clawedge:

remember the physx ageia add on card?
Yes, they worked and were great. They were just not popular enough. nVidia took that and trashed it. With Ageia, those games did exactly what they were meant to do: play well and look awesome for the time. That's definitely not something one can say about the results of nVidia's tampering.
sykozis:

Time to render a full frame, as stated in the article, is 42.2ms... the raytracing effects added an ADDITIONAL 9.2ms to the time needed to render a frame.
So, the RTX fails?
RealNC:

From what I understand, much of the silicon is shared between raster and RT. So when you do RT, you can't do raster at the same time. And this sounds logical, since you'd have parts of the GPU do absolutely nothing in non-RT games.
I thought it was independent, just like compute, so in theory it could do something like 14 TFLOPS raster + 14 TFLOPS compute + whatever the tensor core performance is, 100 TFLOPS? From what I see now it's just too slow; not enough tensor cores to keep up with RTX.
-Tj-:

I thought it was independent, just like compute. From what I see now it's just too slow; not enough tensor cores to keep up with RTX.
At first they made it look independent, but I saw a video with an engineer from NV who clarified what is able to run in parallel and what is not. Spoiler: "YT"
[youtube=YNnDRtZ_ODM]
31:50 - parallelism part
This may be the future but it just isn't ready for prime time yet. We will enjoy this feature in a few generations. I see no reason for anyone to buy a 20xx series simply because of Ray Tracing when it's not even mature enough to be used yet. That is a massive hit on performance.
Nice to see. I guess in due time we will see what AMD has to offer in comparison. Would be nice to see team red on top of the performance-per-watt scheme of things once again. Make the "others" tighten up that shot group and make things more competitive price/performance-wise! PLEASE, AMD, bring some big guns sooner rather than later and shut down this BS excuse of pricing that has come about today!
-Tj-:

Or the GPU is still too slow. Nvidia could have added more tensor cores, but it didn't. Gotta save some for next gen to make a bigger wow factor. OK, thermals and power are a limitation, can't argue there. Well, there is also the option of more cores at lower clocks. But why should they do that if it would make, e.g., the flagship Titan V look much, much worse? So far it's still on top when it comes to using its tensor cores. Just saw this at AnandTech: half precision, which uses 16-bit floats, or even faster 8-bit math https://www.anandtech.com/show/13431/nvidia-geforce-rtx-2070-founders-edition-review/13 Vega has its punch too in other compute tests, quite competitive. Wonder what it could do if it were tuned extra for RTX rendering with those units 😀
Lol, what are you talking about? None of the RTX games so far even utilize the tensor cores, and even if they did, the performance impact from RT isn't in the denoising process. So adding more tensor cores, even if they were utilized, wouldn't improve performance.
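That argument is basically Amdahl's law: if denoising is only a small slice of the frame, making it faster with more tensor cores barely moves the total. A sketch with hypothetical numbers (the denoising share below is an assumption, not a figure from Remedy or NVIDIA; the frame time is just the 42.2 ms + 9.2 ms discussed earlier):

```python
# Amdahl's-law style estimate. The denoising share is an assumption for
# illustration, not a measured number from Remedy or NVIDIA.

frame_ms = 42.2 + 9.2    # frame time with RT effects, from the figures above
denoise_ms = 2.0         # hypothetical portion of that spent denoising on tensor cores

speedup = 2.0            # suppose extra tensor cores made denoising twice as fast
new_frame_ms = frame_ms - denoise_ms + denoise_ms / speedup

print(f"{frame_ms:.1f} ms -> {new_frame_ms:.1f} ms "
      f"({(frame_ms / new_frame_ms - 1) * 100:.1f}% faster overall)")
# 51.4 ms -> 50.4 ms (2.0% faster overall)
```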
Fox2232:

Yes, they worked and were great. They were just not popular enough. nVidia took that and trashed it. With Ageia, those games did exactly what they were meant to do: play well and look awesome for the time. That's definitely not something one can say about the results of nVidia's tampering.
https://images.anandtech.com/graphs/asus physx_05040650504/11896.png
https://images.anandtech.com/graphs/physxsbigbreakunrealtournament_120807062915/16150.png
https://images.anandtech.com/graphs/bfg physx_05170620555/11966.png
They honestly did neither. Even in Ageia's absurd maps/demos, physics was enabled on everything just for the sake of it and the performance was bad:
https://images.anandtech.com/graphs/physxsbigbreakunrealtournament_120807062915/16148.png
Even now, devs and third parties are more than capable of building physics middleware for GPUs using DirectCompute and the like, and no one does it.
HeavyHemi:

Max performance for what? Do you also turn down the rest of the settings to low for 'max performance'?
Plenty, like Battlefield V, and certainly no competitive battle royale player will use it. RT will be available in BFV, but I bet almost NO ONE uses RT in this title. You cannot afford to run around at 30-60 fps. YOU WILL DIE. Now, if I'm playing Tomb Raider single player... oh wait, I have a 1440p monitor and I probably wouldn't use it for those titles either. Competitive racing simulators? Nope, not a chance. FPS and input lag are too important. Maybe Fallout or something like that, but for me, I'd turn on RT in maybe 1 out of 10 titles I play. As a 1080 Ti owner, only the 2080 Ti makes any performance sense to me, and really only for VR, as I have a G-Sync monitor for non-VR titles. But $1200-$1400 for a 2080 Ti? Rubbish. Don't get me wrong, this new tech is amazing. However, it's not ready for the gaming masses, and the prices are insane. Tons of 2080s in stock at Newegg. 2070s will sell even worse. The Tis are sold out because the Tis are the only ones that make sense, but they should be less than $1000.
The sad truth is that Turing is too expensive for the conventional performance upgrade it provides and too slow for its marquee feature (ray-tracing). By the time developers get deep enough into ray-tracing, the 7nm GPUs will be out with better HW implementation and (hopefully) more reasonable pricing.
Unfortunately they did not optimize the demo... it will be nice to see what the final, optimized product looks like next year.