3DMark DirectX Raytracing Feature Test Benchmarks

Denial:

But you don't have to render an entire scene in RT in order to use RT, as evidenced by a number of games that use RT and get over 60 fps on hardware that isn't a 3090. So what you're actually trying to say is that full-scene RT is not ready, but that kind of invalidates your earlier point about the 2000 series, because they were never really intended for full-scene RT.
It's not ready in general, period. If it's not full RT, it's not RT, and the performance hit is too huge, and that's on a card that costs how much? We're not going to see anything in games in true form anytime soon. We all remember how heavy the "RT" was in Quake II RTX, a game that would probably run at 500+ fps on current-day cards without RT (probably even at 4K), and probably 1000+ fps at 1080p without RT, yet with RT on it barely got past 60 fps at 1080p, and even that wasn't true RT. RT is still a marketing name at this point; even the consoles "used" RT as a means to sell.
No doubt it's being used as a marketing tool - as pretty much any graphics technology would be. But it seems pretty clear that going forward we are moving towards full ray tracing... whether we get there in 10 years or 20... that's where we're going.
Stormyandcold:

However, performance looks good for Nvidia RTX3090. I'm excited for the future when full RT games will be viable.
We are getting there. Without the existing RT hardware we would not have two full RT games (Minecraft RTX and Quake II RTX).
I'd been hearing about RT for years BEFORE the 2xxx series "introduced" it. I've been hearing about OLED for more than two decades, and it's still not widely available due to price. I like the IDEA of RT, but it's anything but ready 😱
It's funny how Nvidia claimed 2x more ray tracing performance, second generation, etc... YET there is ZERO PERCENT improvement.
tsunami231:

If it's not full RT, it's not RT
I fundamentally disagree with this, and thus the rest of your post isn't really relevant unless you can convince me otherwise. There are tons of examples of feature sets used in limited cases to improve graphics. For example, SVOGI, used for GI in various games, can also be used to do reflections, shadows, etc., but mostly isn't. No one sits there and says "well, the game isn't fully using SVOGI, so it's not really SVOGI". You cast a single ray, you're using raytracing. You cast a few thousand rays and now you can build a reflection model that's more accurate than any previous non-RT implementation. The question becomes: do you have the hardware to do that in a performant way? The answer is now yes.
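To make the "cast a single ray" point concrete, here's a minimal sketch of one per-pixel reflection ray against an analytic sphere; the scene, vectors, and numbers are all made up for illustration, not taken from any real engine:

```python
import math

def reflect(d, n):
    """Mirror direction d about surface normal n (both unit vectors)."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

def hit_sphere(origin, direction, center, radius):
    """Return the smallest positive ray parameter t, or None on a miss."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # a == 1 for a unit direction
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# One reflection ray from a shaded point on a floor (normal pointing up):
view_dir = (0.0, -0.7071, 0.7071)   # camera looking down at 45 degrees
normal = (0.0, 1.0, 0.0)
refl = reflect(view_dir, normal)    # geometrically exact mirror direction
t = hit_sphere((0.0, 0.0, 0.0), refl, (0.0, 2.0, 2.0), 1.0)
print("reflection ray hit sphere at t =", t)
```

The reflected direction is exact for whatever geometry it bounces off, including objects off-screen, which is the guarantee screen-space reflections can't make.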
kapu:

It's funny how Nvidia claimed 2x more ray tracing performance, second generation, etc... YET there is ZERO PERCENT improvement.
To be clear, they didn't claim that. They claimed the RT cores were 1.7x faster, but RT workloads aren't run entirely on the RT cores; they touch various other parts of the chip, which aren't necessarily 1.7x faster. Further, they showed this slide: https://hexus.net/media/uploaded/2020/9/1c6c4b25-1d6c-480f-beb8-8d41fc3ce581.jpg which clearly shows a number of instances where it's not 2x.
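The gap between per-core and whole-frame gains is just Amdahl's law. A back-of-the-envelope sketch, where the 40% RT-bound fraction is an assumed number for illustration, not anything Nvidia published:

```python
def overall_speedup(rt_fraction, rt_core_speedup):
    """Amdahl's law: only the RT-core-bound fraction of the frame accelerates."""
    return 1.0 / ((1.0 - rt_fraction) + rt_fraction / rt_core_speedup)

# Hypothetical frame where 40% of the time is RT-core bound:
print(f"{overall_speedup(0.4, 1.7):.2f}x")  # ~1.20x, far below the per-core 1.7x
```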
Aren't Nvidia Hopper and AMD RDNA3 rumored to be the first generation of GPUs to use an MCM design, kind of similar to current AMD CPUs? It makes sense if they do, since they could make more, smaller chips faster and cheaper with fewer failed ones, like AMD is currently doing with Ryzen, Threadripper and Epyc. If so, it's going to be interesting to see how performance and the RT stuff scale, hopefully without the penalties and the need for multi-GPU profiles that multi-GPU systems currently have. Maybe something like:

RTX 4060 Ti / RX 7700 (2 cores)
RTX 4070 / RX 7800 (4 cores)
RTX 4080 / RX 7800 XT (6 cores)
RTX 4090 / RX 7900 XT (8 cores)

with something similar to AMD's Infinity Fabric connecting the cores. 🙂
Lily GFX:

Aren't Nvidia Hopper and AMD RDNA3 rumored to be the first generation of GPUs to use an MCM design, kind of similar to current AMD CPUs?
I could be wrong, but I believe Intel will likely have the first GPUs designed using an MCM architecture.
pharma:

I could be wrong, but I believe Intel will likely have the first GPUs designed using an MCM architecture.
I am not sure either, but it would make sense for Intel to go for an MCM design, more so if AMD and Nvidia are doing the same for their next gen. 🙂
Got what I expected hahaha, 34.14 FPS on my RTX 2080 Ti Strix :D. I'll probably be able to do 35 FPS, but that's it. Denoising is nice, but you literally lose lots of detail like normal maps or bump maps, etc., so I prefer not to use it in pro work. Hope this helps. Thanks, Jura
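The detail loss is easy to demonstrate: any blur-style denoiser attenuates fine shading (bump/normal-map frequencies) right along with the noise. A toy 1D sketch with a made-up signal, standing in for a naive denoiser rather than any production one:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 512, endpoint=False)
detail = 0.2 * np.sin(2 * np.pi * 60 * x)              # fine "bump map" shading
noise = np.random.default_rng(0).normal(0.0, 0.2, x.size)
noisy = 0.5 + detail + noise

# Simple Gaussian blur as a stand-in for a naive denoiser
kernel = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
kernel /= kernel.sum()
denoised = np.convolve(noisy, kernel, mode="same")

def amp(sig):
    """Amplitude of the 60-cycle detail band via the FFT."""
    return 2.0 * np.abs(np.fft.rfft(sig - sig.mean()))[60] / sig.size

print(f"detail amplitude: clean {amp(0.5 + detail):.3f}, "
      f"denoised {amp(denoised):.3f}")  # the fine detail is mostly gone
```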
:) The processor... yes, I cannot wait, only a few days to go.
Like others said, this test is the blandest, most unimaginative thing I've ever seen 3DMark release. UL is probably to blame.
Those 3090 scores are damn impressive; in anything RT-related, the 30 series shines.
Stormyandcold:

Well, it means we'll live to see it happen! I can't wait.
The problem is transistor count. Currently it is beyond what any fab can make at a reasonable price for enthusiast consumers. We'll need maybe 5nm, maybe 3nm. But we will definitely need chiplets, and I have a feeling that chiplets will show something like 99% performance scaling on this kind of workload.

@tsunami231: The benchmark does not use a static image, it uses a static scene. That's because people need something to look at. In the transition from one camera view to another, you can see that even 20 rays per pixel is far from enough. And when I imagine how transparent, translucent and dust-like objects would artifact from too few rays/bounces... I think that before we get full raytracing, we'll get temporally-accumulated raytracing: each frame, the GPU calculates as many rays as it can within a limited frametime, and the result gets combined with the result from the last frame. That result would be baked into special surface textures on each object, to be used again next frame (maybe even a sequence of temporal surface textures, so there can be weighting based on the age of the data). And that will not only require extra VRAM, it will require extra memory bandwidth.

@Denial: You know how I look at it. When the RT content in a game does not deliver raytracing-like visuals, I do not consider it a reasonable improvement. I want visuals closer to photorealism, not accurate reflections of an ugly rasterized scene, as such a reflection would be nothing more than an accurate reflection of ugly objects. No matter how many ray-stickers people place on a product, if I see nothing better than traditional rasterized visuals, I will not be satisfied.
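A rough sketch of the accumulation scheme described above, with purely hypothetical names and parameters: each frame contributes the few samples that fit in the frame budget, and the blend weight depends on the age of each pixel's history:

```python
import numpy as np

class TemporalAccumulator:
    """Per-pixel blend of this frame's few ray samples with accumulated
    history, weighted by how much valid history each pixel has (data age)."""

    def __init__(self, shape, max_age=32):
        self.history = np.zeros(shape)
        self.age = np.zeros(shape)      # frames of valid history per pixel
        self.max_age = max_age

    def accumulate(self, frame_samples, invalidated=None):
        # Pixels whose content changed (camera cut, moved object) lose history.
        if invalidated is not None:
            self.age[invalidated] = 0.0
        # Running mean while history is young, capped exponential average after.
        alpha = np.maximum(1.0 / (self.age + 1.0), 1.0 / self.max_age)
        self.history = (1.0 - alpha) * self.history + alpha * frame_samples
        self.age = np.minimum(self.age + 1.0, self.max_age - 1.0)
        return self.history

# Usage: noisy 1-sample-per-pixel estimates of a true value 0.7 converge over frames.
rng = np.random.default_rng(1)
acc = TemporalAccumulator((4, 4))
for _ in range(64):
    result = acc.accumulate(0.7 + rng.normal(0.0, 0.3, (4, 4)))
print(result.round(2))   # each pixel ends close to 0.7
```

As the post notes, the history buffer (and any per-object surface textures it gets baked into) is exactly the extra VRAM and bandwidth cost.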
[youtube=2gdzbI19SwY]
Btw, I can't understand how these benchmark systems are standardized... o_O
This is quite nice.
Stormyandcold:

I've just seen the actual test on YouTube and I'm not impressed tbh. Nothing is moving except the camera. However, performance looks good for Nvidia RTX3090. I'm excited for the future when full RT games will be viable.
I think we need someone to explain what's going on in the benchmark for people to actually see it as impressive, because what's happening really is impressive. We have actual focal-depth changes based on physicality, not someone plopping a blur filter down on the background.
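For the curious, ray traced depth of field usually comes from a thin-lens camera model: each primary ray starts at a random point on the aperture and is aimed through the focal plane, so the blur falls out of the scene geometry rather than a post filter. A hedged sketch with made-up parameters:

```python
import math, random

def thin_lens_ray(pixel_dir, aperture_radius, focus_dist, rng=random):
    """Generate one primary ray for a thin-lens camera at the origin.
    pixel_dir: unit direction through the pixel for an ideal pinhole camera.
    Returns (origin, direction); blur emerges for hits off the focal plane."""
    # Point on the focal plane that a pinhole camera would see through this pixel
    focus_point = tuple(focus_dist * d for d in pixel_dir)
    # Jitter the ray origin uniformly across the lens aperture (disk sample)
    r = aperture_radius * math.sqrt(rng.random())
    phi = 2.0 * math.pi * rng.random()
    origin = (r * math.cos(phi), r * math.sin(phi), 0.0)
    # Re-aim the jittered ray so it still passes through the focal point
    d = tuple(f - o for f, o in zip(focus_point, origin))
    length = math.sqrt(sum(c * c for c in d))
    return origin, tuple(c / length for c in d)

# Several rays through one pixel: they all converge at focus_dist, so geometry
# at that distance stays sharp while nearer/farther geometry spreads into blur.
for _ in range(3):
    print(thin_lens_ray((0.1, 0.0, 0.995), aperture_radius=0.05, focus_dist=5.0))
```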
CPU: 5900XT 5GHz w/4000MHz DDR4 CL14 :P (We are ready for Next-Gen)
I want to see that 6900XT in that benchmark SO BADLY!