Raytracing without RTX: Nvidia Pascal receives DXR support via driver

DrKeo:

but the RT cores are very important in order to bring RT to the mainstream.
I wonder about that. Between Huang's (artificially) impressive RTX On/Off demo back then and the fact that the first RTX game struggled to run decently on the super expensive 2080 Ti with RTX on, leaving customers with the lasting impression that Turing RTX is kind of a failure, I'm not so sure it was such a success that you could call it important in bringing ray tracing to the mainstream. If anything, Nvidia managed to make it seem like ray tracing is nigh impossible in the mainstream (the 2080 Ti, at its price, is far from mainstream). It's up to others, like the Crytek demo on a far cheaper GPU, to show it can be made to work in the mainstream. Without RTX.
Interesting move. I wonder why some suggest it might be a move to counter the Crytek presentation; it's not like this kind of thing is decided overnight. Fact is, they realized some ray tracing features run on the GTX 10xx series. Keep in mind you can run the raytraced pass at a lower resolution, so lots of GPUs can use ray tracing, but only the RTX cards will be able to perform at the higher resolutions. The Crytek presentation is inspiring as it shows that you can already get incredible results without dedicated hardware; just imagine how far you can go with those RTX cards then.
It amazes me to see how Nvidia will fcuk consumers over time after time like this. I bet that if we had never seen the Crytek demo, where AMD could do DXR with current hardware and no special cores, Nvidia would never have made a driver update like this. Oh well, it's a good thing, so now everybody knows that you don't need special hardware to execute DXR on a DX12-compatible GPU. I never understood the reasoning behind special cores that can only do RT and AI when you could have just built a bigger GPU with a higher total core count and done everything on it.
https://www.nvidia.com/en-us/geforce/news/geforce-gtx-ray-tracing-coming-soon/
https://www.nvidia.com/content/dam/en-zz/Solutions/geforce/news/geforce-rtx-gtx-dxr/geforce-rtx-gtx-dxr-shadow-of-the-tomb-raider-performance-850px.png
Looking at the test images, could it be that the performance uplift of the Turing RTX 2080 (without using its RT cores) over the Pascal GTX 1080 Ti comes from FP16 at a 2:1 rate? (20,137 GFLOPS FP16 on the RTX 2080 vs 11,340 GFLOPS FP32 on the 1080 Ti.) We also already know that RT needs very fast memory with low latency, so let's wait and see how this limited RT effect performs on Pascal vs Vega 56/64 and the Radeon VII with HBM and FP16 2:1. For me, just give me reflections in games like Rainbow Six Siege so I can see Caveira sneaking up on me, and I'm happy 🙂.
Moderator
Yes! GPU baking 😀
So basically, Turing's main unique feature, "ray tracing", is going to be supported by the previous generation as well? Why would anyone get an RTX card now? It literally makes no sense. Does that mean the 1660/1660 Ti will also support DXR? Haha, so awkward.
RooiKreef:

I never understood the reasoning behind special cores that can only do RT and AI when you could have just built a bigger GPU with a higher total core count and done everything on it.
The reason is simple: general-purpose hardware is always going to be slower, while specialized hardware is really fast. That's why RT cores and Tensor cores exist; they are specialized for their specific tasks and do them at incredible speeds. NVIDIA showed some other comparisons, with one monster Pascal GPU (not available to the public) that is basically the shader count of four 1080 Tis glued together, and that barely matches the performance of a 2080 with RT cores. You would never reach that speed with conventional shader-only designs, because the consumer chips would never get that big. Even if they used all the space that the RT and Tensor cores occupy for normal shaders, you would maybe gain 20% more shaders, not four times as many. And thus, we have dedicated hardware.
HardwareCaps:

So basically, Turing's main unique feature, "ray tracing", is going to be supported by the previous generation as well? Why would anyone get an RTX card now? It literally makes no sense.
You would get an RTX card because on previous-gen cards this is going to be low-quality effects and slow. Without the dedicated RT cores, they can barely do anything without seriously dropping FPS (and you thought DXR on RTX cards was slow already? Think again!). So basically, the same reason you always get a new card: they're faster. This announcement really doesn't change anything. People who are interested in ray tracing effects will still want to get an RTX card, because it's the only way to really make use of them. People who are not really interested can get a 16-series, or try to scavenge up an old 10-series, but as a bonus they get a taste of ray tracing, at entry-level quality and low performance. If anything, this makes RTX look better, since you can truly see the performance cost it would have on the previous generation of GPUs. And it also opens the door for more developers to work with it, since the market of people with DXR support suddenly increases vastly.
To me it seems the reason for this decision is just to get more game developers to add DXR ray tracing to their games, even if it's only a few small effects, so that it still results in acceptable frame rates on the 10 and 16 series. Then the RTX cards will naturally see significant performance increases in these games compared to AMD and NV's previous cards.
nevcairiel:

You would get an RTX card because on previous-gen cards this is going to be low-quality effects and slow. Without the dedicated RT cores, they can barely do anything without seriously dropping FPS (and you thought DXR on RTX cards was slow already? Think again!). So basically, the same reason you always get a new card: they're faster. It's like saying a 980 is just as good as a 1080, since it can play the same games. Sure, it can, just slower.
The Turing cards also struggle to run RTX. Of course Pascal will be completely useless with DXR enabled, but the average Pascal owner now hears that "Pascal will support ray tracing", and that's not great in terms of marketing.
Also, since developers know that Pascal is way more popular than Turing, they can simply use DXR very lightly for minor things, focusing on the main consumer base (AMD will probably also support DXR without dedicated acceleration hardware). They might offer better visuals for RTX users, but that completely removes the main feature, the selling point of Turing, which is ray tracing.
To me this is not a hilarious move, but a normal one. Nvidia knew about this, but they were hoping it would happen later on so they could squeeze some more money out of customers. Maybe that's also one of the reasons why they launched the 16xx series. All graphics cards that fully support DirectX 12 support Microsoft's DXR API (the API responsible for real-time ray tracing); we only need to know the impact on performance, but the rest is there. And Nvidia needs to bring support for this tech simply because it will be supported by every game developer, and the new consoles will support it too. The 2xxx series brings some nice things to the table, but until we see a real comparison we can't say whether it is fine or whether it has limitations, and so on. Now the real race for DXR real-time ray tracing starts.
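For anyone wondering what "support" means in practice here: DXR availability is something a game queries per device through the standard D3D12 feature-support call, and a driver update like this one effectively starts reporting a raytracing tier on cards without RT cores. A minimal sketch of that check (standard D3D12/DXR API; device creation and error handling are assumed to exist elsewhere in the application):

```cpp
#include <d3d12.h>
#include <cstdio>

// Query whether an already-created D3D12 device exposes DXR, and at which tier.
// On a GTX 10xx card this only reports a tier once the driver adds the
// compute-shader-based path discussed in the article.
bool SupportsDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;

    if (options5.RaytracingTier == D3D12_RAYTRACING_TIER_NOT_SUPPORTED)
    {
        std::printf("DXR not supported on this device/driver.\n");
        return false;
    }

    std::printf("DXR supported, tier %d.\n",
                static_cast<int>(options5.RaytracingTier));
    return true;
}
```

If the tier comes back as not supported, the game simply hides the options; that is why the driver matters for developers, since it widens the set of machines on which the DXR code path can even be turned on.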
They did it because they had to get RTX rolling and Turing failed to do so. They are forcing ray tracing as hard as they can. It will fail.
HardwareCaps:

but that completely removes the main feature, the selling point of Turing, which is Ray Tracing
No, it won't. With the RTX cards you can play with ray tracing at more or less decent frame rates; with the GTX cards you can't, unless the purpose is to turn the game into a slideshow. The RTX cards would only be irrelevant if the GTX cards ran games with ray tracing at the same performance and with the same visuals, which is not the case. So the selling point is still there; nothing changes.
RzrTrek:

Another disappointing keynote with way too much focus on AI, data analytics and cloud "gaming"...
What do you expect from a conference focused on AI and ML? That's what GTC is... It's like going to a cooking show and then complaining that they talk about cooking!
dannyo969:

Seems like a marketing pitch; Pascal doesn't have RT cores or Tensor cores. I question Nvidia's intentions sometimes... Wouldn't they want people to buy new RTX cards? From a business standpoint, it seems as if this would slow RTX sales more. Don't even get me started on the 1660 Ti and 1660, just why... They're competing with their own cards. Buy a damn RTX 2060 or GTX 1070.
I am wondering if this is Nvidia's response to Crytek's RT reflections demo, where the Vega 56 seemed to do so well. It sort of caught Nvidia with their pants down, so they had to cobble together some response to show Pascal is still relevant (even on some feeble level) versus old Vega on the RT front.
kings:

No, it won't. With the RTX cards you can play with ray tracing at more or less decent frame rates; with the GTX cards you can't, unless the purpose is to turn the game into a slideshow. The RTX cards would only be irrelevant if the GTX cards ran games with ray tracing at the same performance and with the same visuals, which is not the case. So the selling point is still there; nothing changes.
You clearly didn't read what I wrote: "developers know that Pascal is way more popular than Turing, they can simply use DXR very lightly for minor things, focusing on the main consumer base (AMD will probably also support DXR without dedicated acceleration hardware)".
@DrKeo - OK, that is what Nvidia is saying, which is turning out to be different from what is possible and what might be. If they are saying the same workload can be done using the INT32 part of the GPU, then what are the RT cores offering over this, and what extra hurdles do developers need to overcome to make this work without tanking gaming performance? If these are features native to DX12 and a general implementation, why would you spend the extra effort developing for these cores, especially if you are planning to port games to consoles, which are AMD-based hardware?

Similar to what HardwareCaps says, I think this completely undercuts the whole RTX line. Not only is ray tracing possible another way, it can also be implemented in a more developer-friendly way, and across more hardware. In many ways, Nvidia might have done better to push the RTX narrative longer and keep alternatives locked out, a bit like PhysX, before giving up the ghost, rather than doing this 6 months after the release of an expensive new line. If I had bought a 1000 USD graphics card and then saw this announcement, I would be pretty annoyed.

You talk about ray tracing being better through a dedicated hardware implementation, but RT core use doesn't yet seem compatible with a proper, decent-FPS gaming experience, which makes this kind of a moot point, as they are not of real benefit to the end user yet. If this DX12 feature can be applied to those cores and bring about a massive increase in performance, then fair play to Nvidia, they will save themselves, but they have not demonstrated this yet. Also, Nvidia did not offer a comparison of the 1660 Ti to their RTX cards, so there is no way to support or refute this. Comparing it to a 1080 Ti does not really support the claims, as it has little to do with what was announced for their new budget cards, which supposedly lack this feature natively and yet can get it through a driver update.

The paranoid side of me wonders if there will be some careful balancing of raytracing performance on these cards so they don't outperform their other GPUs, but if AMD cards end up performing well at this, there will be some serious confusion as to which cards offer real bang for the buck.
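To put the "what are the RT cores actually offering" question in concrete terms: most of the raw work in DXR is BVH traversal, i.e. enormous numbers of ray-vs-box and ray-vs-triangle tests per frame. On a GTX card those run as ordinary shader arithmetic and compete with everything else being rendered; on Turing, fixed-function units handle them. A toy, illustrative version of the inner test (the textbook slab test, not Nvidia's implementation):

```cpp
#include <algorithm>
#include <utility>

struct Vec3 { float x, y, z; };

// Classic slab test: does a ray (origin + t * direction, with invDir = 1/direction)
// hit the axis-aligned box [boxMin, boxMax] within the interval [tMin, tMax]?
// BVH traversal performs this once per visited node, for every ray in the frame,
// which is the workload RT cores accelerate in hardware.
bool RayHitsAABB(const Vec3& origin, const Vec3& invDir,
                 const Vec3& boxMin, const Vec3& boxMax,
                 float tMin, float tMax)
{
    const float o[3]  = { origin.x, origin.y, origin.z };
    const float d[3]  = { invDir.x, invDir.y, invDir.z };
    const float lo[3] = { boxMin.x, boxMin.y, boxMin.z };
    const float hi[3] = { boxMax.x, boxMax.y, boxMax.z };

    for (int axis = 0; axis < 3; ++axis)
    {
        float t0 = (lo[axis] - o[axis]) * d[axis];
        float t1 = (hi[axis] - o[axis]) * d[axis];
        if (t0 > t1) std::swap(t0, t1);
        tMin = std::max(tMin, t0);
        tMax = std::min(tMax, t1);
        if (tMax < tMin) return false;   // slabs don't overlap: the ray misses
    }
    return true;
}
```

Whether dedicating die area to running that loop in fixed function is worth it, versus simply adding more general shaders, is exactly the argument running through this thread.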
BlueRay:

If they really wanted RTX to take off, they would have priced their new RTX cards way lower to encourage mass adoption; then developers would care to implement more of it. But no, let's charge an arm and a leg and pay off a few devs to implement something, only to justify the purchase for a few rich people. RTX will die or be replaced by something else if they continue like this.
Way lower? When the 2080 costs the same as the Radeon VII, why would they price it lower? Think before you post.
DrKeo:

Anything non-dedicated hardware can do, dedicated hardware can do better. Without NVIDIA's RTX project we wouldn't have RT in CryEngine. NVIDIA is the reason DXR even exists; they approached Microsoft and pushed them to do DXR because they wanted to make the RTX cards. The RTX 2060, a $350 card, performs twice as well as the 1080 Ti in Metro with RTX/DXR on at 1440p. If that isn't bringing RT to the mainstream, what is? Buying a $700 1080 Ti and getting half the performance of a $350 card seems somehow better to you?
Why are you comparing old and new generations anyway? The 16 series has shown nice performance in traditional GPU work as well. Furthermore, you are only comparing Nvidia tech against... Nvidia tech, with an obvious Nvidia bias, as they want to sell their expensive RTX cards (which, by the way, Nvidia itself has confessed haven't been selling that well). You are basically saying a game built to support the specific tech of dedicated hardware doesn't run as well on non-dedicated hardware. Really, now, who would have guessed? I'm not against ray tracing. In fact, before the outrageous prices were announced, I was seriously considering a 2070. Since then, however, the raytracing price-performance of RTX hasn't looked too impressive. Right now I'm hoping ray tracing can take a more reasonable path. I'm not believing the Crytek demo blindly, but if it works half as well as the demo suggested, it could be worth developing. In the end, if it's not dedicated hardware, it won't sit idle and useless in non-raytracing games.
Moderator
HardwareCaps:

You clearly didn't read what I wrote: "developers know that Pascal is way more popular than Turing, they can simply use DXR very lightly for minor things, focusing on the main consumer base (AMD will probably also support DXR without dedicated acceleration hardware)"
You can already use your GPU to make raytraced shadows in game engines. Unity does it through a GPU lightmapper, and you sometimes get a 100x jump in speed / in the number of rays you can shoot out. But even then, you'll have to wait a bit to get a single frame. So no, even "small" things need a lot of calls and work. A single puddle that only reflects the player and nothing else will add a ton more draw calls. It's the difference between having dedicated hardware doing things in parallel versus having to do a pass, then empty the registers, load new instructions and do something else, then go back to what you were doing before. What Nvidia did here is allow older cards to run ray tracing, so people see how hard it is on the previous super-duper cards and say "oooh, OK".
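As a rough back-of-the-envelope illustration of that point (all numbers below are made-up assumptions for the example, not measurements): even one modest half-resolution reflection or shadow pass adds tens of millions of rays every second on top of normal rendering.

```cpp
#include <cstdio>

int main()
{
    // Hypothetical budget for one small effect: a single pass traced at
    // half of 1080p, one ray per pixel, targeting 60 fps.
    const long long width        = 1920 / 2;
    const long long height       = 1080 / 2;
    const long long raysPerPixel = 1;
    const long long fps          = 60;

    const long long raysPerSecond = width * height * raysPerPixel * fps;
    // ~31 million rays per second, each needing many BVH node and triangle
    // tests, all on the same shader cores that also have to render the frame.
    std::printf("%lld rays per second\n", raysPerSecond);
    return 0;
}
```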