AMD confirms that Resident Evil Village will have Ray Tracing support on PC

Neo Cyrus:

It's on the consoles, just very unimpressively done... and at 30 fps at best. It might be 2 generations before RT is really worth thinking much about, but as it stands the RT reflections (when implemented well) are transformative enough that I'd take a 3080 over a 6800 XT given the choice, despite the joke 10 GB of inevitably restrictive VRAM.
I took a 6800 over a 3070 because of the far superior raster performance at 1440p and unlimited VRAM. This round, raster is the game. In 2-3 years we will see. Right now RT is close to useless; AMD made the right choice to skip it this generation, in my book.
Agonist:

Funny. AMD is doing things and people still hate on them. AMD will never have a level playing field in GPU land with nvidiots running amok.
True, but it's also true that AMD has always been behind in the software ecosystem, even when the times were not lean.
MonstroMart:

Will be interesting to see the advantage nVidia retains in RT when a game is optimized for AMD hardware.
The simple answer is none. The tensor cores are used for denoising, not just DLSS; current AMD cards just don't have the hardware. Even if they did, their actual ray tracing performance is worse. What's impressive, and flies over the heads of many people, is that AMD essentially seems to have hacked this into a traditional design by reusing texture units. I find it impressive that they get even the performance they get. For their next gen they might as well improve small parts of the traditional cores, and if they commit to drivers and extra AI/ray tracing hardware, they could really catch up. As for the game itself, say what you want about DLSS, but it could make use of it, on top of FidelityFX (like Cyberpunk does).
PrMinisterGR:

The tensor cores are used for denoising, not just DLSS; current AMD cards just don't have the hardware. Even if they did, their actual ray tracing performance is worse.
Tensor cores have not been used for denoising in anything so far.
ViperAnaf:

The current AMD GPUs are showing less RT HW-accelerated capability than the first-gen RTX cards... so how the hell are they going to support this? I predict it will be a very limited, "barely notice it's on" kind of setting...
Well, on the ray tracing on/off video I did have some "wait, what actually changed?" moments, so yeah... might be on to something there.
PrMinisterGR:

Not so sure this is the case since Ampere, really. But nothing certain is out yet. There is an interesting thread on ResetEra: Nvidia's Marbles RTX Hinting at Tensor Core Denoising? | ResetEra
I'm 95% sure they denoise in FP16, which runs on Tensors for both Turing/Ampere - so they do get utilized technically, but presumably if Nvidia didn't have Tensor cores they'd use mixed-math FP32 units, similar to GP100/AMD's Rapid Packed Math, so they aren't really required for what Nvidia is doing with the denoise. I think it's more a question of whether it's AI denoising or not, and not so much "tensor" denoising. Seems like the answer is still no for RTX in games. OptiX uses it, but it has far more time to complete the AI denoising process than a real-time game engine does.
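As a purely illustrative sketch of that point (this is not NVIDIA's denoiser, just a toy box filter in NumPy): a filter-style denoise is ordinary arithmetic, so the same code runs in FP16 or FP32. Dedicated tensor hardware changes how fast that arithmetic goes, not whether it's possible.

```python
# A toy "denoiser": average each pixel with its neighbours. A real game
# denoiser is far smarter (spatiotemporal, guided by normals/albedo), but
# the arithmetic is still ordinary adds and multiplies.
import numpy as np

def box_denoise(image, radius=1):
    """Box-filter the image in whatever float precision it already uses."""
    padded = np.pad(image, radius, mode="edge")
    out = np.zeros_like(image)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += padded[radius + dy : radius + dy + image.shape[0],
                          radius + dx : radius + dx + image.shape[1]]
    return out / out.dtype.type((2 * radius + 1) ** 2)

# Fake 1-sample-per-pixel render: a smooth gradient plus shot noise.
rng = np.random.default_rng(0)
clean = np.linspace(0.0, 1.0, 256 * 256, dtype=np.float32).reshape(256, 256)
noisy = clean + rng.normal(0.0, 0.1, clean.shape).astype(np.float32)

# Same filter, half precision vs single precision.
den_fp16 = box_denoise(noisy.astype(np.float16))
den_fp32 = box_denoise(noisy.astype(np.float32))
print("max |fp32 - fp16| =", float(np.abs(den_fp32 - den_fp16.astype(np.float32)).max()))
```

Running it shows the FP16 and FP32 results differ only by rounding error, which is why the precision question is separate from the "does it need tensor cores" question.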
Fox2232:

If it was in games, nVidia's marketing would be using it as a selling point already.
Bingo!
Denial:

I'm 95% sure they denoise in FP16, which runs on Tensors for both Turing/Ampere
Unless something changed, the denoisers used in custom engines so far have all been in-house, running on the general-purpose units rather than Tensor or FP16. FP16 is a function of Tensor, but it's not by itself useful for denoising in real time. I think the biggest use of Tensor to date has been DLSS.
Denial:

I'm 95% sure they denoise in FP16, which runs on Tensors for both Turing/Ampere - so they do get utilized technically, but presumably if Nvidia didn't have Tensor cores they'd use mixed-math FP32 units, similar to GP100/AMD's Rapid Packed Math, so they aren't really required for what Nvidia is doing with the denoise. I think it's more a question of whether it's AI denoising or not, and not so much "tensor" denoising. Seems like the answer is still no for RTX in games. OptiX uses it, but it has far more time to complete the AI denoising process than a real-time game engine does.
@Astyanax I thought that the above was indeed the case. I know you both say it doesn't matter, and that general HW can do this, but I can't help but wonder if having the hardware do this leaves even more of the GPU available for other tasks.
PrMinisterGR:

@Astyanax I thought that the above was indeed the case. I know you both say it doesn't matter, and that general HW can do this, but I can't help but wonder if having the hardware do this leaves even more of the GPU available for other tasks.
Theoretically it would, for sure, but there's a lot going on in the Ampere architecture, and to be honest raster performance is hampered by TMU and ROP totals not being much above Turing. Actually, I think the 3080 has the same TMU total as the 2080 Ti, right? Turing was an investor refresh to shut them up; Ampere is an architectural stepping stone to what I assume will be next-level performance for traditional and RT-enabled titles. If they get to release on a smaller node and push up cache and non-shader units, the next architecture might actually perform well across the board.
Just doing a quick check, the only reference to Tensor core usage is in Nvidia's DLSS plugin; I can't see any mention of it on the UE forums.
Neo Cyrus:

More than just RT shadows? That's a first; shadows are the only thing that's ever been allowed in AMD titles before, because something that actually makes a difference, like reflections, causes a 6800/6900 XT to implode. Either RT capability was an afterthought on the current Navi implementation or AMD are just incompetent, one or the other. More than just shadows for once... I'll believe it when I see it.
RT was an afterthought; they were focused more on matching Nvidia in rasterization performance again, something they haven't done since the 290X days. People seem to forget this, and the same goes for their CPU side. They wanted to be more competitive with Ryzen, as they hadn't been competitive since like 2006! Beating Intel on all fronts with first-gen Ryzen would have been a monumental task, so they created something that beat them in other areas like core counts, prices, socket support, etc. Then, as the sales and money flew in, they knuckled down on IPC increases, better core layouts, smaller nodes, etc., and finally caught up and surpassed Intel in 3 years! After being basically dormant, or classed as the budget brand, for nearly 12 years!

Now, with their GPU division, comparing the 6000 series to the 5000 series they have doubled rasterization performance (the 6900 XT is basically 2x the 5700 XT), and that's within 1 year! That kind of increase within 1 year is insane; normally we get +30-40% after 2-3 years, not +100%. Couple that with more VRAM, better energy efficiency, and a more innovative memory setup with IF, and of course RT was going to be an afterthought. They didn't want to sacrifice even more die space for RT cores/accelerators and risk crippling performance in other areas.

Give it time, and if AMD stay on the same track as they have been with Ryzen, then there is no reason why Radeon won't match or beat Nvidia in every department. And who knows, Intel with their Xe cards could also bring in even more competition and the war can truly begin! 😀
PrMinisterGR:

True, but it's also true that AMD has always been behind in the software ecosystem, even when the times were not lean. The simple answer is none. The tensor cores are used for denoising, not just DLSS; current AMD cards just don't have the hardware. Even if they did, their actual ray tracing performance is worse. What's impressive, and flies over the heads of many people, is that AMD essentially seems to have hacked this into a traditional design by reusing texture units. I find it impressive that they get even the performance they get. For their next gen they might as well improve small parts of the traditional cores, and if they commit to drivers and extra AI/ray tracing hardware, they could really catch up. As for the game itself, say what you want about DLSS, but it could make use of it, on top of FidelityFX (like Cyberpunk does).
At this point in time, anyone saying DLSS is not great is just being dishonest. DLSS 1 was not great; DLSS 2 is, though. The main problem with DLSS is that it needs to be supported by the game.
kapu:

I took a 6800 over a 3070 because of the far superior raster performance at 1440p and unlimited VRAM.
Very much limited on a 256-bit bus.
kapu:

This round, raster is the game. In 2-3 years we will see. Right now RT is close to useless
It's useless without image reconstruction. I played with RT on using a 2070 Super and it was a fluid framerate in Control, above 50 fps, all settings max, 1440p, DLSS 2.0 (rough numbers on what reconstruction saves are sketched below).
kapu:

AMD made the right choice to skip it this generation, in my book.
Thing is, they didn't. They still have RT hardware acceleration that you're paying for, but it's pretty useless.
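To put rough numbers on the reconstruction point above: the rays get traced at the internal resolution, then the image is upscaled to the output resolution. The per-axis scale factors below are the commonly reported DLSS 2.0 values, so treat this as an approximation rather than NVIDIA's spec.

```python
# Why image reconstruction makes RT playable: rays are traced at the internal
# resolution, not the output resolution. Per-axis scale factors below are the
# commonly reported DLSS 2.0 ratios (approximate, for illustration).
MODES = {
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 1 / 2,
    "Ultra Performance": 1 / 3,
}

def internal_resolution(out_w, out_h, scale):
    return round(out_w * scale), round(out_h * scale)

out_w, out_h = 2560, 1440          # 1440p output, as in the Control example
native_pixels = out_w * out_h
for mode, scale in MODES.items():
    w, h = internal_resolution(out_w, out_h, scale)
    print(f"{mode:>17}: {w}x{h}  (~{w * h / native_pixels:.0%} of the native pixel count)")
```

At 1440p Quality mode that works out to roughly 1707x960, i.e. less than half the pixels to shade and trace, which is the difference between a slideshow and the 50+ fps figure quoted above.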
cucaulay malkin:

Very much limited on a 256-bit bus.
Much less limited than 8 gigs on a 3070.
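For anyone who wants the back-of-the-envelope behind that exchange: peak external bandwidth is just bus width times per-pin data rate. The figures below are the approximate launch specs, and they deliberately ignore the RX 6800's 128 MB Infinity Cache, which is AMD's answer to the narrower external setup, so take them as illustrative only.

```python
# Peak memory bandwidth (GB/s) = (bus width in bits / 8) * per-pin data rate (Gbps).
# Approximate launch specs, for illustration; the RX 6800 additionally has a
# 128 MB Infinity Cache that these raw numbers don't capture.
CARDS = {
    "RX 6800  (16 GB GDDR6)":  (256, 16.0),
    "RTX 3070 (8 GB GDDR6)":   (256, 14.0),
    "RTX 3080 (10 GB GDDR6X)": (320, 19.0),
}

def peak_bandwidth_gb_s(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

for name, (bus_bits, rate) in CARDS.items():
    print(f"{name:<24}  {bus_bits}-bit @ {rate} Gbps  ->  {peak_bandwidth_gb_s(bus_bits, rate):.0f} GB/s")
```

On raw numbers the 256-bit RX 6800 actually has more peak bandwidth than the 256-bit RTX 3070 thanks to faster memory; capacity (16 GB vs 8 GB) is a separate constraint from bus width.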
I'll give a shit about ray tracing when I can enable it and it doesn't shit-tank my FPS (from both red and green). Years ago I found the wonders of 144 Hz+ gaming and can never go back. On my 3080, with RT enabled DLSS is a must, and it's still low frames and upscaled image quality, so for me personally it's still useless. I have to really decrease the quality of a game to get anywhere, and at that point I'm sacrificing so much just to get some shadows and light reflections. On my 6900 XT I don't even bother with any type of RT; with the crazy rasterization performance and true non-upscaled image quality, I can easily hit the 144 Hz+ cap in most games and they look fantastic, so much crisper than on my 3080. I feel this card is the true winner, and that Nvidia should have released a non-RT version of their card as well as the RT version, for those who want to play their games at low/medium quality and 60 fps. TL;DR: RT is still shit.
Daytona675:

I'll give a crap about ray tracing when I can enable it and it doesn't crap-tank my FPS (from both red and green). Years ago I found the wonders of 144 Hz+ gaming and can never go back. On my 3080, with RT enabled DLSS is a must, and it's still low frames and upscaled image quality, so for me personally it's still useless. I have to really decrease the quality of a game to get anywhere, and at that point I'm sacrificing so much just to get some shadows and light reflections. On my 6900 XT I don't even bother with any type of RT; with the crazy rasterization performance and true non-upscaled image quality, I can easily hit the 144 Hz+ cap in most games and they look fantastic, so much crisper than on my 3080. I feel this card is the true winner, and that Nvidia should have released a non-RT version of their card as well as the RT version, for those who want to play their games at low/medium quality and 60 fps. TL;DR: RT is still crap.
Meh, it's really fine imho. I think it really depends on developers having proper time to do this in an engine/game from scratch, instead of it being just an add-on. If you have an eye for subtlety, it's already pretty great.
PrMinisterGR:

Meh, it's really fine imho. I think it really depends on developers having proper time to do this in an engine/game from scratch, instead of it being just an add-on. If you have an eye for subtlety, it's already pretty great.
Like I said, if you're happy with low FPS and somewhere around 60 Hz, then more power to you. Developers aren't putting the money in, nor are they building games from the ground up around it, because less than 1% of gamers will take advantage of ray tracing. That's a super bad use of resources. Why do you think Nvidia sponsors every RT game? Because nobody wants to develop around it. You want us to put some bastardization of RT in? No problem. Pay me. Again, like I said, it's still crap.
Daytona675:

Like I said, if you're happy with low FPS and somewhere around 60 Hz, then more power to you. Developers aren't putting the money in, nor are they building games from the ground up around it, because less than 1% of gamers will take advantage of ray tracing. That's a super bad use of resources. Why do you think Nvidia sponsors every RT game? Because nobody wants to develop around it. You want us to put some bastardization of RT in? No problem. Pay me. Again, like I said, it's still crap.
Already almost five million consoles out there have RT. This is happening either way.
Eventually we won't even bother touching a ray tracing toggle, just like T&L and tessellation (thank you TJ 😛), but until then it will be a trade-off. We have 5+ years before that changes, I bet.