Video: Crytek CryEngine shows raytracing technology demo (runs on Radeon RX Vega 56)
D3M1G0D
That's pretty impressive. Can't wait to see what they do with it.
Man, seeing that video makes me want to play Crysis 3 again.
alanm
Looks even better than Nvidia's RTX in Metro and BFV, if you ask me.
mohiuddin
Looks like a city from Cyberpunk 2077?
If only we could get our hands on this demo 🙁
Caesar
Wish it was downloadable.
pharma
There should be more information coming out next week at GDC 2019. Shows promise, but it will be interesting once more details are known.
https://abload.de/img/edge2arkm3.png
https://abload.de/img/edgescrjrv.png
XenthorX
fascinating
Kaarme
Jensen 'Leather Jacket' Huang should have used something like this in his RTX On/Off demos, only with a visible fps counter. He would have seemed like a professional that way.
siriq
The demo is running at 4K 30 fps. Not bad.
Maybe an RX 580 at 1080p? Hmm? Good enough?
Turbogear
It would be nice if Crytek releases this for us to test on Radeon VII and Vega cards.
If this really works well on AMD cards compared to RTX, then we will have the G-Sync/FreeSync sort of discussion all over again. 🙄
TLD LARS
It would be interesting to see if Nvidia is able to funnel this raytracing workload into the simpler dedicated RT cores, because this could be a different method than the one Nvidia uses with RTX cards, given that it is not locked to RTX hardware.
GREGIX
Nice. Now I'm aching even more to buy a V7, just to not support ngreedia ;)
GamerNerves
Basically, RT cores can help raytracing performance while the main unit accelerates everything else, but if you just have enough memory bandwidth, I assume you can do the same with less hassle. I'm pretty sure that AMD's approach to raytracing, which we will see in practice in the near future, is easier to work with, while RT cores need specific tuning. This also means that Radeon cards cannot utilize that function at all.
I'm sure Nvidia helps developers with the RT cores, but I would really lean toward AMD's solution. It's true that wide memory bandwidth requires power, but it's just simpler all the way and has other benefits as well, while Nvidia currently utilizes advanced techniques to save power. Indeed, RT cores and other Nvidia-specific features are there for one reason: to save power.
The only thing I don't like about the presentation of Tensor cores is that they were made to sound like something absolutely required for raytracing, when this is far from the truth; it is just a different approach that requires more work from the developers. In the end, once again, everything comes down to which approach developers are eager to implement. Nvidia often gives good support to its partners, which can lead to more games utilizing Tensor cores, but who really knows.
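The point above, that raytracing itself is ordinary arithmetic which any compute unit can run and which dedicated RT hardware merely accelerates, can be illustrated with a minimal sketch. This is a plain-Python version of the Möller–Trumbore ray-triangle intersection test, the core operation RT cores accelerate; all names and the example triangle are illustrative, not from any engine.

```python
# Minimal sketch: ray-triangle intersection (Moller-Trumbore). This is the
# core operation RT cores accelerate in hardware; on GPUs without RT cores
# the same math runs as ordinary compute-shader arithmetic.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def intersect(origin, direction, v0, v1, v2, eps=1e-7):
    """Return distance t along the ray to the triangle, or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:              # ray is parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = sub(origin, v0)
    u = dot(s, p) * inv             # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv     # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv            # distance along the ray
    return t if t > eps else None

# A ray shot down the +z axis hits a triangle lying in the z = 5 plane:
hit = intersect((0, 0, 0), (0, 0, 1), (-1, -1, 5), (1, -1, 5), (0, 1, 5))
print(hit)  # -> 5.0
```

A real renderer runs this (or its hardware equivalent) millions of times per frame against an acceleration structure, which is where memory bandwidth, as argued above, becomes the limiting factor on GPUs without dedicated RT units.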
PS. Because AMD is often not keen on using workarounds in their GPUs, do not expect Navi to be more power efficient than current GeForces; rather expect more performance at the same power consumption as an RX 580, or something around that. I bet the most powerful Navi GPU will use something close to Vega 56 levels of power. What everybody wants to see from Navi are better fan noise levels, which should be tuned by the third-party partners. Also, the ridiculous size of some Radeon cards could be avoided if the cooling were designed better.