Crytek releases Neon Noir Ray Tracing Benchmark

CPC_RedDawn:

And Nvidia a year ago was saying ray tracing couldn't be done on current APIs, with current hardware....... Crytek.... "Hold my beer!" Seriously, this is what should have been pushed first BEFORE Nvidia's RTX stuff, which the industry wasn't ready for. Just imagine if they had pushed something like this and given us a monster compute-driven GPU with an insane amount of shader cores, instead of cutting the chip in half and cramming in RT and Tensor cores, which took up space for those compute units. Demos like this would be in real games and running extremely well, not to mention everyone gets to try it out and see just how good it looks, and it even performs very well on older hardware. Which would then in turn have encouraged people to upgrade their older hardware to newer, more capable hardware. Then Nvidia could have used the money gained here to further develop RTX and actually release it in a more refined state, more than likely with cheaper hardware as well!
They couldn't build a bigger GPU because they are limited by TDP. Cutting the RT/Tensor cores would have cut the price but not led to a faster GPU. It uses 300W without doing RT/Tensor; it's not going to be different if the Tensor/RT were gone. Also, Nvidia did push this - it's essentially what Nvidia's Voxel Global Illumination was, except that was used for GI and not reflections. The problem, like I said in my previous post, is that it requires a lot of art setup time and it's extremely ineffective in dynamic lighting - which is why they don't use cone tracing on anything moving in the scene.
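In case it helps to picture what that actually involves: below is a rough, purely illustrative C++ sketch of the core cone-tracing loop used by voxel-based techniques like SVOGI/VXGI. It is not Crytek's or Nvidia's code, and sampleVoxelMip is a made-up stand-in for sampling a prefiltered 3D voxel mip chain. That voxel data is where the cost described above lives: it has to be built from the scene ahead of time and kept up to date when lighting or geometry changes.
[code]
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
struct Vec4 { float r, g, b, a; };   // rgb = prefiltered radiance, a = occlusion

// Hypothetical stand-in: a real engine samples a prefiltered 3D texture mip chain
// built by voxelizing the (mostly static) scene ahead of time. This stub just returns
// a dim, slightly opaque value so the sketch compiles and runs.
Vec4 sampleVoxelMip(const Vec3& /*worldPos*/, float /*mipLevel*/)
{
    return {0.1f, 0.1f, 0.1f, 0.05f};
}

// March one cone through the voxel mip chain, accumulating radiance front to back.
// The wider the cone gets, the coarser the mip it samples - cheap, but soft/blurry.
Vec4 traceCone(Vec3 origin, Vec3 dir, float coneAngle, float maxDist, float voxelSize)
{
    Vec4 result{0.0f, 0.0f, 0.0f, 0.0f};
    float dist = voxelSize;                       // start one voxel out to avoid self-sampling
    while (dist < maxDist && result.a < 0.95f)
    {
        float diameter = 2.0f * dist * std::tan(coneAngle * 0.5f);
        float mip = std::log2(std::max(diameter / voxelSize, 1.0f));
        Vec3 p{origin.x + dir.x * dist, origin.y + dir.y * dist, origin.z + dir.z * dist};
        Vec4 s = sampleVoxelMip(p, mip);

        // Front-to-back compositing: what is already accumulated occludes what is behind it.
        float w = 1.0f - result.a;
        result.r += w * s.r;
        result.g += w * s.g;
        result.b += w * s.b;
        result.a += w * s.a;

        dist += std::max(diameter * 0.5f, voxelSize * 0.25f);   // step size grows with the cone
    }
    return result;
}

int main()
{
    // Trace one ~60 degree cone straight up through a grid with 0.1-unit voxels.
    Vec4 gi = traceCone({0.0f, 0.0f, 0.0f}, {0.0f, 1.0f, 0.0f}, 1.047f, 50.0f, 0.1f);
    std::printf("accumulated GI: %.3f %.3f %.3f (occlusion %.2f)\n", gi.r, gi.g, gi.b, gi.a);
    return 0;
}
[/code]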
...shadow detail in the demo is very limited... many objects do not cast shadows... --------------------------------- There's a big difference compared to this one: [youtube=pNmhJx8yPLk]
Denial:

They couldn't build a bigger GPU because they are limited by TDP. Cutting the RT/Tensor cores would have cut the price but not led to a faster GPU. It uses 300W without doing RT/Tensor; it's not going to be different if the Tensor/RT were gone. Also, Nvidia did push this - it's essentially what Nvidia's Voxel Global Illumination was, except that was used for GI and not reflections. The problem, like I said in my previous post, is that it requires a lot of art setup time and it's extremely ineffective in dynamic lighting - which is why they don't use cone tracing on anything moving in the scene.
I think we are missing each other's point here. I agree about TDP and I agree about staying within those limits. But we also have to agree that in RTX games, with both Tensor and shader cores active, the cards stay within TDP. Now, the actual economic theory is this: the whole idea is to get ROI from R&D. Nvidia put billions into the development of tensor cores for the AI and deep learning industry. However, it is near impossible to recover all costs and make a profit from a growing industry in a short time, so they had to find a way to sell it into an established industry (such as gaming); hence the RTX API was built to leverage tensor cores for gaming. What we are discussing is that we don't need tensor cores for real-time ray tracing, as we were led to believe. All we need is RPM, true async compute (not just pre-emption), and more shaders to execute it. Sad to say, I never bought into their RTX implementation; I bought a 980 Ti on launch day and a 1080 Ti on launch day, but not a 2080 Ti. RTX seems to be something that will become the G-Sync of ray tracing. Soon there will be 'RTX compatible'.
I'm so surprised this runs perfectly @ 1920x1080 on my Ngreedia GTX 1660 Ti (non-OC): a constant 50+ up towards 70-ish FPS on Ultra. Score: 5845
Windows 10 Pro 64-bit (November 1909) / 9900K @ 5.2GHz all-core / 16GB DDR4-4000 @ C18 / GTX 1080 / 1TB NVMe. Ultra, 1920x1080: 7455 (60-102 FPS). Runs pretty slick.
I think this thing is problematic not because Crytek built it without DXR (and introduced sort of a second standard), but because, as far as I've understood, every implementation of DXR so far, and "ray tracing" such as this, doesn't do the same thing... If I understood correctly, some are just using it for reflections, sometimes it's used for shadows, other times for global illumination... I have the impression that if they all used the same, every card on the market right now would choke and throw up badly. That said, a lot of things called ray tracing don't actually even talk about the same stuff... or am I off here?
You lost me at "Crytek Launcher". o_O
angelgraves13:

Yeah, besides a few games... I don't see anything on the horizon. Prey is probably my favorite CryEngine game.
Sniper Ghost Warrior Contracts is coming out this month, though I believe it's not listed on the Wikipedia page, and not everyone is into that game. But there are many games that have not been released yet, though I believe many of them are probably effectively cancelled. Either way, my point was that it's been more popular than I think people give it credit for. And yes, I agree, Prey was a very good game and not horrible in performance on CryEngine; I wish there were more like it.
Denial:

They couldn't build a bigger GPU because they are limited by TDP. Cutting the RT/Tensor cores would have cut the price but not led to a faster GPU. It uses 300W without doing RT/Tensor; it's not going to be different if the Tensor/RT were gone. Also, Nvidia did push this - it's essentially what Nvidia's Voxel Global Illumination was, except that was used for GI and not reflections. The problem, like I said in my previous post, is that it requires a lot of art setup time and it's extremely ineffective in dynamic lighting - which is why they don't use cone tracing on anything moving in the scene.
Excuse me for maybe being naive, but this logic makes no sense. If an RTX 2080 Ti uses its 300W TDP for just the CUDA cores and not the RT/Tensor cores, then with the RT/Tensor cores in use wouldn't it exceed the 300W TDP limit anyway? This makes no sense; removing the RT/Tensor cores would have freed up die space, allowing for more CUDA cores.... sure, they might have hit TDP limits, but this is when optimisation comes in and clock speeds are reduced in order to meet said TDP limit. Having a better, more efficient architecture, coupled with much faster GDDR6 memory and more CUDA cores even at a lower clock speed, would have led to a much faster GPU for raw compute power. Heck, if TDP is really the reason why, then instead of pouring money into a technology (RTX) that simply isn't ready for the mass market, why not focus their resources on a node shrink and move from 12nm FF to 10 or 7nm? It's like someone else mentioned: they poured so much R&D into tensor cores for the AI and automotive industries that they are struggling to make a profit as those markets are still relatively new, so they needed a way to please investors and shareholders by using them for a new gimmick technology, selling it to gamers as the next big thing, when in reality it IS the next big thing, just not for at least another 3-5 years. This tech in the video should have been the first stepping stone; instead Nvidia treated it as a race and not a marathon.
angelgraves13:

It was a great engine at one time, but seems abandoned. Hell, Crytek might go bankrupt any second now... so what's the point in using the engine?
They can't go bankrupt, they are owned by EA, and EA is doing fine. EA could for sure close them down (as they are known to do), but if they are putting out projects like this, that should tell you EA still sees a point in Crytek and their engine. EA loves their Frostbite engine, but I don't think EA likes the idea of it being public. CryEngine remains EA's "Unity competitor".
CPC_RedDawn:

Excuse me for maybe being naive, but this logic makes no sense. If an RTX 2080 Ti uses its 300W TDP for just the CUDA cores and not the RT/Tensor cores, then with the RT/Tensor cores in use wouldn't it exceed the 300W TDP limit anyway? This makes no sense; removing the RT/Tensor cores would have freed up die space, allowing for more CUDA cores.... sure, they might have hit TDP limits, but this is when optimisation comes in and clock speeds are reduced in order to meet said TDP limit. Having a better, more efficient architecture, coupled with much faster GDDR6 memory and more CUDA cores even at a lower clock speed, would have led to a much faster GPU for raw compute power. Heck, if TDP is really the reason why, then instead of pouring money into a technology (RTX) that simply isn't ready for the mass market, why not focus their resources on a node shrink and move from 12nm FF to 10 or 7nm? It's like someone else mentioned: they poured so much R&D into tensor cores for the AI and automotive industries that they are struggling to make a profit as those markets are still relatively new, so they needed a way to please investors and shareholders by using them for a new gimmick technology, selling it to gamers as the next big thing, when in reality it IS the next big thing, just not for at least another 3-5 years. This tech in the video should have been the first stepping stone; instead Nvidia treated it as a race and not a marathon.
When the RT cores are in use, the CUDA cores draw a lot less power. This is because the RT cores' performance is weak enough to bottleneck the CUDA cores. They are doing a node shrink with their new GPUs, likely due next year, likely 7nm. What the node size is doesn't really matter if the performance is there. Turing doesn't struggle. I don't see an issue with pushing for new tech before it's prime time. PC has always been the place to see the future before it's ready. AMD did it with Mantle.
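To put some toy numbers on that argument (my own illustrative figures, not measurements, and real GPUs overlap these stages rather than running them strictly back to back), here is the kind of arithmetic being described: if the RT portion of a frame takes longer than the shading portion, the shaders sit partly idle each frame, which lines up with the lower CUDA power draw.
[code]
#include <cstdio>

int main()
{
    // Purely illustrative per-frame timings in milliseconds - not measurements.
    const double shadeMs = 8.0;   // time the CUDA/shader work would take on its own
    const double rtMs    = 14.0;  // time the ray-tracing stage takes

    // Toy assumption: the two stages run back to back with no overlap.
    const double frameMs = shadeMs + rtMs;
    const double shaderBusy = shadeMs / frameMs;   // fraction of the frame the shaders do work

    std::printf("Frame time: %.1f ms (%.1f FPS)\n", frameMs, 1000.0 / frameMs);
    std::printf("Shader busy fraction: %.0f%% (idle shaders draw less power)\n", shaderBusy * 100.0);
    std::printf("Shading-only FPS without the RT stage: %.1f\n", 1000.0 / shadeMs);
    return 0;
}
[/code]
Whether the long stage is the RT cores themselves or the extra shading work they feed is exactly what the next few posts argue about.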
the RT cores are not the bottleneck.
Astyanax:

the RT cores are not the bottleneck.
If that were true then enabling RT wouldn't affect performance. I guess I can't say whether the cores themselves are the root of it, but the CUDA cores are being bottlenecked with RT turned on. Since we can see what performance looks like without RT, we know the baseline of what the CUDA cores can do. If anything, with RT off, CUDA has a harder time thanks to the addition of traditional reflections/shadows/GI. Yet since performance goes down with RT, we have what's called a bottleneck.
Cyberdyne:

If that were true then enabling RT wouldn't affect performance
Ray tracing is not done how you think it's done. The RT cores are for sorting rays (is this on screen, or isn't it), and they pass the results on to the next stage of rendering, which is done in the traditional way. https://images.anandtech.com/doci/13282/GeForce_EditorsDay_Aug2018_Updated090318_1536034900-compressed-031_575px.png The shadow, reflection, or lighting you end up with gets done by the traditional shading pipe. You can sort faster, but it's the traditional shaders that are in contention. https://www.anandtech.com/show/13282/nvidia-turing-architecture-deep-dive/5
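A rough sketch of the split being described here, using my own hypothetical C++ stubs rather than Nvidia's actual API (traverseBVH and shadeHit are invented names): the fixed-function RT cores answer "what does this ray hit?", while the visible shadow/reflection/lighting result is still computed by ordinary shader code, which is why faster intersection alone doesn't remove the shading load.
[code]
#include <cstdio>
#include <optional>

struct Ray   { float origin[3]; float dir[3]; };
struct Hit   { int triangleId; float t; };   // closest intersection along the ray
struct Color { float r, g, b; };

// Stand-in for the fixed-function part: BVH traversal + ray/triangle intersection.
// On Turing this is the work the RT cores accelerate. Stubbed so the sketch runs.
std::optional<Hit> traverseBVH(const Ray& ray)
{
    (void)ray;
    return Hit{42, 10.0f};   // pretend the ray hit triangle 42 at distance 10
}

// Stand-in for the programmable part: the hit/miss shading that produces the visible
// reflection, shadow or GI result. This still runs on the ordinary CUDA/SM units.
Color shadeHit(const std::optional<Hit>& hit)
{
    if (!hit) return {0.2f, 0.3f, 0.6f};   // miss: fall back to sky/environment
    return {0.8f, 0.8f, 0.8f};             // hit: evaluate materials, lights, etc.
}

int main()
{
    Ray ray{{0.0f, 0.0f, 0.0f}, {0.0f, 0.0f, 1.0f}};
    // The intersection is found by the RT hardware, but the visible result is still
    // shaded by the traditional pipeline, so faster sorting alone can't remove shader load.
    Color c = shadeHit(traverseBVH(ray));
    std::printf("shaded color: %.1f %.1f %.1f\n", c.r, c.g, c.b);
    return 0;
}
[/code]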
Cyberdyne:

When the RT cores are in use, the CUDA cores draw a lot less power. This is because the RT cores' performance is weak enough to bottleneck the CUDA cores. They are doing a node shrink with their new GPUs, likely due next year, likely 7nm. What the node size is doesn't really matter if the performance is there. Turing doesn't struggle. I don't see an issue with pushing for new tech before it's prime time. PC has always been the place to see the future before it's ready. AMD did it with Mantle.
This makes more sense to me; if this is the case then Turing was an even bigger mistake. Why on earth put RT/Tensor cores onto your chip that end up starving your CUDA cores and in turn massively decreasing performance? On the one hand they were touting Turing as a 4K 60fps monster, then on the other hand they were pushing RTX, which tanks performance and turns the cards into 1080p cards.... More proof the tech is not ready. It's like Bugatti making a new car that can do 1000mph.... but when it does, it explodes..... Also, Mantle was waaaaay more ready than RTX. I had an HD 7970 GHz GPU when Mantle came out, and in BF4 my performance shot up; I was able to max the game out at Ultra settings at 1080p and gained about 30% better performance with MUCH higher minimum frame rates. Mantle eventually pushed the industry to adopt low-level APIs after years and years of massive overheads, which eventually became Vulkan, and Microsoft followed suit with DX12. Not to mention AMD basically gave their Mantle code away to the Khronos Group; can you see Nvidia doing the same with their tech? Unless it becomes unprofitable they will never give it away to benefit the whole industry, just their wallets. All RTX has done is show how far behind we are with the hardware needed to run this properly.
CPC_RedDawn:

This makes more sense to me; if this is the case then Turing was an even bigger mistake. Why on earth put RT/Tensor cores onto your chip that end up starving your CUDA cores and in turn massively decreasing performance? On the one hand they were touting Turing as a 4K 60fps monster, then on the other hand they were pushing RTX, which tanks performance and turns the cards into 1080p cards.... More proof the tech is not ready. It's like Bugatti making a new car that can do 1000mph.... but when it does, it explodes..... Also, Mantle was waaaaay more ready than RTX. I had an HD 7970 GHz GPU when Mantle came out, and in BF4 my performance shot up; I was able to max the game out at Ultra settings at 1080p and gained about 30% better performance with MUCH higher minimum frame rates. Mantle eventually pushed the industry to adopt low-level APIs after years and years of massive overheads, which eventually became Vulkan, and Microsoft followed suit with DX12. Not to mention AMD basically gave their Mantle code away to the Khronos Group; can you see Nvidia doing the same with their tech? Unless it becomes unprofitable they will never give it away to benefit the whole industry, just their wallets. All RTX has done is show how far behind we are with the hardware needed to run this properly.
Turing is both of those things, just not at the same time. I never felt misled by their marketing; I certainly was not expecting the 2080 Ti to do 4K 60fps with RT. But Turing is the best at 4K, and it's the best at RT. Real-time ray tracing has to start somewhere, and NV is willing to invest. Mantle worked out of the gate, RT also works out of the gate, "it just works!" lol. Mantle was focused on more FPS; RT never made such claims. RT offers ray tracing in real time, and it does that. RT is also not proprietary. When AMD supports RT, these current RTX games will work on AMD GPUs out of the box. That's been the case since RTX was a thing; can't say the same about Mantle.
Astyanax:

Ray tracing is not done how you think it's done. The RT cores are for sorting rays (is this on screen, or isn't it), and they pass the results on to the next stage of rendering, which is done in the traditional way. https://images.anandtech.com/doci/13282/GeForce_EditorsDay_Aug2018_Updated090318_1536034900-compressed-031_575px.png The shadow, reflection, or lighting you end up with gets done by the traditional shading pipe. You can sort faster, but it's the traditional shaders that are in contention. https://www.anandtech.com/show/13282/nvidia-turing-architecture-deep-dive/5
Idk, that depends on whether that sorting is fast enough, right? There's a reason they increase the number of RT cores per GPU; clearly they are in contention as well.
This is more proof that the 1080 Ti was a DX11 powerhouse, but in DX12 it falls short.