Shaky Cam Video - Battlefield 5 Ray Tracing Demos from NVIDIA RTX Editors event

data/avatar/default/avatar11.webp
Turn on the fckn FPS counter please. I sense NVIDIA is hiding two things: the potential raw power of Turing, and the FPS drop with RT on.
https://forums.guru3d.com/data/avatars/m/16/16662.jpg
Administrator
Jespi:

Turn on the fckn FPS counter please. I sense NVIDIA is hiding two things: the potential raw power of Turing, and the FPS drop with RT on.
How about being a little more patient and waiting for the final game to be released, as well as the reviews?
https://forums.guru3d.com/data/avatars/m/243/243702.jpg
Weird. At first I thought that the raytraced images showed distortion caused by the gun covering a large portion of the FoV, since rays are traced from the camera's origin. But then I realized that the clean image has additional reflections. Transparency, reflection and refraction effects are definitely much better with raytracing. But why was it so broken without it? That's not normal.
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
I must admit, that looks very nice. Unlike most RTX demos, where I'm like "this could easily be accomplished without an entire API", this actually has a noticeable and practical impact. Too bad it's an EA game though; I won't be buying it.
https://forums.guru3d.com/data/avatars/m/268/268759.jpg
Fox2232:

Weird. At first I thought that the raytraced images showed distortion caused by the gun covering a large portion of the FoV, since rays are traced from the camera's origin. But then I realized that the clean image has additional reflections. Transparency, reflection and refraction effects are definitely much better with raytracing. But why was it so broken without it? That's not normal.
New tech looks better if you ruin the old tech. Nvidia always does that: they limit the old tech's potential to make the new one look better. Greetings
https://forums.guru3d.com/data/avatars/m/219/219428.jpg
Fox2232:

Weird. At first I thought that the raytraced images showed distortion caused by the gun covering a large portion of the FoV, since rays are traced from the camera's origin. But then I realized that the clean image has additional reflections. Transparency, reflection and refraction effects are definitely much better with raytracing. But why was it so broken without it? That's not normal.
Because they require difficult programming to get them to work properly. Most companies spend their budget on other stuff than raytracing.
data/avatar/default/avatar18.webp
Hilbert Hagedoorn:

How about being a little more patient, wait for the final game to be released as well as the reviews?
Newbies are allowed to be rude. Because they don't care. That's how they roll 😀
Fox2232:

Weird. At first I thought that the raytraced images showed distortion caused by the gun covering a large portion of the FoV, since rays are traced from the camera's origin. But then I realized that the clean image has additional reflections. Transparency, reflection and refraction effects are definitely much better with raytracing. But why was it so broken without it? That's not normal.
How is it broken? The guy from DICE said that the non-RTX game benefits from all the most recent technologies.
https://forums.guru3d.com/data/avatars/m/227/227986.jpg
Ehh... Battlefield 5. I will sadly skip. I tried to get into BF1 so many times now. I keep reinstalling, but sadly the pace of the game is too fast for me. Run, gun, die... run, gun, die! BFBC2 got it right 🙁
data/avatar/default/avatar33.webp
Amx85:

New tech looks better if you ruin the old tech. Nvidia always does that: they limit the old tech's potential to make the new one look better. Greetings
That's exactly what they did when releasing the 9xx series and adding a new "AA method" called MFAA. For some reason, once they introduced the driver with this method, people on the forums started complaining about more aliasing in games overall. Basically, the whole point of MFAA is to enhance the effect of MSAA. Coincidence? More like Nvidia intentionally broke image quality and "fixed" it with the artificial AA method MFAA.
data/avatar/default/avatar02.webp
GlennB:

Because they require difficult programming to get them to work properly. Most companies spend their budget on other stuff than raytracing.
The video clearly shows how much effort they put into the graphics without raytracing. When he moves the gun above the water at 2:50, it becomes obvious. And something tells me that AA is lower with raytracing on, because they do not want to show all of that ghosting from the AA (I thought Frostbite had a better AA implementation than UE, I guess I was wrong). While I am sure raytracing will increase visual quality, this is not a good way to show the improvements...
https://forums.guru3d.com/data/avatars/m/261/261501.jpg
Yes, I get it, it looks nice. ...But I'm sure as hell not paying $1100 for it.
https://forums.guru3d.com/data/avatars/m/243/243702.jpg
Noisiv:

How is it broken? The guy from DICE said that the non-RTX game benefits from all the most recent technologies.
Haven't you noticed that the reflections on the watery surfaces (the ground) had some kind of blur in the areas below the gun's magazine? Right at the vertical line where the magazine ends, the reflections are just fine. And the inner circle of the iron sights was blurry too.
GlennB:

Because they require difficult programming to get them to work properly. Most companies spend their budget on other stuff than raytracing.
Actually, the proper rendering was the raytraced one; the standard rendering had the bugs.
https://forums.guru3d.com/data/avatars/m/16/16662.jpg
Administrator
Just played BF5 with RTX on a build released today; I played it at the NVIDIA event. Butter-smooth performance, at least at 1920x1080. It does look really good, to be brutally honest. Time will tell of course, but I was impressed.
https://forums.guru3d.com/data/avatars/m/266/266438.jpg
I came here for some shaky cam videos but they were perfectly fine. Disappointed. 🙁 Now I am anticipating some vertical videos of a big screen. 😛
data/avatar/default/avatar05.webp
As a game dev... RTX is absolutely great news. Slap on some PBR shaders, put in an area light and bam, 90% of the job is done. This is absolutely incredible and will revolutionize the industry in the next 5 years. Games that required teams of 200 artists will now only need half. No longer will hundreds of dev-hours be spent adjusting the light here, removing a reflection there to gain performance, changing the level design to cheat a "mood", or making a specific shader for faking AO... imagine that! You have to understand that to accomplish the level of detail ray tracing gives you, but with traditional techniques, you would need 5x the GPU power. So at some point we will reach an inflection point where it's more efficient (and less costly) to use ray tracing than the old methods for equivalent rendering quality. Yeah, NVIDIA really rocked it. Your normal gamer might not have realized it yet, but this is really groundbreaking stuff.
https://forums.guru3d.com/data/avatars/m/200/200386.jpg
To be honest, skipping the 1st gen would be better than buying it, given the extra performance and value for money the 2nd gen might bring. I don't expect many good games with true ray tracing to hit anytime soon, as current consoles don't support it until the next gen arrives. So yeah, I will be skipping for two reasons: one is money (cards are still costly due to mining), and the other is performance; when true ray tracing games hit the market, it won't be easy to sell an RTX 2070 or 2080 with its limited performance once the 2nd-gen cards arrive.
data/avatar/default/avatar27.webp
cliffgamerz:

To be honest, skipping the 1st gen would be better than buying it, given the extra performance and value for money the 2nd gen might bring. I don't expect many good games with true ray tracing to hit anytime soon, as current consoles don't support it until the next gen arrives. So yeah, I will be skipping for two reasons: one is money (cards are still costly due to mining), and the other is performance; when true ray tracing games hit the market, it won't be easy to sell an RTX 2070 or 2080 with its limited performance once the 2nd-gen cards arrive.
You have to remember that this is the "brute-force" implementation of RTX with barely any optimization. They don't even cull the rays properly (as per the video). So what you can do now with however many Gigarays (...makes me chuckle every time), you will certainly be able to do with fewer rays in the future, especially with deep-learning algorithms. That's why NVIDIA put emphasis on the drivers delivering significant changes to the way rendering is processed versus previous non-deep-learning GPUs. Obviously we'll see how it goes, but that is some pretty darn exciting stuff to be sure!
https://forums.guru3d.com/data/avatars/m/216/216349.jpg
Fox2232:

Weird. At first I thought that the raytraced images showed distortion caused by the gun covering a large portion of the FoV, since rays are traced from the camera's origin. But then I realized that the clean image has additional reflections. Transparency, reflection and refraction effects are definitely much better with raytracing. But why was it so broken without it? That's not normal.
I was thinking the same. The RT effects are much better and more noticeable than I was expecting, but at the same time the game looks much darker and worse without RT, like half the light sources were off or something. Weird stuff...
Hilbert Hagedoorn:

Just played BF5 with RTX on a build released today; I played it at the NVIDIA event. Butter-smooth performance, at least at 1920x1080. It does look really good, to be brutally honest. Time will tell of course, but I was impressed.
So they showed the game at 1080p again. The fact that Nvidia is always showing/highlighting games at 1080p seems to indicate that enabling RT means a considerable performance hit. And when we consider that the buyers of this card are owners of 4K 144Hz screens who want/need all the performance available to run games at those settings, there's a good chance many will not use RT because of the performance hit. This smells like PhysX, at least for now... Hope I'm wrong.
https://forums.guru3d.com/data/avatars/m/243/243702.jpg
PolishRenegade:

You have to remember that this is the "brute-force" implementation of RTX with barely any optimization. They don't even cull the rays properly (as per the video). So what you can do now with however many Gigarays (...makes me chuckle every time), you will certainly be able to do with fewer rays in the future, especially with deep-learning algorithms. That's why NVIDIA put emphasis on the drivers delivering significant changes to the way rendering is processed versus previous non-deep-learning GPUs. Obviously we'll see how it goes, but that is some pretty darn exciting stuff to be sure!
Actually, the number of required rays increases with the task you give them. With dry surfaces only, 2~3 rays per pixel may be enough with advanced clean-up methods. Add reflections and refractions and you need 6 for a basic effect and 12 for a good-looking one, as you need another bounce from the reflected/refracted surface. That's still quite OK at 1080p, as it works out to 24.88M rays per frame, and at 75 fps that's 1866M rays per second. It's also not mandatory to get raytraced information at full resolution, but at higher resolutions like 1440p/4K one could possibly notice that the raytraced IQ is not sufficient.

Secondly, raytracing can be done the old-fashioned way from decades ago, which delivers higher IQ anyway. Split the scene into 8x8 blocks and fire your number of rays per block. Split each block into four 4x4 blocks, do the raytracing for those, and include the 8x8 block's result at 25% weight. Split the 4x4 blocks into 2x2 blocks and repeat with 25% weight for the larger block. Repeat until you are at a 1x1 matrix. The final pixel then consists of information from all four levels, with progressively lower weight given to the coarser levels. This method has about 27.778% computational overhead, but it has the big advantage of always having something at hand: the moment the procedure works out that the next, more detailed pass would push rendering time beyond what's desirable, the data can be used as is. Basically, the final pixel is a weighted combination of each pass in the following ratio:

1x1 matrix weight: 75%
2x2 matrix weight: 18.75%
4x4 matrix weight: 4.69%
8x8 matrix weight: 1.56%

Each pass also takes roughly that percentage of the computational time, so it is a pretty fair method. You have statistical information on how long each pass takes and can easily predict the time of the next one, so you can dynamically adjust the number of rays to deliver the highest quality in the time available per frame. Let's say you run 8 rays per block, you have finished the 2x2 matrix, and you know there is no way in hell you can do the full 1x1 matrix. But you can do 1/2 of the 1x1 matrix in a mesh (top-left pixel of each 2x2 and bottom-right pixel of each 2x2). This requires (1+4+16+32) * 8 = 424 rays per 8x8 block. In the next pass you run only 5 rays per block, which requires (1+4+16+64) * 5 = 425 rays per 8x8 block, and even the 1x1 pass is complete. Or, in heavy situations, you can go for more rays in certain blocks but stop at the 2x2 matrix.

The raytraced depth for each matrix can be determined by a change threshold between the upper and lower matrix. Let's say you had a certain color result in a 4x4 matrix, then computed its four 2x2 matrices, and one of those 2x2 matrices was within 2% HSL of the information in the 4x4 matrix. Chances are there is not much to be gained from calculating 1x1 there. If another 2x2 matrix showed a 20% difference in HSL, it's a good idea to calculate the details there. This is an especially good idea if you go down to a 0.5x0.5 matrix (subpixel level), because then the edges of objects can get additional samples while the benefit for something flat may be tiny.
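For anyone who wants to see the coarse-to-fine scheme above in code, here is a minimal sketch in Python, purely illustrative and not taken from any actual engine. The `trace_block` stand-in and the `time_left` budget check are hypothetical; the blending follows the 75% own samples / 25% inherited-from-the-coarser-block rule described above, which is what produces the 75 / 18.75 / 4.69 / 1.56 % per-level weights.

```python
import random

def trace_block(x, y, size, rays):
    """Stand-in for a real ray tracer: pretend to average `rays` ray results
    for a size x size block of pixels at (x, y). Purely illustrative."""
    rng = random.Random(hash((x, y, size)))
    return sum(rng.random() for _ in range(rays)) / rays

def shade_tile(tile_x, tile_y, rays_per_block=8, time_left=lambda: True):
    """Coarse-to-fine shading of one 8x8 tile.

    Each finer level blends its own samples (75%) with the value inherited
    from the coarser level (25%), which works out to per-level weights of
    75 / 18.75 / 4.69 / 1.56 %. If the frame-time budget runs out, the last
    completed level is simply reused for the remaining pixels.
    """
    size = 8
    values = {(0, 0): trace_block(tile_x, tile_y, size, rays_per_block)}
    while size > 1 and time_left():
        size //= 2
        finer = {}
        for (bx, by), parent in values.items():
            for sx in (0, 1):
                for sy in (0, 1):
                    cx, cy = bx * 2 + sx, by * 2 + sy
                    own = trace_block(tile_x + cx * size, tile_y + cy * size,
                                      size, rays_per_block)
                    # 75% own samples + 25% inherited from the coarser block
                    finer[(cx, cy)] = 0.75 * own + 0.25 * parent
        values = finer
    # Expand the last completed level to per-pixel values for the 8x8 tile.
    return [[values[(px // size, py // size)] for px in range(8)]
            for py in range(8)]

if __name__ == "__main__":
    tile = shade_tile(0, 0)
    print(f"shaded an {len(tile)}x{len(tile[0])} tile")
    # The ray-budget arithmetic from the post: 12 rays per pixel at 1920x1080
    # and 75 fps gives roughly 24.88M rays per frame and 1866M rays per second.
    print(1920 * 1080 * 12 / 1e6, "M rays/frame,",
          1920 * 1080 * 12 * 75 / 1e6, "M rays/s")
```

Stopping the loop early (the `time_left` check) is what gives the "always have something usable" property: the coarser result is already a valid, if blurrier, value for that tile.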
data/avatar/default/avatar04.webp
So, I'll just explain something that is obvious to anyone who understands modern rendering tech but clearly not obvious to anyone who reads or writes for this site (and probably most people who watch those videos): what they are showing is the difference between SSR (screen space reflections) and raytraced reflections.

SSR works by approximating raytracing in screen space. For each pixel on a reflective surface a ray is fired out, and the camera's depth and colour buffers are used to perform an approximation of raytracing. You can imagine drawing a line in 3D in the direction of the reflected light and reading the depth of each pixel along it until the line disappears behind another surface; nominally that is the reflected point if the intersection point is close to the line, otherwise it is probably an occlusion. Doing it efficiently and robustly is a little more involved than this, but this explanation is very close to the reality.

This approximation can be quite accurate in simple cases. However, because the camera's image is being used, no back-facing surface can ever be reflected (you will never see your face in a puddle!), nothing off screen can be reflected, and geometry behind other geometry from the point of view of the camera can never be reflected. This last point explains why the gun in an FPS causes areas in the reflections to be hidden. The reason is very simple: those pixels should be reflecting something which is underneath the gun in the camera's 2D image. It also explains why the explosion is not visible in the car door: it is out of shot. Actually, it also appears that the BF engine does not reflect the particle effects anyway (probably the reflections are done earlier in the pipeline than the particles are drawn). Using real raytracing for reflections avoids all of these issues (at the cost of doing full raytracing, obviously 🙂).

Honestly, I think these tech presentations often do more harm than good, because the people who watch them don't actually understand what they are seeing. Probably better to just show shiny stuff and say "look! it's shinier now! buy our new hardware!" 🙂
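To make the SSR limitation concrete, here is a minimal, hypothetical sketch of a screen-space reflection march in Python (the `project` helper and the buffer layout are assumptions for illustration, not the Frostbite implementation). Because it only ever reads the depth and colour the camera already rendered, anything off-screen, back-facing, or hidden behind the gun simply cannot appear in the reflection, which is exactly the artifact discussed above; a raytraced reflection instead intersects the actual scene geometry.

```python
import numpy as np

def ssr_trace(origin, reflect_dir, project, depth_buf, color_buf,
              steps=64, step_len=0.1, thickness=0.05):
    """Minimal screen-space reflection march.

    origin / reflect_dir : world-space start point and reflected view direction
    project(p)           : hypothetical helper mapping a world-space point to
                           (x, y, view_depth) in pixel coordinates
    depth_buf, color_buf : what the camera has already rendered this frame

    Returns the reflected colour, or None when the ray leaves the screen or
    the reflected surface is hidden behind closer geometry -- the exact cases
    where SSR shows holes (under the gun, off-screen explosions, etc.).
    """
    h, w = depth_buf.shape
    p = np.asarray(origin, dtype=float)
    d = np.asarray(reflect_dir, dtype=float)
    d /= np.linalg.norm(d)
    for _ in range(steps):
        p = p + d * step_len
        x, y, ray_depth = project(p)
        xi, yi = int(round(x)), int(round(y))
        if not (0 <= xi < w and 0 <= yi < h):
            return None                       # reflected point is off-screen
        scene_depth = depth_buf[yi, xi]
        if ray_depth > scene_depth:           # ray passed behind what the camera sees
            if ray_depth - scene_depth <= thickness:
                return color_buf[yi, xi]      # hit: reuse the rendered colour
            return None                       # occluded, e.g. hidden by the gun
    return None                               # nothing found within the march budget
```

In practice you would call this once per reflective pixel with the reflected view vector, march in screen space rather than world space, and fall back to a cube map or a fade-out when it returns None; real implementations also use a hierarchical depth buffer to cut the step count. The failure cases, though, are exactly the ones described in the post above.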