Shaky Cam Video - Battlefield 5 Ray Tracing Demos from NVIDIA RTX Editors event
Over at the NVIDIA editors day for GeForce RTX, EA/DICE presented some material that isn't under embargo, so we can post about it. Below are two shaky cam videos in which you can hear the developers talking about using RTX in Battlefield 5.
PolishRenegade
Junior Member
Posts: 16
Joined: 2018-04-19
#5576742 Posted on: 08/22/2018 07:38 PM
You have to remember that this is the "brute-force" implementation of RTX with barely any optimization. They don't even cull the rays properly (as per the video). So what you can do now with however many Gigarays (... makes me chuckle every time), you will certainly be able to do with fewer rays in the future, especially with deep-learning algorithms. That's why NVIDIA put emphasis on the drivers delivering significant changes to the way rendering is processed versus previous non-deep-learning GPUs.
Obviously we'll see how it goes but that is some pretty darn exciting stuff to be sure!
To be honest, skipping the 1st gen might be better given the performance and value for money the 2nd gen could bring. I don't expect many good games with true ray tracing to hit anytime soon, as current consoles won't support it until the next generation arrives. So yeah, I will be skipping for two reasons: money (cards are still costly due to mining), and concerns about performance once true ray tracing games hit the market, which would make an RTX 2070 or 2080 hard to sell given its limited performance when true 2nd gen cards arrive.
H83
Senior Member
Posts: 4499
Joined: 2009-09-08
#5576760 Posted on: 08/22/2018 08:05 PM
Weird. At first I thought that the raytraced images showed distortion caused by the gun covering a large portion of the FoV, since rays are traced from the camera's origin. But then I realized that the clean image has additional reflections.
Transparency, reflection and refraction effects are definitely much better with raytracing. But why was it so broken without it? That's not normal.
I was thinking the same. The RT effects are much better and more noticeable than I was expecting, but at the same time the game looks much darker and worse without RT, as if half the light sources were off or something. Weird stuff...
Just played BF5 with RTX on a build released today; I played it at the NVIDIA event. Butter smooth performance, at least at 1920x1080. It does look really good, to be brutally honest. Time will tell of course, but I was impressed.
So they showed the game at 1080p again. The fact that Nvidia is always showing/highlighting games at 1080p seems to indicate that enabling RT means a considerable performance hit. And when we consider that the buyers of this card are owners of 4K 144Hz screens who want/need all the performance available to run games at those settings, there's a good chance many will not use RT because of the performance hit. This smells like PhysX, at least for now... Hope I'm wrong.
Fox2232
Senior Member
Posts: 11808
Joined: 2012-07-20
#5576794 Posted on: 08/22/2018 08:48 PM
You have to remember that this is the "brute-force" implementation of RTX with barely any optimization. They don't even cull the rays properly (as per the video). So what you can do now with however many Gigarays (... makes me chuckle every time), you will certainly be able to do with fewer rays in the future, especially with deep-learning algorithms. That's why NVIDIA put emphasis on the drivers delivering significant changes to the way rendering is processed versus previous non-deep-learning GPUs.
Obviously we'll see how it goes but that is some pretty darn exciting stuff to be sure!
Actually, the number of required rays increases with the task you give them. With dry surfaces only, 2~3 rays per pixel may be enough with advanced denoising methods.
Add reflections and refractions and you need 6 for a basic and 12 for a good-looking effect, as you need another bounce from the reflected/refracted surface.
That's still quite OK at 1080p, as 12 rays per pixel works out to 24.88M rays per frame, and at 75 fps that's 1866M rays per second. And it is not mandatory to get raytraced information at full resolution. But at higher resolutions like 1440p/4K, one could possibly notice that the raytraced IQ is not sufficient.
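The ray-budget arithmetic is easy to sanity-check; a tiny sketch using the post's figures (12 rays per pixel, 75 fps):

```python
# Rays-per-frame and rays-per-second budget for 1080p raytraced reflections.
width, height = 1920, 1080
rays_per_pixel = 12   # "good looking" reflections/refractions, per the post
fps = 75

rays_per_frame = width * height * rays_per_pixel
rays_per_second = rays_per_frame * fps

print(rays_per_frame / 1e6)   # ~24.88 million rays per frame
print(rays_per_second / 1e6)  # ~1866 million rays per second
```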
Secondly, raytracing can be done in an old fashion from decades ago, which delivers higher IQ anyway. Split the scene into 8x8 blocks and shoot your number of rays per block.
Split each block into four 4x4 blocks. Do raytracing for those blocks and include the 8x8 block's result at 25% weight.
Split the 4x4 blocks into 2x2 blocks and repeat, again carrying 25% weight from the larger block.
Repeat till you are at the 1x1 matrix.
The final pixel then consists of information from all 4 levels of processing, progressively giving lower weight to the coarser levels. This method has 27.778% computational overhead, but it has the good advantage of always having something at hand: the moment the procedure determines that the next, more detailed pass would push rendering time beyond what's desirable, the data can be used as-is.
Basically, the final pixel is a weighted combination of each pass in the following ratio:
1x1 matrix weight: 75%
2x2 matrix weight: 18.75%
4x4 matrix weight: 4.69%
8x8 matrix weight: 1.56%
And each level takes about the same percentage of the computational time as its weight, so it is a pretty fair method.
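Those percentages fall out of the 25% carry rule: each level keeps 75% of its available weight and passes 25% up to the next coarser level, with the coarsest keeping the remainder. A minimal sketch deriving them (the loop structure is mine, the rule is from the post):

```python
# Derive per-level blend weights from the "carry 25% to the coarser level" rule.
levels = ["1x1", "2x2", "4x4", "8x8"]
weights = []
carry = 1.0                        # weight available at the finest level
for i, level in enumerate(levels):
    if i < len(levels) - 1:
        w = carry * 0.75           # keep 75%, pass 25% upward
    else:
        w = carry                  # coarsest level keeps the remainder
    weights.append(w)
    carry *= 0.25

for level, w in zip(levels, weights):
    print(f"{level}: {w:.2%}")
# 1x1: 75.00%, 2x2: 18.75%, 4x4: 4.69%, 8x8: 1.56%
```

The weights sum to exactly 1.0, so no normalization pass is needed.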
You have statistical information on the time each pass takes, and can easily predict the next pass's time. So you can dynamically adjust the number of rays to deliver the highest quality in the available time per frame.
Let's say you run 8 rays per block. When you finish the 2x2 matrix, you know there is no way in hell you can do the full 1x1 matrix. But you can do 1/2 of the 1x1 matrix in a mesh (top-left and bottom-right pixels of each 2x2). This requires (1+4+16+32) * 8 = 424 rays per 8x8 block.
In the next pass it runs only 5 rays per block, requiring (1+4+16+64) * 5 = 425 rays per 8x8 block, and even the 1x1 pass is complete. Or, in heavy situations, you can go for more rays in certain blocks but stop at the 2x2 matrix.
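The per-block ray counts in that example can be checked directly; a quick sketch using the post's sample counts per 8x8 block (1 at 8x8, 4 at 4x4, 16 at 2x2, 64 at 1x1):

```python
# Frame A: 8 rays per sample, but only half of the 1x1 level (checkerboard),
# so 32 of the 64 per-pixel samples are traced.
rays_frame_a = (1 + 4 + 16 + 32) * 8

# Frame B: 5 rays per sample, with the 1x1 level complete (all 64 samples).
rays_frame_b = (1 + 4 + 16 + 64) * 5

print(rays_frame_a)  # 424 rays per 8x8 block
print(rays_frame_b)  # 425 rays per 8x8 block
```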
The raytraced depth of each matrix can be determined by a change threshold between the upper and lower matrix.
Let's say you had a certain color result on a 4x4 matrix, and then its four 2x2 matrices were computed. If one of those 2x2 matrices was within 2% in HSL of the 4x4 result, chances are there is not much to be gained from calculating the 1x1 level there. But if another 2x2 matrix showed a 20% difference in HSL, it's a good idea to calculate the details there.
This is an especially good idea if you go into a 0.5x0.5 matrix (subpixel level), because then the edges of objects can use additional samples while the benefit for something flat may be tiny.
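A refinement test along those lines might look like this sketch; the function name, the colors, and the tuple layout are hypothetical, and only the 2% HSL threshold idea comes from the post:

```python
# Decide whether a child block needs further subdivision by comparing its
# color to the parent block's color (a stand-in for the HSL comparison).
def needs_refinement(parent_color, child_color, threshold=0.02):
    """Refine only where the child differs from its parent by more than
    `threshold` (fractional difference in any channel)."""
    return any(abs(p - c) > threshold for p, c in zip(parent_color, child_color))

parent = (0.50, 0.40, 0.30)          # hypothetical 4x4 block result (H, S, L)
flat_child = (0.505, 0.401, 0.299)   # within 2%: skip the 1x1 pass here
edge_child = (0.70, 0.45, 0.20)      # ~20% off: spend rays on details here

print(needs_refinement(parent, flat_child))  # False
print(needs_refinement(parent, edge_child))  # True
```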
pSXAuthor
Junior Member
Posts: 10
Joined: 2011-06-04
#5576885 Posted on: 08/22/2018 11:16 PM
So, I'll just explain something that is obvious to anyone who understands modern rendering tech but clearly not obvious to anyone who reads or writes for this site (and probably most people who watch those videos):
What they are showing is the difference between SSR (screen space reflections) and raytraced reflections. SSR works by approximating raytracing in screen space. For each pixel on a reflective surface, a ray is fired out and the camera's depth and colour buffers are used to approximate raytracing: imagine drawing a line in 3D in the direction of the reflected light and reading the depth at each pixel along it until the line disappears behind another surface. Nominally that will be the reflected point if the intersection point is close to the line; otherwise it is probably an occlusion. Doing it efficiently and robustly is a little more involved than this, but the explanation is very close to the reality.
This approximation can be quite accurate in simple cases. However, because the camera's image is being used: no back-facing surfaces can ever be reflected (you will never see your face in a puddle!), nothing off screen can be reflected, and geometry behind other geometry from the point of view of the camera can never be reflected. This last point explains why the gun in an FPS causes areas of the reflections to be hidden: those pixels should be reflecting something which is underneath the gun in the camera's 2D image. It also explains why the explosion is not visible in the car door: it is out of shot. Actually, it appears that the BF engine does not reflect the particle effects anyway (probably the reflections are computed earlier in the pipeline than the particles are drawn).
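The screen-space march described above can be sketched against a toy 1D depth buffer. Everything here (the function name, parameters, and the single-row buffer) is illustrative, not Frostbite's actual SSR:

```python
# Toy screen-space reflection march: step along the reflected ray in screen
# space, compare the ray's depth to the depth buffer, and stop when the ray
# passes behind a surface. Real SSR works on a 2D buffer with perspective-
# correct stepping; this 1D version just shows the core loop.
def ssr_march(depth_buffer, start_x, start_depth, dx, dz, max_steps=64, thickness=0.5):
    x, z = float(start_x), start_depth
    for _ in range(max_steps):
        x += dx
        z += dz
        ix = int(round(x))
        if not (0 <= ix < len(depth_buffer)):
            return None                  # ray left the screen: no data to reflect
        scene_z = depth_buffer[ix]
        if z > scene_z:                  # ray went behind a surface
            if z - scene_z < thickness:
                return ix                # close enough to the surface: a hit
            return None                  # probably an occlusion, not a hit
    return None

# A wall at depth 5.0 occupying pixels 10..19 of a 20-pixel row.
depth = [100.0] * 10 + [5.0] * 10
print(ssr_march(depth, start_x=0, start_depth=2.0, dx=1.0, dz=0.25))  # 13 (hit on the wall)
print(ssr_march(depth, start_x=0, start_depth=2.0, dx=1.0, dz=-0.1))  # None (ray exits screen)
```

The early-out on `thickness` is what produces the occlusion artifacts the post describes: when the ray dips behind a surface that is far in front of the true reflected point, the march simply gives up.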
Using real raytracing for reflections avoids all of these issues (at the cost of doing full raytracing, obviously).
Honestly, I think these tech presentations often do more harm than good, because the people who watch them don't actually understand what they are seeing. Probably better to just show shiny stuff and say "look! It's shinier now! Buy our new hardware!"