Unreal Engine 4 - (2018) Realistic Looking Characters

So - is this with or without RTX? Because if this is without, it really goes to show how much that technology is over-hyped and unnecessary.
schmidtbag:

So - is this with or without RTX? Because if this is without, it really goes to show how much that technology is over-hyped and unnecessary.
I'm not sure what you mean? RTX isn't a graphical effect, it's a hardware-accelerated denoiser for raytracing. In this particular video I know the Star Wars scene is raytraced on RTX, but the only benefit RTX provides is a performance one, not a visual one - the visual aspect comes from raytracing itself. RTX just allows a raytraced scene to be completed with a minimal number of rays. The developer has control over the number of rays they want cast along with which specific effects they want raytraced. For example, in that Star Wars demo Epic said they were using an extreme number of rays and the entire scene was raytraced - which is why it required a DGX-1 supercomputer to run in real time. In the upcoming Metro game, only the AO/GI are raytraced and the ray count will probably be significantly lower, impacting the overall accuracy but allowing it to be completed on a single GPU in real time.
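To make the ray-budget idea above concrete, here is a minimal sketch of the kind of per-effect budget a developer might expose. The struct, field names, and the 3x figure are hypothetical illustrations, not the actual UE4 or DXR API.

```cpp
// Hypothetical per-effect ray budget, for illustration only; these names are
// not the real UE4 or DXR API.
#include <cstdio>

struct RayTracingBudget {
    bool reflections = false;        // trace reflections?
    bool ambientOcclusion = true;    // trace AO (as in the upcoming Metro)
    bool globalIllumination = true;  // trace GI
    int raysPerPixel = 1;            // rays per pixel, per traced effect
    bool denoiserAvailable = true;   // e.g. an RTX-style hardware denoiser
};

// A denoiser lets the same visual quality be reached with fewer rays, so the
// per-frame cost drops. The 3x factor is a rough stand-in, not a measurement.
long long raysPerFrame(const RayTracingBudget& b, int width, int height) {
    int tracedEffects = b.reflections + b.ambientOcclusion + b.globalIllumination;
    long long rays = 1LL * width * height * tracedEffects * b.raysPerPixel;
    return b.denoiserAvailable ? rays : rays * 3;
}

int main() {
    RayTracingBudget metroLike;  // AO/GI only, 1 ray per pixel, denoised
    std::printf("AO+GI at 1080p: %lld rays per frame\n",
                raysPerFrame(metroLike, 1920, 1080));
    return 0;
}
```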
10 more years and we'll have truly photo-realistic games.
^ To be fair, we all said that same thing 10 years ago, and yet here we are saying it again. I'm guessing for actual affordable, mainstream, fully real-time "photo-realistic" graphics (and that's focusing solely on traditional 3D gaming on a 2D screen; barring whatever changes VR or yet to be known technology does or doesn't bring), we're probably looking at closer to 20-25 years. Time has the final say though, so until then...
The skin continues to be too perfect and too shiny, so I think there's still a long way to go before we reach photo-realistic graphics...
schmidtbag:

So - is this with or without RTX? Because if this is without, it really goes to show how much that technology is over-hyped and unnecessary.
Imho, it's necessary to get the most performance out of the hardware. Everyone is working together on this goal. AMD will also have their equivalent solution; will you also say that their solution is unnecessary? The cost of the hardware to do most of the demos shown is completely out of the range of consumers. For studios, though, these systems are very cost-effective, especially as more tools can now be used in real time. It means that whole rooms dedicated to rendering can now be reduced not only in size but also in power consumption. Everything from editing to the time it takes to produce films is affected as this technology matures and becomes more efficient. We need to see this kind of progress years before we ever see this tech deployed to consumers. I really don't see what the problem is. You won't be seeing these demos run in real time on your PC anytime soon. The best-case scenario is Metro, which is nowhere near the same quality as those demos, but it'll be a positive first step for consumers to get their hands on and use a cut-down version of this technology.
H83:

The skin continues to be too perfect and too shiny, so I think there's still a long way to go before we reach photo-realistic graphics...
Nah, that's just FXAA removing all the irregularities skin would naturally have, because UE does not have native support for modern AA methods.
Looks good. And still waiting... -_-
Stormyandcold:

Imho, it's necessary to get the most performance out of the hardware.
That's assuming there will actually be a performance improvement. It's not a whole lot different than PhysX: it was more efficient than running particle calculations on a CPU, but you're still adding intensive computations.
Everyone is working together on this goal. AMD will also have their equivalent solution; will you also say that their solution is unnecessary?
If their solution is proprietary or will effectively only be used by them, then yes, I will say it's unnecessary. AMD is not immune to my criticisms. I felt TressFX, for example, to be a stupid idea too, despite it being potentially useful.
The cost of the hardware to do most of the demos shown is completely out of the range of consumers. For studios, though, these systems are very cost-effective, especially as more tools can now be used in real time. It means that whole rooms dedicated to rendering can now be reduced not only in size but also in power consumption. Everything from editing to the time it takes to produce films is affected as this technology matures and becomes more efficient.
In essence, I agree with what you're saying, but as pointed out by others, UE4 isn't quite good enough to replace the slower renderers you speak of.
We need to see this kind of progress years before we ever see this tech deployed to consumers. I really don't see what the problem is. You won't be seeing these demos run in real time on your PC anytime soon. The best-case scenario is Metro, which is nowhere near the same quality as those demos, but it'll be a positive first step for consumers to get their hands on and use a cut-down version of this technology.
My gripe is how this is proprietary technology for a very minimal improvement. If it weren't proprietary, I'd still question the true value of it, but I wouldn't care about its existence.
schmidtbag:

That's assuming there will actually be a performance improvement. It's not a whole lot different than PhysX: it was more efficient than running particle calculations on a CPU, but you're still adding intensive computations. If their solution is proprietary or will effectively only be used by them, then yes, I will say it's unnecessary. AMD is not immune to my criticisms. I felt TressFX, for example, to be a stupid idea too, despite it being potentially useful. In essence, I agree with what you're saying, but as pointed out by others, UE4 isn't quite good enough to replace the slower renderers you speak of. My gripe is how this is proprietary technology for a very minimal improvement. If it weren't proprietary, I'd still question the true value of it, but I wouldn't care about its existence.
I wouldn't call it a minimal improvement; raytracing is essentially the end game for lighting physics. Real-time reflections, caustics, GI, etc - it's all way easier to implement with raytracing than trying to fake it with rasterization techniques, and as long as you sample enough rays per pixel and enough bounces it's essentially 100% accurate to the way the light would behave in real life. The raytracing itself is part of DX12 now. All RTX does is speed up that rendering process by allowing developers to use fewer rays by denoising the image - that's it - and it also works across every OptiX-supported application: Arnold Renderer, VRay, Optis, Pixar's Renderman, SW Visualize - all of the OptiX-supported applications automatically get sped up by RTX. It's a 3x performance increase on Volta for any given quality level. So even if it doesn't make some applications real time, it's a massive performance improvement. We're at a point now where there are going to be diminishing returns on the level of quality and performance. Rasterization can only take you so far, and honestly if you watch the GDC Unreal demos for volumetric lightmaps/fog/capsule shadows/etc, it's pretty clear that developers are hitting a level where the amount of manual work you have to put in to fake all these effects in real time is too costly. There are so many exceptions to all those "workarounds" for performance: certain objects that need to be manually tuned for soft shadows, certain effects like the light capsules that get buggy around other objects, light maps for AO that work in one scene but not another, etc. It's a giant mess that's mostly fixed entirely by raytracing, which also adds accuracy, and now thanks to denoising techniques like RTX it can be done in real time.
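"Enough rays per pixel" is just Monte Carlo integration: every extra ray is another random sample, and the noise in the estimate falls off roughly as one over the square root of the sample count. A tiny self-contained experiment (plain C++, not tied to any real renderer) shows the trend, and why a denoiser that can clean up a low-sample image buys so much performance.

```cpp
// Why ray count matters: per-pixel noise falls off roughly as 1/sqrt(samples).
// Illustration only; stands in for "shade one pixel" in a path tracer.
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    // Integrate f(x) = x^2 over [0,1] (true value 1/3) with N random samples,
    // the same way a path tracer averages random rays for one pixel.
    for (int samples : {1, 4, 16, 64, 256}) {
        double meanAbsError = 0.0;
        const int trials = 2000;  // many "pixels"
        for (int t = 0; t < trials; ++t) {
            double sum = 0.0;
            for (int s = 0; s < samples; ++s) {
                double x = u(rng);
                sum += x * x;
            }
            meanAbsError += std::fabs(sum / samples - 1.0 / 3.0);
        }
        std::printf("%4d rays/pixel -> avg error %.4f\n", samples, meanAbsError / trials);
    }
    // A denoiser lets you stop at the noisy low-sample end of this table and
    // still ship a clean-looking image, which is where the speedup comes from.
    return 0;
}
```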
Denial:

I wouldn't call it a minimal improvement; raytracing is essentially the end game for lighting physics. Real-time reflections, caustics, GI, etc - it's all way easier to implement with raytracing than trying to fake it with rasterization techniques, and as long as you sample enough rays per pixel and enough bounces it's essentially 100% accurate to the way the light would behave in real life. The raytracing itself is part of DX12 now. All RTX does is speed up that rendering process by allowing developers to use fewer rays by denoising the image - that's it - and it also works across every OptiX-supported application: Arnold Renderer, VRay, Optis, Pixar's Renderman, SW Visualize - all of the OptiX-supported applications automatically get sped up by RTX. It's a 3x performance increase on Volta for any given quality level. So even if it doesn't make some applications real time, it's a massive performance improvement.
I'm well aware of how important raytracing is. I have absolutely nothing against the feature in and of itself. But so far, the effects of RTX do not show anything especially compelling that currently available live renderers couldn't already accomplish. I am inclined to believe RTX does in fact have improvements, but I still feel the difference it makes is too minimal to warrant a proprietary technology.
schmidtbag:

I'm well aware of how important raytracing is. I have absolutely nothing against the feature in and of itself. But so far, the effects of RTX do not show anything especially compelling that currently available live renderers couldn't already accomplish. I am inclined to believe RTX does in fact have improvements, but I still feel the difference it makes is too minimal to warrant a proprietary technology.
Idk, I still feel like you don't grasp what RTX is - you say raytracing is important but then you say RTX does not show anything compelling. RTX just accelerates raytracing through hardware, that's it. That's all it does. When AMD added a primitive discard engine to its architecture, no one said "that's proprietary" and "it doesn't show anything compelling that we couldn't already do" - this is no different than that. Unreal, for example, will add DX12 raytracing to its engine in 4.20 - the engine goes "hey dx12, shoot some rays out of this light, make it 50,000", dx12 goes "yo unreal, we're running on Volta RTX bro, we only need 15,000 rays for the same quality", the engine goes "free performance? thanks mate". You can invoke it through DX12/OptiX and soon Vulkan. You don't need to use their GameWorks library; that's just a preconfigured lighting system - DX12 has a standardized raytracing system now and RTX can accelerate that system. It's no different than Nvidia accelerating tessellation in hardware, or geometry culling, or whatever - it's just an intrinsic part of the hardware; no one calls those systems "proprietary", it literally is the hardware. AMD will have its own raytracing acceleration in hardware. There is no way to make these not proprietary, unless Microsoft/Khronos or whoever starts telling Nvidia/AMD how to build their hardware, or these companies just start freely opening up the designs of their hardware.
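The layering described here - one standard way to ask for rays, with whatever acceleration the hardware offers underneath - looks roughly like the sketch below. The class names are invented for illustration and are not the actual D3D12/DXR interfaces.

```cpp
// Sketch of a standard raytracing front end with the backend chosen by what
// the hardware/driver can accelerate. Hypothetical types, not real DXR code.
#include <cstdio>
#include <memory>

struct RaytraceBackend {
    virtual ~RaytraceBackend() = default;
    virtual void dispatchRays(int rayCount) = 0;
};

struct ComputeFallback : RaytraceBackend {      // works on any DX12-class GPU
    void dispatchRays(int rayCount) override {
        std::printf("compute shaders: tracing %d rays\n", rayCount);
    }
};

struct VendorAccelerated : RaytraceBackend {    // e.g. RTX, or AMD's equivalent
    void dispatchRays(int rayCount) override {
        // Hardware denoising means fewer rays are needed for the same quality.
        std::printf("hardware path: tracing %d rays (denoised)\n", rayCount / 3);
    }
};

std::unique_ptr<RaytraceBackend> makeBackend(bool hardwareSupport) {
    if (hardwareSupport) return std::make_unique<VendorAccelerated>();
    return std::make_unique<ComputeFallback>(); // the API still works either way
}

int main() {
    // The engine asks for rays the same way regardless of backend.
    makeBackend(true)->dispatchRays(50000);
    makeBackend(false)->dispatchRays(50000);
    return 0;
}
```

The engine only ever talks to the front end, which is why a title written against DXR does not have to know whether RTX, an AMD equivalent, or the compute fallback is doing the work.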
@Denial Before I point out the potential flaws in some of the things you said, I guess I need something cleared up before I look like [more of?] an idiot: I got the impression that the raytracing system in DX12 depended on RTX, where developers have to go out of their way to intentionally implement RTX. The way you phrase a lot of what you said suggests RTX instead complements DX12, and functions automatically wherever it is available. In other words, you still get the benefits of DX12's raytracing regardless of whether you have RTX or not, but if you do have it, you get a performance boost due to the hardware-specific optimization; RTX is not something devs have to intentionally opt into. Is this a correct assessment? If so, then I have no problem at all with RTX or its proprietary nature (and in turn, I see no flaws in what you said).
schmidtbag:

@Denial Before I point out the potential flaws in some of the things you said, I guess I need something cleared up before I look like [more of?] an idiot: I got the impression that the raytracing system in DX12 depended on RTX, where developers have to go out of their way to intentionally implement RTX. The way you phrase a lot of what you said suggests what is actually happening is RTX complements DX12, and functions automatically wherever it is available. In other words, you still get the benefits of DX12's raytracing regardless of whether you have RTX or not, but if you do have it, you get a performance boost due to the hardware-specific optimization; RTX is not something devs have to intentionally opt into. Is this a correct assessment? If so, then I have no problem at all with RTX or its proprietary nature.
Yes, that is correct. You can utilize DX12's raytracing (DXR) without RTX. DXR will actually work on all current-generation hardware; it's just accelerated by RTX when it's detected, and similarly by AMD's implementation. It also seems like not only can hardware vendors accelerate this, but it's possible for software developers to come up with their own CPU/DirectCompute-based denoising algorithms and accelerate it as well. Here are some relevant sections from AnandTech's breakdown; I bolded the parts I find important:
DirectX Raytracing then is Microsoft laying the groundwork to make this practical by creating an API for ray tracing that works with the company’s existing rasterization APIs. Technically speaking GPUs are already generic enough that today developers could implement a form of ray tracing just through shaders, however doing so would miss out on the opportunity to tap into specialized GPU hardware units to help with the task, not to mention the entire process being non-standard. So both to expose new hardware capabilities and abstract some of the optimization work around this process to GPU vendors, instead this functionality is being implemented through new API commands for DirectX 12.

But like Microsoft’s other DirectX APIs it’s important to note that the company isn’t defining how the hardware should work, only that the hardware needs to support certain features. Past that, it’s up to the individual hardware vendors to create their own backends for executing DXR commands. As a result – and especially as this is so early – everyone from Microsoft to hardware vendors are being intentionally vague about how hardware acceleration is going to work.

At the base level, DXR will have a full fallback layer for working on existing DirectX 12 hardware. As Microsoft’s announcement is aimed at software developers, they’re pitching the fallback layer as a way for developers to get started today on using DXR. It’s not the fastest option, but it lets developers immediately try out the API and begin writing software to take advantage of it while everyone waits for newer hardware to become more prevalent.

However the fallback layer is not limited to just developers – it’s also a catch-all to ensure that all DirectX 12 hardware can support ray tracing – and talking with hardware developers it sounds like some game studios may try to include DXR-driven effects as soon as late this year, if only as an early technical showcase to demonstrate what DXR can do. In the case of hitting the fallback layer, DXR will be executed via DirectCompute compute shaders, which are already supported on all DX12 GPUs. On the whole GPUs are not great at ray tracing, but they’re not half-bad either. As GPUs have become more flexible they’ve become easier to map to ray tracing, and there are already a number of professional solutions that can use GPU farms for ray tracing.

Faster still, of course, is mixing that with optimized hardware paths, and this is where hardware acceleration comes in. Microsoft isn’t saying just what hardware acceleration of DXR will involve, and the high-level nature of the API means that it’s rather easy for hardware vendors to mix hardware and software stages as necessary. This means that it’s up to GPU vendors to provide the execution backends for DXR and to make DXR run as efficiently as possible on their various microarchitectures. When it comes to implementing those backends in turn, there are some parts of the ray tracing process that can be done in fixed-function hardware more efficiently than can be done shaders, and as a result Microsoft is giving GPU vendors the means to accelerate DXR with this hardware in order to further close the performance gap between ray tracing and rasterization.

For today’s reveal, NVIDIA is simultaneously announcing that they will support hardware acceleration of DXR through their new RTX Technology.
RTX in turn combines previously-unannounced Volta architecture ray tracing features with optimized software routines to provide a complete DXR backend, while pre-Volta cards will use the DXR shader-based fallback option. Meanwhile AMD has also announced that they’re collaborating with Microsoft and that they’ll be releasing a driver in the near future that supports DXR acceleration. The tone of AMD’s announcement makes me think that they will have very limited hardware acceleration relative to NVIDIA, but we’ll have to wait and see just what AMD unveils once their drivers are available.
Basically Microsoft knew that the vendors would have to integrate the hardware aspects of this differently, so they intentionally left the acceleration part open - RTX for example taps Volta's Tensor cores to do the denoising because they are significantly faster at matrix operations. AMD, on the other hand, may use its FP16 cores to accelerate its variant. But they both will use DXR as the 'base'. It's also not just for gaming - one of the things I keep seeing people say all over is how Epic's Star Wars demo was using 8 GV100 chips to get 24fps, but the raytracing resolution there is way higher. If you watch the full GDC presentation (I'm going to paraphrase it here and perhaps get some of the numbers wrong), for games they said they have a budget of like 1 ray per pixel per light for real time WITH RTX. Some of the effects used in that Star Wars demo required a resolution of 4 rays per pixel per light just to get the effect working, and the entire scene was traced as opposed to just AO like in the upcoming Metro. Epic is trying to break into pre-production in movies with their engine, and DXR (in combination with the performance benefits of RTX) is accurate/fast enough that they are able to get the quality of big-budget renderers like Arnold, which typically require hours of offline rendering, into a real-time product like Unreal running on a DGX-1.
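A back-of-envelope comparison of the two budgets paraphrased above (about 1 ray per pixel per light for a game versus about 4 for the demo, with far more of the scene traced). The effect and light counts below are assumptions chosen for illustration, so treat this as a scale argument only.

```cpp
// Rough scale comparison for the paraphrased ray budgets. The effect mix and
// light counts are assumptions for illustration; the talk's exact numbers may differ.
#include <cstdio>

int main() {
    const double pixels = 1920.0 * 1080.0;  // ~2.07M pixels per frame

    // Metro-style budget: AO only, ~1 ray per pixel, 60 fps target.
    double gameRays = pixels * 1 * 60;

    // Reflections-demo-style budget: several traced effects (reflections, AO,
    // soft shadows for a few area lights) at ~4 rays per pixel each, 24 fps.
    const int tracedEffects = 5;            // assumed effect/light count
    double demoRays = pixels * tracedEffects * 4 * 24;

    std::printf("game-style : ~%.0f million rays/s\n", gameRays / 1e6);
    std::printf("demo-style : ~%.0f million rays/s\n", demoRays / 1e6);
    // Roughly an order of magnitude apart before counting extra bounces and the
    // demo's higher internal resolution, hence a DGX-class box vs a single GPU.
    return 0;
}
```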
Denial:

Yes, that is correct. You can utilize DX12's raytracing (DXR) without RTX. DXR will actually work on all current-generation hardware; it's just accelerated by RTX when it's detected, and similarly by AMD's implementation. It also seems like not only can hardware vendors accelerate this, but it's possible for software developers to come up with their own CPU/DirectCompute-based denoising algorithms and accelerate it as well. Here are some relevant sections from AnandTech's breakdown; I bolded the parts I find important: Basically Microsoft knew that the vendors would have to integrate the hardware aspects of this differently, so they intentionally left the acceleration part open - RTX for example taps Volta's Tensor cores to do the denoising because they are significantly faster at matrix operations. AMD, on the other hand, may use its FP16 cores to accelerate its variant. But they both will use DXR as the 'base'.
Well, that explains a lot. Thanks for going in-depth about all of this - RTX isn't as stupid as I thought it was. The way Nvidia has been pushing it (along with DXR) made it seem like all of it was their plan, their technology, and exclusive to them. I guess it makes sense - they want people to think that way, but that misled me.
At some point, there's going to be a film-related game released that will boldly declare that it's using the same effects as used in the actual movie.