Battlefield V gets support for Nvidia raytracing technology - demo
In case you missed it yesterday, and as posted in our GeForce RTX announcement news: during the Nvidia event the company showed a new raytracing demo of Battlefield V, demonstrating impressive possibilities of the new technology. Check out the reflections of the flamethrower and the fires on different surfaces, including the wet ground and the shiny car.
Battlefield V Open Beta Next Month - 08/21/2018 08:28 AM
Electronic Arts has announced that the open beta for Battlefield V will start on September 6. From that date everyone can participate in the beta; gamers with subscriptions and those who have placed a...
Official Battlefield 5 Gamescom 2018 Devastation of Rotterdam Trailer - 08/16/2018 04:37 PM
DICE and Electronic Arts have released the official Gamescom 2018 gameplay trailer for Battlefield 5. In this brand new trailer, we get to see new maps and gameplay. From the shattered streets of Rott...
Battlefield V: new trailer and Betas at Gamescom - 08/16/2018 08:43 AM
EA and DICE will be showing a beta version of Battlefield V for the first time at Gamescom next week, producer David Sirland has shared via Twitter. Later today there will be a new trailer we can ...
Battlefield 5 - Closed Alpha #2 begins on Aug 14th - 08/10/2018 08:48 AM
DICE and Electronic Arts have announced that the second closed alpha testing for Battlefield 5 will begin on August 14th. According to the teams, the second Closed Alpha is built directly from the f...
Road to Battlefield 5 - Apocalypse Trailer - 08/03/2018 09:21 AM
With no end in sight, enter a living hell and participate in the most brutal battles of the cataclysmic Great War. Our journey along the road to Battlefield V continues - and there are many giveaways ...
fantaskarsef
Senior Member
Posts: 14623
Joined: 2014-07-21
#5576173 Posted on: 08/21/2018 05:00 PM
That's the thing. Developers are not going to make 2 separate code paths. DX12 was meant to be above that, and there could be different visuals as a result too, which would be very undesirable for any multiplayer game.
Then there is the fact that AMD's "library" would be open source, and nVidia could optimize quite easily if devs used just theirs.
Sadly that story has been going on for years now, exactly like that. If anything, it's Nvidia's money making a closer / earlier adoption of RTX possible, as they spend $ (man-hours) on working together with m$. It only shows how lackluster DX12 is compared to what it was hailed as, and what it really is, YEARS later.
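For illustration only, here is a minimal sketch of what the "2 separate code paths" being discussed would mean in practice at the adapter level, assuming plain DXGI and the PCI vendor IDs for NVIDIA (0x10DE) and AMD (0x1002). The function and path names are hypothetical, not anything from Frostbite or an actual vendor library.

    #include <dxgi1_6.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Hypothetical sketch: pick a ray tracing path based on the PCI vendor ID
    // of the primary adapter. This is exactly the kind of per-vendor branch
    // the post argues most studios will not want to maintain.
    enum class RtPath { VendorLibrary, PlainDXR, None };

    RtPath ChooseRtPath()
    {
        ComPtr<IDXGIFactory1> factory;
        if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
            return RtPath::None;

        ComPtr<IDXGIAdapter1> adapter;
        if (FAILED(factory->EnumAdapters1(0, &adapter)))
            return RtPath::None;

        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);

        if (desc.VendorId == 0x10DE)   // NVIDIA: could route into a vendor library
            return RtPath::VendorLibrary;
        return RtPath::PlainDXR;       // AMD (0x1002) and everyone else: generic DXR path
    }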
Denial
Senior Member
Posts: 14091
Joined: 2004-05-16
#5576183 Posted on: 08/21/2018 05:19 PM
The goal behind DX12, aside from the CPU overhead stuff, was just to give developers lower-level access to the hardware. Picture a machine with 20 layers: in DX11 developers had access to layers 20-10, while 10-1 was a complete mystery to them - that's where AMD/Nvidia engineers reside and write driver code. With DX12, developers now have access to layers 20-5. The drivers still play a role, but developers can see/change much more of what goes on at the very core of the system. The lower you go, the more complicated things get and the more you need to get in there and write things for specific vendors and architectures - work that Nvidia/AMD driver teams have been doing for decades is now in the hands of game developers. But the lower you go with optimizing, the more performance you get out of those systems.
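To make the "layer" point concrete, here is a minimal sketch of the explicit plumbing D3D12 hands to the developer - device, command queue, command list and fence synchronization that a DX11 driver handled internally. Error handling is omitted; this is an illustration of the API surface, not how Frostbite or any particular engine actually does it.

    #include <windows.h>
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // In DX11 the driver managed command submission and synchronization.
    // In DX12 the application creates and drives all of this explicitly.
    void SubmitAndWait()
    {
        ComPtr<ID3D12Device> device;
        D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

        D3D12_COMMAND_QUEUE_DESC queueDesc = {};
        queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        ComPtr<ID3D12CommandQueue> queue;
        device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

        ComPtr<ID3D12CommandAllocator> allocator;
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT, IID_PPV_ARGS(&allocator));

        ComPtr<ID3D12GraphicsCommandList> cmdList;
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocator.Get(), nullptr, IID_PPV_ARGS(&cmdList));
        // ... record draw/dispatch/copy commands here ...
        cmdList->Close();

        ID3D12CommandList* lists[] = { cmdList.Get() };
        queue->ExecuteCommandLists(1, lists);

        // Explicit fence: the app, not the driver, decides when the CPU waits on the GPU.
        ComPtr<ID3D12Fence> fence;
        device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
        queue->Signal(fence.Get(), 1);
        HANDLE evt = CreateEvent(nullptr, FALSE, FALSE, nullptr);
        fence->SetEventOnCompletion(1, evt);
        WaitForSingleObject(evt, INFINITE);
        CloseHandle(evt);
    }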
The thing is, most devs don't care to do that work. They have a budget for their project, and why would they dedicate 7-10 developers to re-architecting low-level code that Nvidia/AMD already handle pretty well? At most you get 10-15% more performance in certain scenarios - pretty cool, but you can get the same by just cutting a few poly counts down with a slider, which doesn't require a dedicated team. The big engine guys - companies like Epic, Unity, and DICE with Frostbite - have a reason for it. They are architecting engines which many games utilize, so they already have those teams in place, with developers who are familiar with that level of development. But they still can't optimize for every scenario that random company A utilizing their engine is going to run into. So even there they aren't going down to the metaphorical "level 5" of optimization.
Most of Nvidia's GameWorks libraries are open source now. The RTX libraries use DXR as the base, so as long as an RTX library is open, AMD shouldn't have a problem accelerating it. I don't think Nvidia has anything to gain by keeping the RTX libraries GeForce-only - I think they are going to rely on the fact that they've spent the last 10 years of R&D on deep learning/self-driving, are super far ahead there, and can use that to make their hardware better at accelerating it. The goal for them should be mass-scale adoption of DXR, which seems to be what they are trying to push.
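Worth noting that DXR support itself is exposed through a standard D3D12 feature query rather than any GeForce-specific entry point. A minimal sketch, assuming the Windows 10 SDK that ships the DXR headers; any vendor's driver that implements DXR reports it the same way:

    #include <d3d12.h>

    // DXR is part of D3D12: hardware from any vendor advertises support
    // through the same feature query, so a DXR-based path is not inherently
    // tied to GeForce.
    bool SupportsDXR(ID3D12Device* device)
    {
        D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
        if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                               &opts5, sizeof(opts5))))
            return false;
        return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
    }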
Stormyandcold
Senior Member
Posts: 5844
Joined: 2003-09-15
#5576279 Posted on: 08/21/2018 08:29 PM
I am talking about implementation. The game either uses DX12 code written by someone in the studio, or uses nVidia's proprietary library optimized for their HW. (With the usual consequences for other companies.)
If the Tomb Raider RTX performance reports are anything to go by, then tbh I couldn't care less about proprietary code. The performance needs to increase by 100%+ to make it even viable, and if that means Nvidia needs to take it in-house, then so be it. Long-term is another story, but seeing as AMD and Intel can't run RT at anywhere near the performance of Nvidia's hardware, it's meaningless to them anyway.
Mundosold
Senior Member
Posts: 243
Joined: 2012-10-04
#5576308 Posted on: 08/21/2018 09:51 PM
I feel so old knowing that chasing the latest GAME-CHANGING GPU tech is always a waste of money. Remember the DX10 hype? Everyone running out and spending big on the first DX10 video cards? By the time DX10 was used to any meaningful degree, those cards were too obsolete to handle DX10 games anyway. You end up paying a huge early-adopter fee and getting nothing out of it. I guarantee that when RT takes off, the first game to make big use of it will barely be playable on a 2080 Ti.