NVIDIA officially announces PhysX 5.0

This demo looks like something made 10 years ago.
Nice to see something that matters is finally getting pushed again 😀.
Backstabak:

That doesn't outright showcase physics, and the image of a game with better visuals is still going to sell better.
For the most part I agree that visuals will trump physics for most people. Marketing a game's physics with still imagery is tough, but we're also lucky enough to have easy access to video content, which can show physics off. I also think back to HL2, which was highly praised in large part for its physics and for using them as a game mechanic rather than just another layer of eye candy. In the end, I think VR will drive the push towards better physics. Visuals may be more desirable for games played on a monitor or TV, but physics will be key to making VR more immersive and engaging. Visuals obviously have a big role to play in that realm as well, but whether we realize it or not, being able to interact with a virtual world in a way that makes objects feel like they have weight and interact with each other will matter more than visual fidelity.
To hell with destructible surfaces. Give me natural boob physics!
Prince Valiant:

Nice to see something that matters is finally getting pushed again 😀.
Well, it is going to fade into obscurity again if Nvidia continues to lock it to their platform. Otherwise, I'd rather go for something like Havok or Bullet. Nvidia could also benefit if they more easily allowed for a GPU dedicated to CUDA/PhysX. I haven't attempted this in a long time in Windows (in Linux it's not that hard, but there aren't many PhysX-enabled games for Linux), but doing so would be an easy way for Nvidia to get rid of some of their old excess Pascal stock. What I think would be REALLY cool, and a hot seller for Nvidia, is an M.2 CUDA card. Some people (like myself) don't care about having M.2 storage or networking, but 4x PCIe lanes are plenty for whatever GPU you could fit on a 22x80mm card. Of course, you could always use an M.2-to-PCIe riser, but that's not quite the same thing as something purpose-built.
It's not locked to Nvidia's platform and never has been.
schmidtbag:

Well, it is going to fade into obscurity again if Nvidia continues to lock it to their platform. Otherwise, I'd rather go for something like Havok or Bullet. Nvidia could also benefit if they more easily allowed for a GPU dedicated to CUDA/PhysX. I haven't attempted this in a long time in Windows (in Linux it's not that hard, but there aren't many PhysX-enabled games for Linux), but doing so would be an easy way for Nvidia to get rid of some of their old excess Pascal stock. What I think would be REALLY cool, and a hot seller for Nvidia, is an M.2 CUDA card. Some people (like myself) don't care about having M.2 storage or networking, but 4x PCIe lanes are plenty for whatever GPU you could fit on a 22x80mm card. Of course, you could always use an M.2-to-PCIe riser, but that's not quite the same thing as something purpose-built.
PhysX is the default physics system in Unreal/Unity - the features here and the SDK are all CPU PhysX libraries. They aren't vendor locked and the source is available (although it's definitely not open the way AMD's is). As for the GPU libraries - yeah, but w/e.. I think the industry realizes that outside of particle sims, GPUs are bad for physics. 90% of games are shader-performance limited, and with 8+ cores becoming more mainstream and FEM-type physics becoming the new "bling", CPU physics is where everyone is aiming again.
The mindshare of Nvidia is huge. We still see people in here, on a g33k website where people usually realize what they're saying, talking about PhysX like it is 2006. PhysX is used in most games today - all the cloth effects, wind, rigid body physics and simple particles - and ALL, I mean ALL, of it runs on the CPU, not the GPU. GPU PhysX is pretty much dead: aside from Metro Exodus, not a single game using it has been released in two years, and nobody even talked about PhysX on Metro, only about ray tracing. I'm shocked when I see somebody asking for a secondary GPU just for PhysX, lol, in 2019.

There are, though, GPU particle effects in a lot of games nowadays, like in the latest DOOM, which interact with the environment and even your weapon. PhysX 5 will be processed by the CPU, and this technique called FEM is already used in BeamNG and also Wreckfest. AMD also shared their FEM libraries through the GPUOpen initiative; Unity and Unreal will surely implement either their own version, AMD's version, or PhysX 5 for developers to use. But it is a CPU thing, and it is highly multithreaded according to AMD, which makes sense since we nowadays have CPUs capable of providing >144 FPS and more cores than games can mostly use effectively. @Denial Exactly
Denial:

As for the GPU libraries - yeah, but w/e.. I think the industry realizes that outside of particle sims, GPUs are bad for physics. 90% of games are shader-performance limited, and with 8+ cores becoming more mainstream and FEM-type physics becoming the new "bling", CPU physics is where everyone is aiming again.
https://www.dsogaming.com/news/nvidia-open-sources-physx-can-now-support-gpu-accelerated-effects-on-amds-graphics-cards/ What about the GPU libraries? AMD has had the opportunity to develop a CL<>PhysX translation layer since 3.x.
Astyanax:

It's not locked to Nvidia's platform and never has been.
Yes it was. The Batman Arkham series, Mafia 2, Mirror's Edge, and a few others I can't remember from that period were all about pushing Nvidia PhysX.
jbscotchman:

Yes it was. The Batman Arkham series, Mafia 2, Mirror's Edge, and a few others I can't remember from that period were all about pushing Nvidia PhysX.
Offers of support were made to AMD and they rejected them. So no, it never has been "locked to" Nvidia; AMD just never made the effort to adopt it.
Khronikos:

I think you partially underestimate how powerful a 4-core/8-thread CPU can be when overclocked above 4GHz. They will still be fine for one more gen outside of the top games doing crazy stuff. We already see GPU bottlenecks in games at 4K lol. You think the mainstream, which is 50%-plus on 4 cores, is somehow going to change in 3-5 years lol? I think you overestimate how much money people have. Of course certain games will push things, and of course people will have to turn down settings a bit. But games will still run well on these CPUs. GPUs will be pushed to the max to get native 4K in the best games. We will be bottlenecked there for some time in the mainstream segment.
Sorry, I have to strongly disagree. You're making an absurd point. Someone who goes for a 4K screen and an adequate GPU will have no problem adding an extra $200 for a proper CPU. Want to make a point? Base it on something remotely real. And yes, I have seen 4C/8T CPUs at 4.8GHz stutter. They get fully utilized, and all the background processes for the OS and drivers get screwed...
Astyanax:

It's not locked to Nvidia's platform and never has been.
Yeah, it was always the fault of the dumb game developers who could not properly thread the x87 code in the software code path. So they gave up eventually. But from a practical standpoint, users with Nvidia GPUs got extra features.
Astyanax:

https://www.dsogaming.com/news/nvidia-open-sources-physx-can-now-support-gpu-accelerated-effects-on-amds-graphics-cards/ what about the gpu libraries? AMD had the opportunity to develop a translation layer for cl<>physx since 3.x
They can't on consoles, and no one is going to take the time to rewrite GPU PhysX for AMD when a huge % of the player base wouldn't be able to take advantage of it. Honestly, no one is going to take the time regardless, because GPU PhysX is pretty useless - hence why it's basically in nothing.
There have been reviews published (Hardware Unboxed, Gamers Nexus) showing that some games are already getting CPU-bound at 6 cores at all resolutions (including 4K), and we are just getting started. 4C/8T is on the cusp of being inadequate, because 6C/12T is now the standard for entry-level gaming PCs, and that's the target AAA games will be aiming at. Clock speeds are important, but they cannot replace the power of having multiple threads share the load; after all, a 5GHz CPU is only 25% faster than a 4GHz one (all other things being equal), but 8 cores have 100% more compute power than 4 cores when fully utilized - which a lot of games today are capable of doing. The days of developers needing to put in thousands of man-hours to support more cores are gone, since the game engines themselves now give them the tools they need, and the same goes for physics simulation. The support for unlimited cores is already here; developers just need to organize their code to work with that part of the engine.

As for the PhysX shown in the demo, the cool thing there is the realistic cloth simulation. The Unreal demos are great-looking, but they are solid-object manipulation, which, along with flowing liquids, is among the least compute-heavy things you can do with a physics engine. Cloth and hair physics are REALLY taxing on a CPU/GPU - the objects keep interacting and shift constantly based on movement and the environment. This is why it's only done in the most basic way in games, without accounting for collisions. Put two or more physics-based cloth sims on top of one another so they react to each other and things get super heavy compute-wise - it will absolutely crush any CPU available right now. It's up there with raytracing as far as compute goes.
If Nvidia has managed to get Tensor cores to accelerate that process to realtime, then the difference in games will be far more apparent than just blowing up a wall or reflecting the sky into a puddle. Think about how armor looks on characters in games - the chest pieces deform as the body moves, rather than being static and moving around the body underneath, because the physics for that movement is far too compute-heavy to run in real time. Think how a human body is soft and can bend when another object interacts with it - imagine being able to render in real time the way the skin on your arm moves when you grab it with your hand, how the waistband of your pants pushes into your belly, or how your shirt rests over your pants and moves in relation to them as you move. This stuff takes minutes per frame at the simplest scale with the CPUs and GPUs we have today, so game devs have to fudge things, and those shortcuts create an uncanny-valley effect. If they have gotten even the most basic versions of that stuff to real-time rendering, it will be the biggest improvement to games we have seen in a long time.
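To see why cloth is so much heavier than rigid bodies, the cost profile described above - per-particle integration every frame plus repeated constraint passes over every spring, multiplying again for every interacting cloth layer - can be sketched with a toy mass-spring cloth step. This is only an illustrative sketch in Python, not PhysX's actual solver; every name in it is invented for the example.

```python
# Toy mass-spring cloth step (illustration only; real engines like PhysX
# use far more sophisticated solvers). Cost grows with vertex count times
# constraint-iteration count - and again for every interacting cloth layer.
import math

def make_cloth(w, h, spacing=1.0):
    """Grid of particles plus structural (horizontal/vertical) springs."""
    pos = [(x * spacing, -y * spacing) for y in range(h) for x in range(w)]
    prev = list(pos)  # previous positions, used for Verlet integration
    springs = []
    for y in range(h):
        for x in range(w):
            i = y * w + x
            if x + 1 < w:
                springs.append((i, i + 1, spacing))  # horizontal spring
            if y + 1 < h:
                springs.append((i, i + w, spacing))  # vertical spring
    return pos, prev, springs

def step(pos, prev, springs, dt=0.016, gravity=-9.8, iterations=8):
    # 1) Verlet integration: every particle, every frame.
    for i in range(len(pos)):
        x, y = pos[i]
        px, py = prev[i]
        prev[i] = (x, y)
        pos[i] = (2 * x - px, 2 * y - py + gravity * dt * dt)
    # 2) Iterative constraint relaxation: each pass visits every spring,
    #    and each pass depends on the previous one's results.
    for _ in range(iterations):
        for a, b, rest in springs:
            ax, ay = pos[a]
            bx, by = pos[b]
            dx, dy = bx - ax, by - ay
            d = math.hypot(dx, dy) or 1e-9
            corr = 0.5 * (d - rest) / d  # split the error between both ends
            pos[a] = (ax + dx * corr, ay + dy * corr)
            pos[b] = (bx - dx * corr, by - dy * corr)

pos, prev, springs = make_cloth(16, 16)  # 256 particles, 480 springs
for _ in range(10):
    step(pos, prev, springs)  # unpinned cloth simply falls under gravity
```

Even this tiny 16x16 patch does 256 integrations plus 8 x 480 constraint updates per frame; scale the grid up, add self-collision, and add a second interacting cloth layer, and the work explodes - which is the point being made above.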
Denial:

PhysX is the default physics system in Unreal/Unity - the features here and the SDK are all CPU PhysX libraries. They aren't vendor locked and the source is available (although it's definitely not open the way AMD's is).
Keep in mind I'm referring to hardware-accelerated PhysX, which is typically what the particle effects depend on.
As for the GPU libraries - yeah, but w/e.. I think the industry realizes that outside of particle sims, GPUs are bad for physics. 90% of games are shader-performance limited, and with 8+ cores becoming more mainstream and FEM-type physics becoming the new "bling", CPU physics is where everyone is aiming again.
GPUs are great for physics; the problem is they barely have the resources to render a scene as-is. Add complex calculations to an already overloaded pipeline and it's bound to slow things down.
schmidtbag:

Keep in mind I'm referring to hardware-accelerated PhysX, which is typically what the particle effects depend on. GPUs are great for physics; the problem is they barely have the resources to render a scene as-is. Add complex calculations to an already overloaded pipeline and it's bound to slow things down.
That's what RTX is for - separating specialized functions away from the general compute cores. For example, the Tensor cores in an RTX GPU sit idle when playing a game. If Nvidia has optimized PhysX to use them instead of the standard cores, it taps into extra performance without giving up anything else.
schmidtbag:

Keep in mind I'm referring to hardware-accelerated PhysX, which is typically what the particle effects depend on. GPUs are great for physics; the problem is they barely have the resources to render a scene as-is. Add complex calculations to an already overloaded pipeline and it's bound to slow things down.
GPUs are great for certain physics calculations on a certain number of objects. The FEM stuff that AMD is doing with FEMFX and Nvidia with 5.0 is not great on GPUs -- the AMD presentation goes into detail about this, and AMD isn't accelerating it all on GPUs. But even before FEM, any kind of mesh deformation was always significantly faster on CPUs than on GPUs due to the nature of the simulation itself. Particles? Sure, GPU them away (most games are implementing their own GPU physics systems specifically for particles - see Warframe, for example), but collision, soft body, etc. are all mostly faster on CPUs unless you're simulating tons of objects (which generally isn't the case with these examples) - and even when it is, modern CPUs with AVX2/512 have enough performance to push through most of it instead of stealing shader power from the GPU. I don't think there is much of a need to push physics to GPUs anymore, which is why it's all but abandoned.
illrigger:

That's what RTX is for - separating specialized functions away from the general compute cores. For example, the Tensor cores in an RTX GPU sit idle when playing a game. If Nvidia has optimized PhysX to use them instead of the standard cores, it taps into extra performance without giving up anything else.
Physics calcs typically require a high level of accuracy - something Tensor cores don't necessarily offer. Also, Tensor cores do cut into the card's total TDP: when they are idle the TDP of the card is 250W, and when they are active the card is still 250W. So utilizing them definitely pulls some degree of performance from the shaders. You can easily see this by running a PyTorch workload alongside a basic GPU benchmark and watching the performance of both suffer when either is active.
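The split described above - particles being a great GPU fit while constraint-style physics favors the CPU - largely comes down to data dependencies, and can be sketched in a few lines. This is a toy illustration, not any engine's real code, and both function names are invented for the example: particle integration touches each particle independently (trivially parallel, so it maps well onto thousands of GPU threads), while a Gauss-Seidel-style constraint pass feeds each correction into the next one in the same pass, which is inherently sequential.

```python
# Toy contrast between the two workloads discussed above (invented names).

def integrate_particles(particles, dt=0.016, gravity=-9.8):
    """Each particle updates from its own state only - no cross-particle
    dependency, so every update could run on its own GPU thread."""
    return [(x + vx * dt, y + vy * dt, vx, vy + gravity * dt)
            for (x, y, vx, vy) in particles]

def solve_chain(positions, rest=1.0, iterations=20):
    """1D chain of distance constraints relaxed Gauss-Seidel style:
    constraint i reads the position constraint i-1 just wrote, so each
    pass is a sequential sweep - a poor fit for wide GPU parallelism."""
    pos = list(positions)
    for _ in range(iterations):
        for i in range(len(pos) - 1):
            d = pos[i + 1] - pos[i]
            corr = 0.5 * (d - rest)  # split the length error between ends
            pos[i] += corr           # this write is read immediately by
            pos[i + 1] -= corr       # the next constraint in the pass
    return pos

parts = integrate_particles([(0.0, 10.0, 1.0, 0.0)] * 4)
chain = solve_chain([0.0, 0.0, 0.0, 0.0])  # collapsed chain relaxes apart
```

After the sweeps, the collapsed chain settles toward evenly spaced points (roughly -1.5, -0.5, 0.5, 1.5), but only because each correction built on the previous one - exactly the dependency that makes this style of solver happier on a fast CPU core than spread across a GPU.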
mbk1969:

To hell with destructible surfaces. Give me natural boob physics!
[youtube=BTonqTaqKEs]
Denial:

GPUs are great for certain physics calculations on a certain number of objects. The FEM stuff that AMD is doing with FEMFX and Nvidia with 5.0 is not great on GPUs -- the AMD presentation goes into detail about this, and AMD isn't accelerating it all on GPUs. But even before FEM, any kind of mesh deformation was always significantly faster on CPUs than on GPUs due to the nature of the simulation itself. Particles? Sure, GPU them away (most games are implementing their own GPU physics systems specifically for particles - see Warframe, for example), but collision, soft body, etc. are all mostly faster on CPUs unless you're simulating tons of objects (which generally isn't the case with these examples) - and even when it is, modern CPUs with AVX2/512 have enough performance to push through most of it instead of stealing shader power from the GPU. I don't think there is much of a need to push physics to GPUs anymore, which is why it's all but abandoned. Physics calcs typically require a high level of accuracy - something Tensor cores don't necessarily offer. Also, Tensor cores do cut into the card's total TDP: when they are idle the TDP of the card is 250W, and when they are active the card is still 250W. So utilizing them definitely pulls some degree of performance from the shaders. You can easily see this by running a PyTorch workload alongside a basic GPU benchmark and watching the performance of both suffer when either is active.
Context switching was the largest penalty with PhysX on the GPU. While this has improved somewhat, it is still a major performance issue with GPU PhysX. If they could leverage Tensor cores for PhysX, that might ease the penalty.