New UL Raytracing Benchmark Will Not Be Time Spy

It's kinda funny to me, as it's like splitting things: ray tracing gets its own part/path rather than being global like tessellation, shaders, multi-threading, etc. Well, I guess that's because cards without ray tracing cores perform so badly compared to cards that have dedicated ray tracing hardware (RTX). Until now ray tracing has always been an expensive thing in CG, till Nvidia brought it along with Turing. I guess for some time ray tracing will be "exclusive"; then it depends on how the market goes. If it becomes a common thing, then it will be considered a global factor.
I am all for something new in the graphics benches; another bench can't hurt. I think RT will be common in a few years, and till someone brings something better, it will catch on.
I wish Futuremark would hire some talent. More than a decade in and their graphics still look terrible and unoptimised.
slyphnier:

It's kinda funny to me, as it's like splitting things: ray tracing gets its own part/path rather than being global like tessellation, shaders, multi-threading, etc. Well, I guess that's because cards without ray tracing cores perform so badly compared to cards that have dedicated ray tracing hardware (RTX). Until now ray tracing has always been an expensive thing in CG, till Nvidia brought it along with Turing. I guess for some time ray tracing will be "exclusive"; then it depends on how the market goes. If it becomes a common thing, then it will be considered a global factor.
Some parts of the scene need to be packed into a BVH, and that is different from how current engines work. It is really hard to compare ray tracing in 3D modeling software with DirectX Raytracing or RTX... First, you don't render complete frames with DXR and RTX, only shadows and reflections; it doesn't cover all light sources the way ray tracing in 3D modeling does (hello, architectural interior rendering)... Then, there's not one way of doing ray tracing; ray tracing is a generic technical term. Each engine uses a different algorithm (path tracing, bidirectional), different samplers (Metropolis, Sobol (LuxCoreRender), etc.) and a different light strategy, etc. It is already really hard to compare two render engines in CG software, as many things differ (Cycles, LuxCore, V-Ray, etc.). That said, I will wait to see how the RT cores work and whether they really speed up renders in the 3D software I use (Blender, Max, Maya, Substance, etc.)... Need to see the compatibility of those engines with OptiX...
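To illustrate the hybrid point above: in DXR-style rendering, primary visibility still comes from the rasterizer, and rays are traced only for specific effects such as shadows. Below is a minimal CPU sketch of a single per-pixel shadow-ray query; the Vec3 type and intersect_sphere helper are hypothetical illustrations for the sketch, not any real engine's API.

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical minimal types, for illustration only.
struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Ray/sphere intersection: true if the ray (origin o, direction d, unit length)
// hits a sphere at center c with radius r before reaching distance max_t.
static bool intersect_sphere(Vec3 o, Vec3 d, Vec3 c, float r, float max_t) {
    Vec3 oc = sub(o, c);
    float b = dot(oc, d);
    float det = b * b - (dot(oc, oc) - r * r);
    if (det < 0.0f) return false;
    float t = -b - std::sqrt(det);
    return t > 0.0f && t < max_t;
}

int main() {
    // Hybrid-style shadow query: the surface point would normally come from
    // the rasterizer's G-buffer; here it is hard-coded for the sketch.
    Vec3 surface_point = {0.0f, 0.0f, 0.0f};
    Vec3 light_pos     = {0.0f, 10.0f, 0.0f};
    Vec3 occluder      = {0.0f, 5.0f, 0.0f};  // sphere between point and light

    Vec3 to_light = sub(light_pos, surface_point);
    float dist = std::sqrt(dot(to_light, to_light));
    Vec3 dir = {to_light.x / dist, to_light.y / dist, to_light.z / dist};

    // One shadow ray per pixel: any hit before the light means "in shadow".
    bool shadowed = intersect_sphere(surface_point, dir, occluder, 1.0f, dist);
    std::printf("pixel is %s\n", shadowed ? "in shadow" : "lit");
    return 0;
}
```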
https://www.imgtec.com/blog/gdc-2016-ray-tracing-graphics-mobile/?cn-reloaded=1 That was a rather interesting read after watching the video below. So is the above new benchmark plain marketing for their next benchmark add-on/standalone package? Is adding ray tracing really that complex? The article mentioned OpenGL ES extensions/Vulkan in Unity back in 2016...
GDC 2016 is the perfect opportunity to preview a number of OpenGL ES extensions we’ve developed for PowerVR Wizard GPUs; these extensions enable support for real-time ray tracing on the Wizard architecture and allow developers to use the hybrid rendering techniques described in this article inside a customized version of the Unity 5 game engine
[youtube=EGBbw2DLBEA]
ingeon:

https://www.imgtec.com/blog/gdc-2016-ray-tracing-graphics-mobile/?cn-reloaded=1 That was a rather interesting read after watching the video below. So is the above new benchmark plain marketing for their next benchmark add-on/standalone package? Is adding ray tracing really that complex? The article mentioned OpenGL ES extensions/Vulkan in Unity back in 2016... [youtube=EGBbw2DLBEA]
Yeah, I remember all their demos. They made GPUs directly capable of doing it, and they have been making low-power GPUs for quite a few years. Adding ray tracing via a hack into the current DX/Vulkan implementation is not that hard if you know how, but adding it as a standard part of DX12 took some effort. Adding ray tracing units to a GPU can be done much more cheaply than Nvidia did it, both in transistor cost (paid by Nvidia and its clients) and in power efficiency (paid by clients). Nvidia just threw tech they already had at the problem. I always hope that the old dogs who ran from the PC market may return one day.
As long as people support and pay Nvidia for their exclusive implementations, things are not going to change any time soon. We again have to wait for AMD to come around and introduce an open ray tracing method that works on any hardware. Till then, we pay a premium to Nvidia. To be honest, I am glad Nvidia is doing it. Someone has to start pushing new tech. And preordering RTX hardware may seem stupid, but it does invest in further development, just as people bought early Tesla cars. So when Jensen Huang mentioned talking to his employees in his Nvidia ray tracing presentation: "C'mon guys, we have to make things look like things", it kinda made sense. With benchmarking software, this is obviously limited to Nvidia, so we'll rather see specific benchmarks aimed at Nvidia GPUs. Were there benchmarks for PhysX back in the old days?
sverek:

With benchmarking software, this is obviously limited to Nvidia, so we'll rather see specific benchmarks aimed at Nvidia GPUs. Were there benchmarks for PhysX back in the old days?
Nope, wrong, from the bottom up. The benchmark here will be DX12 and will run on all hardware, depending on the driver being compatible with the required DX12 feature. The same goes for PhysX benchmarks like FluidMark: it benchmarked the technology on all supported hardware, which went beyond Nvidia's. That's why you can run it on your CPU even today. A benchmark which could run only on those three GPUs from the same architecture should not be called a benchmark. It should be called a showcase.
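For reference, that "required DX12 feature" check is a standard capability query: before running a DXR test, an application asks the driver which ray tracing tier it exposes. A minimal sketch, assuming a Windows 10 SDK with the DXR headers; error handling is trimmed for brevity:

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

#pragma comment(lib, "d3d12.lib")

int main() {
    using Microsoft::WRL::ComPtr;

    // Create a device on the default adapter.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::puts("no D3D12 device available");
        return 1;
    }

    // Query the DXR capability tier exposed by the driver.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &opts5, sizeof(opts5))) &&
        opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0) {
        std::puts("DXR supported: a ray tracing benchmark can run here");
    } else {
        std::puts("DXR not supported on this device/driver");
    }
    return 0;
}
```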
But can ray tracing be performed on a CPU, as PhysX can, or on a non-RTX GPU in the first place?
sverek:

But can ray tracing be performed on a CPU, as PhysX can, or on a non-RTX GPU in the first place?
Yes; you need to hold the entire scene and all the textures that are in graphics memory in RAM too for it to be possible. Nvidia is not really doing that many samples; those added units do not have much grunt, they do the basic work and then it gets cleaned up. The same would apply to a CPU, except you would need a 16C/32T CPU or more to have spare cycles left for all the other important stuff, plus all the memory bandwidth used and the latency requirements. It would be doable, but not very fast anyway, unless there was very optimized code which always had the required data blocks in the CPU's caches in advance. The thing is that while the information delivered by ray tracing is not that large per frame, it requires quite some work and data movement.

Theoretically, if the 2080 (Ti) were just a regular GTX GPU with NVLink, then Nvidia could have made a secondary accelerator with 8GB of VRAM to fit the required data, optimized for quick access and high bandwidth for all those new ray tracing tasks. The final information used to enhance each frame could then be delivered to the GPU, and there would be no additional performance impact, as each card could run in parallel. A ray tracing card would be a funny thing, because instead of "rendering" at a certain resolution, it would be putting pixels into vector space (virtually unlimited resolution), and the moment the driver says "stop, filter the image out, resize to this, and send it to the GPU", it would quickly finish.
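To put rough numbers on "doable, but not very fast": a back-of-the-envelope sketch, where the rays-per-pixel count and the per-core ray throughput are illustrative assumptions, not measurements of any real CPU.

```cpp
#include <cstdio>

int main() {
    // Illustrative assumptions, not measurements.
    const double width  = 1920.0;
    const double height = 1080.0;
    const double rays_per_pixel = 2.0;  // e.g. one shadow + one reflection ray
    const double rays_per_frame = width * height * rays_per_pixel;

    // Assumed CPU throughput: ~10 million BVH-traced rays/s per core is an
    // optimistic ballpark for well-optimized code on a modern core.
    const double rays_per_sec_per_core = 10e6;
    const double cores = 16.0;

    double frames_per_sec = (rays_per_sec_per_core * cores) / rays_per_frame;
    std::printf("rays per frame: %.1f million\n", rays_per_frame / 1e6);
    // Roughly 39 fps for the ray tracing alone, with all 16 cores saturated
    // and nothing left over for game logic, physics, audio, etc.
    std::printf("estimated CPU-only rate: ~%.0f fps\n", frames_per_sec);
    return 0;
}
```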
I remember some people had a low-end Nvidia GPU in their PC dedicated to performing PhysX calculations. Is it something similar to that?
sverek:

I remember some people had a low-end Nvidia GPU in their PC dedicated to performing PhysX calculations. Is it something similar to that?
It would be; the amount of data that needs to be transferred between a dedicated PhysX card and the actual game engine is small. Similarly, if you have all the data cloned on a dedicated ray tracing card, then the actual per-frame data that has to be transferred is small again.
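As a rough illustration of how small that per-frame traffic could be, assuming the hypothetical ray tracing card returns a 1080p lighting/shadow buffer at 8 bytes per pixel (e.g. RGBA16F) at 60 fps; the format and sizes are assumptions for the sketch, not any real product's numbers.

```cpp
#include <cstdio>

int main() {
    // Assumed result buffer: 1080p, 8 bytes per pixel, returned 60x per second.
    const double bytes_per_pixel = 8.0;
    const double pixels = 1920.0 * 1080.0;
    const double fps = 60.0;

    double mb_per_frame = pixels * bytes_per_pixel / 1e6;
    double gb_per_sec   = pixels * bytes_per_pixel * fps / 1e9;

    // PCIe 3.0 x16 offers roughly 15.75 GB/s, so ~1 GB/s of result traffic
    // is a small fraction of the bus, as long as the scene itself is cloned
    // once on the dedicated card rather than streamed every frame.
    std::printf("per-frame result: %.1f MB, bus traffic: ~%.2f GB/s\n",
                mb_per_frame, gb_per_sec);
    return 0;
}
```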