New UL Raytracing Benchmark Will Not Be Time Spy
There has been some confusion since a day or so ago suggesting that UL (previously Futuremark) would be releasing an update to Time Spy with raytracing support. That information appears to be incorrect; it now seems it will be a new, separate benchmark, and we just received a reply from the designers, UL.
Multiple media outlets have been confirming or denying the Time Spy RT-optimized test, so to be sure I contacted UL to clear up some of the confusion. As it turns out, UL is building a completely new raytracing benchmark (thus not Time Spy or an update to it), and hopes to release it in the same timeframe as NVIDIA's RTX launch.
Below is the reply we received back from UL:
We are designing a new test from the ground up to use Microsoft DirectX Raytracing, which will be added to the 3DMark app as an update.
Please note that our upcoming benchmark is not a Time Spy test. Changing Time Spy in such a significant way would invalidate comparisons with previous scores, which isn’t something we want to do. The new test will produce its own benchmarking scores that will not be comparable with other tests like Time Spy and Fire Strike benchmark tests.
Unfortunately, some media misunderstood and reported the upcoming test as a Time Spy test. We asked them to update the story, but unfortunately, they have not done so.
We will keep you informed of our latest developments. And if there is anything else I can help you with, please let me know.
Kind regards,
UL
Senior Member
Posts: 6074
Joined: 2011-01-02
As long as people support and pay Nvidia for their exclusive implementations, things are not going to change any time soon.
We again have to wait for AMD to come around and introduce an open ray tracing method that works on any hardware.
Till then, we pay a premium to Nvidia.
To be honest, I am glad Nvidia is doing it. Someone has to start pushing new tech. And preordering RTX hardware may seem stupid, but it does invest in further development.
Just as people bought early Tesla cars.
So when Jensen Huang, in his Nvidia ray tracing presentation, mentioned telling his employees "C'mon guys, we have to make things look like things", it kinda made sense.
With benchmarking software, this is obviously limited to Nvidia, so we would rather see specific benchmarks aimed at Nvidia GPUs.
Were there benchmarks for PhysX back in the old days?
Senior Member
Posts: 11446
Joined: 2012-07-20
With benchmarking software, this is obviously limited to Nvidia, so we would rather see specific benchmarks aimed at Nvidia GPUs.
Were there benchmarks for PhysX back in the old days?
Nope, wrong, from the bottom up. The benchmark here will be DX12 and will run on all hardware, depending on the driver being compatible with the required DX12 feature.
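To make that concrete (a minimal sketch in C++ using the public D3D12 API, not anything from UL's code), whether a driver exposes DirectX Raytracing is just a feature query, so any vendor's hardware can pass it once its driver supports the feature:

```cpp
// Minimal sketch: ask an existing D3D12 device whether the driver exposes DXR.
// Error handling trimmed; 'device' is assumed to be a valid ID3D12Device*.
#include <d3d12.h>

bool SupportsDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;

    // TIER_1_0 or better means DirectX Raytracing is available on this
    // device/driver combination, regardless of the GPU vendor.
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```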
The same goes for PhysX benchmarks like FluidMark: it benchmarked the technology on all supported hardware, which went beyond nVidia's. That's why you can still run it on your CPU even today.
A benchmark which could run only on those 3 GPUs from the same architecture should not be called a benchmark. It should be called a showcase.
Senior Member
Posts: 6074
Joined: 2011-01-02
But can ray tracing be performed on a CPU, as PhysX can, or on a non-RTX GPU in the first place?
Senior Member
Posts: 11446
Joined: 2012-07-20
Yes, but for it to be possible you need to hold the entire scene and all the textures that sit in graphics memory in system RAM as well.
nVidia is not really doing that many samples; those added units do not have much grunt, they do the basic work and then clean it up (denoise). The same would apply to a CPU, except you would need a 16C/32T CPU or more to have spare cycles left for all the other important stuff, plus all the memory bandwidth used and the latency requirements.
It would be doable, but not very fast anyway, unless there was very optimized code which always had the required data blocks in the CPU's caches in advance.
The thing is that while the information delivered by raytracing is not that large per frame, it requires quite some work and data movement.
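Just to illustrate that the math itself is nothing a CPU cannot do (a toy sketch, not how DXR or any shipping renderer is written), a single ray/sphere intersection is only a handful of floating-point operations; the real cost is doing millions of them per frame while dragging the scene data through the caches:

```cpp
// Toy CPU ray/sphere test: solve |o + t*d - c|^2 = r^2 for the hit distance t.
// Illustrative only; a real tracer adds a BVH, shading, denoising, etc.
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Returns the nearest positive hit distance, or -1.0f on a miss.
// Assumes 'dir' is normalized.
float intersectSphere(Vec3 origin, Vec3 dir, Vec3 center, float radius)
{
    Vec3  oc   = sub(origin, center);
    float b    = dot(oc, dir);
    float c    = dot(oc, oc) - radius * radius;
    float disc = b * b - c;
    if (disc < 0.0f) return -1.0f;        // ray misses the sphere
    float t = -b - std::sqrt(disc);       // nearer of the two roots
    return (t > 0.0f) ? t : -1.0f;
}
```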
Theoretically, if the RTX 2080 (Ti) was just a regular GPU with NVLink, then nVidia could have made a secondary accelerator with 8GB of VRAM to fit the required data, and optimized it for quick access and high bandwidth for all those new raytracing tasks.
The final information used to enhance each frame could then be delivered to the GPU, and there would be no additional performance impact since both cards could run in parallel.
A raytracing card would be a funny thing, because instead of "rendering" at a certain resolution, it would be putting samples into vector space (virtually unlimited resolution). And the moment the driver says "Stop, filter the image out, resize to this, and send it to the GPU," it would quickly finish.
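A rough way to picture that last idea (purely hypothetical, not anything nVidia has announced): the card keeps accumulating samples at arbitrary sub-pixel positions, and only when the driver asks for a frame are they binned and averaged down to whatever resolution the GPU wants:

```cpp
// Hypothetical sketch: accumulate raytraced samples in continuous screen space,
// then resolve them to a pixel grid of any size on demand.
#include <vector>

struct Sample { float x, y;        // continuous position in [0,1) x [0,1)
                float r, g, b; };  // radiance carried by this sample

struct Pixel  { float r = 0, g = 0, b = 0; int count = 0; };

std::vector<Pixel> resolve(const std::vector<Sample>& samples, int width, int height)
{
    std::vector<Pixel> image(width * height);
    for (const Sample& s : samples) {
        int px = static_cast<int>(s.x * width);
        int py = static_cast<int>(s.y * height);
        Pixel& p = image[py * width + px];
        p.r += s.r; p.g += s.g; p.b += s.b; ++p.count;  // simple box filter
    }
    for (Pixel& p : image)                               // average each bin
        if (p.count) { p.r /= p.count; p.g /= p.count; p.b /= p.count; }
    return image;
}
```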
Senior Member
Posts: 11446
Joined: 2012-07-20
https://www.imgtec.com/blog/gdc-2016-ray-tracing-graphics-mobile/?cn-reloaded=1
That was a rather interesting read after watching the below video.
So is the above new benchmark plain marketing for their next benchmark addon/standalone package? Is adding ray tracing really that complex?
The article mentioned OpenGL ES extensions/Vulkan in Unity back in 2016...
Yeah, I remember all their demos. They made GPUs directly capable of doing it. And they have been making low-power GPUs for quite a few years.
Adding raytracing via a hack into the current DX/Vulkan implementation is not that hard if you know how. But adding it as a standard part of DX12 took some effort.
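For what it's worth (a hedged sketch; VK_NV_ray_tracing is nVidia's vendor extension from that era, not core Vulkan), this is roughly how such non-standard paths surface: the driver simply advertises an extension string that an application can probe for before the API standardizes anything:

```cpp
// Sketch: check whether a Vulkan physical device advertises the vendor
// raytracing extension (VK_NV_ray_tracing, circa 2018).
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

bool hasVendorRaytracing(VkPhysicalDevice gpu)
{
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> exts(count);
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, exts.data());

    for (const VkExtensionProperties& e : exts)
        if (std::strcmp(e.extensionName, "VK_NV_ray_tracing") == 0)
            return true;
    return false;
}
```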
Adding raytracing units to a GPU can be done in a much cheaper way than nV did it, both in the transistor cost paid by nV and its clients, and in the power efficiency paid by clients.
nV just threw tech they already had at it.
I always hope that the old dogs who ran from the PC market may return one day.