3DMark Time Spy Raytracing Benchmark Update To Arrive Next Month

This is obviously an Nvidia-sponsored benchmark. All it's meant to do is show off the RTX cards' capabilities vs other non-RT cards.
vbetts:

Just to note, this is running on DXR, not RTX. RTX is the library Nvidia uses for Turing's RT cores to communicate with DXR; without that support it won't use them. The only requirement for DXR is DX12-level hardware.
RTX GameWorks libraries are just prepackaged DXR effects for easy integration into games. RTX accelerates DXR regardless of whether the integration is through a GameWorks library or a developer's own DXR implementation. As soon as the game invokes a DXR call, Nvidia's driver automagically submits that work to the RT cores (and the Tensor cores for denoising).
alanm:

This is obviously an Nvidia-sponsored benchmark. All it's meant to do is show off the RTX cards' capabilities vs other non-RT cards.
It's meant to benchmark ray tracing, which is now part of DX12/Vulkan. Granted, it aligns well with Nvidia's release, but Microsoft's DXR was announced six months ago. AMD can accelerate ray tracing on its hardware as well - especially Vega, which can utilize RPM (INT8). Nvidia just has an advantage because Turing supports INT4 and Nvidia has the denoising tech, which lets it reach similar image quality with fewer rays.
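To illustrate the "similar quality with fewer rays" point: Monte Carlo ray tracing error shrinks roughly as 1/sqrt(N) with the number of rays per pixel, so a denoiser that filters a cheap 1-ray image can approach the quality of a more expensive multi-ray one. A minimal sketch, with a plain box filter standing in for Nvidia's far more sophisticated AI denoiser (all numbers and function names here are illustrative assumptions, not anyone's actual implementation):

```python
import random

def render_pixel(true_value, rays):
    """Monte Carlo estimate of a pixel: average of noisy ray samples."""
    samples = [true_value + random.gauss(0, 0.5) for _ in range(rays)]
    return sum(samples) / rays

def noise(rays, pixels=2000, true_value=1.0):
    """RMS error across many pixels for a given ray budget."""
    errs = [(render_pixel(true_value, rays) - true_value) ** 2 for _ in range(pixels)]
    return (sum(errs) / pixels) ** 0.5

def box_denoise(image, radius=2):
    """Naive spatial box filter: averages neighbors, trading detail for less noise."""
    out = []
    for i in range(len(image)):
        lo, hi = max(0, i - radius), min(len(image), i + radius + 1)
        out.append(sum(image[lo:hi]) / (hi - lo))
    return out

random.seed(0)
# Error shrinks roughly as 1/sqrt(rays): 4x the rays ~ half the noise.
n1, n4 = noise(1), noise(4)
print(f"1 ray/px noise {n1:.3f}, 4 rays/px noise {n4:.3f}")

# A denoised 1-ray image can approach the raw multi-ray image's noise level.
flat = [render_pixel(1.0, 1) for _ in range(2000)]
den = box_denoise(flat)
rms = (sum((v - 1.0) ** 2 for v in den) / len(den)) ** 0.5
print(f"denoised 1 ray/px noise {rms:.3f}")
```

On a flat region like this, the 5-tap box filter cuts 1-ray noise to roughly the level of several raw rays; real denoisers have to do much better near edges, which is where the hard work (and the Tensor-core inference) goes.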
Denial:

RTX Gameworks Libraries are just prepackaged DXR effects for easy integration in games. RTX accelerates DXR, regardless to whether the integration is through a Gameworks library or a developers own DXR implementation. Soon as the game invokes a DXR call, Nvidia's driver automagically submits that work to the RT cores/tensor for denoising.
Well, in that case I'm back to my original thought that I'll be running this benchmark at 320x240! After vbetts said this benchmark wasn't going to use the RT cores, I briefly thought it would run OK on my GTX 1070, but alas, it looks like I'll be jokingly running it at 320x240 after all!
Pimpiklem:

Well, you can't expect benchmark programs to benchmark proprietary technology - that would just be unfair. Why do you think we haven't seen this (Serra)? Because it would be unfair to Nvidia. You see some of this used in the Forza 7 game; it's why Vega won there. https://www.overclock3d.net/news/gpu_displays/amd_rx_vega_-_what_is_rapid_packed_math/1 https://www.guru3d.com/news-story/forza-7-pc-graphics-performance-benchmarks.html https://www.overclock3d.net/gfx/articles/2017/07/31102128179l.jpg
Turing can go down to INT4 operations. Not that there is a good use case for that in games.
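For readers unfamiliar with packed low-precision math (what RPM and INT4 support boil down to): the hardware fits several narrow values into one wider register and operates on all of them per instruction. A toy sketch of the packing idea in Python, with two unsigned 4-bit lanes per byte (the function names are made up for illustration; real GPUs do this in silicon, e.g. via dot-product instructions):

```python
def pack_uint4(a, b):
    """Pack two unsigned 4-bit values (0..15) into one byte: high lane, low lane."""
    assert 0 <= a <= 15 and 0 <= b <= 15
    return (a << 4) | b

def unpack_uint4(byte):
    """Split a byte back into its two 4-bit lanes."""
    return (byte >> 4) & 0xF, byte & 0xF

def add_packed(x, y):
    """Add two packed pairs lane-wise; each lane wraps at 4 bits."""
    ax, bx = unpack_uint4(x)
    ay, by = unpack_uint4(y)
    return pack_uint4((ax + ay) & 0xF, (bx + by) & 0xF)

p = pack_uint4(3, 12)
q = pack_uint4(5, 2)
r = add_packed(p, q)
print(unpack_uint4(r))  # (8, 14) - both lanes added in one "operation"
```

The payoff is throughput: halving precision doubles how many values fit per register and per math unit, which is why Vega's FP16 RPM and Turing's INT8/INT4 paths advertise 2x/4x rates over FP32.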
H83:

Not really. AMD still offers very nice cards for every market segment except the high end. Their problem was that they bet heavily on HBM memory, and that was a huge failure, at least for now... And with Nvidia on a roll, things got even worse, of course. But as long as they continue to make good cards at good prices, they will continue to sell them. As for the high end, maybe they should create a new dual card like a Vega 64X2. Of course, they'd have to fix their power usage first...
It was a necessity for Fiji. HBM1 there ate 12W under load and 18W when OCed/overvolted. Hitting the same bandwidth with GDDR5 would have cost something like 70W - power that would not have been available to the GPU. I really did like my Fury X, and I think it is the best GPU AMD has produced to date. Vega has it hard. I am interested in seeing what a 7nm Vega with fixes can do and at what wattage, but only as an outlook for Navi. A big reason why Nvidia is winning is not only that they designed their GPUs in a way that lets them clock faster, but that they have had superior power gating for a long time.
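The 12W-vs-70W claim is plausible on a back-of-envelope basis: DRAM interface power scales with bandwidth times energy per bit, and HBM's wide-and-slow bus moves each bit far more cheaply than GDDR5's narrow-and-fast one. A rough sketch - the pJ/bit figures below are ballpark assumptions commonly cited for these memory types, not measured values:

```python
def mem_power_watts(bandwidth_gb_s, pj_per_bit):
    """Back-of-envelope DRAM interface power: bandwidth * energy per bit."""
    bits_per_s = bandwidth_gb_s * 1e9 * 8
    return bits_per_s * pj_per_bit * 1e-12

BW = 512  # GB/s, Fiji-class bandwidth
hbm = mem_power_watts(BW, 3.5)   # assumed ~3.5 pJ/bit for HBM1
gddr5 = mem_power_watts(BW, 18)  # assumed ~18 pJ/bit for GDDR5
print(f"HBM ~{hbm:.0f} W, GDDR5 ~{gddr5:.0f} W")
```

With those assumed figures the estimate lands near 14W for HBM and 74W for GDDR5 at Fiji's bandwidth, in the same range as the numbers quoted in the comment.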
H83:

Not really. AMD still offers very nice cards for every market segment except the high end. Their problem was that they bet heavily on HBM memory, and that was a huge failure, at least for now... And with Nvidia on a roll, things got even worse, of course. But as long as they continue to make good cards at good prices, they will continue to sell them. As for the high end, maybe they should create a new dual card like a Vega 64X2. Of course, they'd have to fix their power usage first...
Vega 64X2? Take a horrible graphics card and double it? No thanks. They need to revamp their architecture at 7nm, keep delivering good perf/dollar like they do, and also focus on perf/watt.