Only GeForce RTX 2080 and 2080 Ti to support NVLink - No Multi GPU for 2070?

Sounds fine to me. SLI/CFX are long dead. The only situation where it's acceptable is the top-end cards, where if you need more performance there's basically nothing else to upgrade to. Finally, it's dead.
No SLI on the 1060, and now they move it up a tier to the 2070. Makes me wonder if the 1080 Ti is really just a Titan and they're playing musical chairs with the card names. Usually the Ti never comes out at launch.
Honestly, I'd be more interested in how that new NVLink really behaves than in which cards are compatible.
Lol, they know 2x 2070 would kick the 2080 Ti's ass and be cheaper doing it. 12 TFLOPS of tensor perf vs 10 on the 2080 Ti says it all.
-Tj-:

Lol, they know 2x 2070 would kick the 2080 Ti's ass and be cheaper doing it. 12 TFLOPS of tensor perf vs 10 on the 2080 Ti says it all.
Mind reader 😀
-Tj-:

Lol, they know 2x 2070 would kick the 2080 Ti's ass and be cheaper doing it. 12 TFLOPS of tensor perf vs 10 on the 2080 Ti says it all.
Once you factor in all the negative side effects, compatibility issues, and less-than-perfect scaling, it would probably match the 2080 Ti at best. I don't see why anyone would be stupid enough to buy 2x 2070s when there's a 2080 Ti available.
I would be stupid enough, if it means saving 200-300€ and still having better perf. Why wouldn't tensor core perf scale properly? That part is independent of normal rendering.
-Tj-:

Lol, they know 2x 2070 would kick the 2080 Ti's ass and be cheaper doing it. 12 TFLOPS of tensor perf vs 10 on the 2080 Ti says it all.
Yep, sounds about right. Considering:
- the ludicrous release pricing of the 2080 Ti (the 2080 isn't exactly affordable either)
- the likely marginal gains over Pascal in performance (to be confirmed, but I wouldn't expect miracles given the number of CUDA cores)
this Turing release starts to look like NV throwing fancy names around with RTX, charging people for it, and seeing what sticks. I wouldn't be surprised if they release a GTX 2080 series later at a slightly reduced price, without RT or Tensor (?) cores but with practically the same performance in non-RT games. From what was seen in the Tomb Raider RTX demo, enabling RT cripples performance even on a 2080 Ti. Welcome to the new generation of cinematic gaming, where you can do Full HD at 30 fps! Somehow this Turing release looks less appealing by the day... (granted, any kind of ray tracing at even 30 fps is impressive as such, but how many would really sacrifice half their frame rate for RT?)
-Tj-:

I would be stupid enough, if it means saving 200-300€ and still having better perf. Why wouldn't tensor core perf scale properly? That part is independent of normal rendering.
Tensor cores are not the problem. They are used in purely computational tasks and will scale beautifully. SLI rendering, OTOH... The 2080 Ti is the way to go, but not for 1300 euros. I would give 1000 euros for one. It's a good deal, Nvidia, you should take it 🙂
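To illustrate why pure compute scales across cards where SLI rendering struggles, here's a minimal sketch (the two-GPU setup, device indices, and buffer sizes are illustrative assumptions, not anything from Nvidia's material): each card gets an independent slice of work and never waits on the other card's frame.

```cuda
// Sketch: splitting an embarrassingly parallel job across two GPUs.
// Unlike SLI frame rendering, neither card depends on the other's
// output, so throughput scales roughly with the device count.
// Assumes a machine with at least two CUDA-capable GPUs.
#include <cuda_runtime.h>

__global__ void scale(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int n = 1 << 20;  // elements per GPU (illustrative)
    float* buf[2];

    for (int dev = 0; dev < 2; ++dev) {
        cudaSetDevice(dev);                   // independent context per card
        cudaMalloc(&buf[dev], n * sizeof(float));
        scale<<<(n + 255) / 256, 256>>>(buf[dev], n);  // async launch
    }
    for (int dev = 0; dev < 2; ++dev) {
        cudaSetDevice(dev);
        cudaDeviceSynchronize();              // both kernels ran concurrently
        cudaFree(buf[dev]);
    }
    return 0;
}
```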
-Tj-:

Lol, they know 2x 2070 would kick the 2080 Ti's ass and be cheaper doing it. 12 TFLOPS of tensor perf vs 10 on the 2080 Ti says it all.
While that's correct, NVLink does not make two physical cards into one logical one. Such a user is therefore still at the mercy of multi-GPU support in each game. And if the cards are used just for those new compute features, then CUDA/OpenCL/DirectCompute couldn't care less about some bridge between them, as each will open a new context per card.
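As a concrete illustration of that point, a minimal CUDA runtime sketch (the device indices assume a two-card box): the API enumerates each card as a separate device, and any peer-to-peer path over NVLink or PCIe has to be queried and enabled explicitly; the bridge never merges the two into one logical GPU.

```cuda
// Compute APIs see two cards as two devices, bridge or no bridge.
// Peer-to-peer access (NVLink or PCIe) is opt-in per device pair.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("CUDA enumerates %d separate device(s)\n", count);

    if (count >= 2) {
        int canAccess = 0;
        // Can device 0 read device 1's memory directly?
        cudaDeviceCanAccessPeer(&canAccess, 0, 1);
        printf("P2P 0 -> 1 possible: %s\n", canAccess ? "yes" : "no");

        if (canAccess) {
            cudaSetDevice(0);
            cudaDeviceEnablePeerAccess(1, 0);  // explicit opt-in, flags = 0
        }
    }
    return 0;
}
```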
"could also offer some sort of rudimentary support linked over the PCIe bus" So.. AMD with Radeon R9 + have a rudimentary linked :P
Is NVLink a monthly subscription? :p
Anarion:

Once you factor in all the negative side effects, compatibility issues, and less-than-perfect scaling, it would probably match the 2080 Ti at best. I don't see why anyone would be stupid enough to buy 2x 2070s when there's a 2080 Ti available.
Perhaps people that don't have Ti sort of money burning a hole in their pockets.
If it's true that enabling ray tracing in Tomb Raider smashes an RTX 2080 Ti down to ~35 FPS at a mere 1920x1080, then, well, screw that. I haven't gamed at 1080p in 6-7 years, and I don't consider any kind of action or FPS game playable at 35 fps either. :rage:
The Goose:

Perhaps people that don't have Ti sort of money burning a hole in their pockets.
Nvidia has been engineering the market and the pricing of its cards for a few years now to push people into buying more expensive cards. People used to get an x70 card and then get another one a couple of years later, either new or second-hand; I think Nvidia wanted to stamp that out. If Nvidia wanted to, they could work with game developers to get SLI working, but IMHO they have decided it's not in their interest to do so.
@southamptonfc: Yeah, I remember the time when a **70 card was 300-350 euros! Now it's just too much for some of us.
I might have gone for the 2070 at some point, but if there is no link... no money. My EVGA 1080 FTW is going to last to the grave; PC gaming has always been a hobby for me, not a luxury.
Did none of you watch Nvidia's professional presentation the week before? I only watched the ending, and the only thing I remember from it is that this new NVLink allows the video cards to share their VRAM and effectively double it. That's why it's not on the 2070: it's a new generation of SLI that will scale much better and have unified VRAM.
MegaFalloutFan:

Did none of you watch Nvidia's professional presentation the week before? I only watched the ending, and the only thing I remember from it is that this new NVLink allows the video cards to share their VRAM and effectively double it. That's why it's not on the 2070: it's a new generation of SLI that will scale much better and have unified VRAM.
At slug speed... sharing 1 GB of VRAM will kill performance horribly even through NVLink. Take the NVLink speed and calculate the time needed to transfer that 1 GB of data to the GPU on the other PCB, and you have the amount of time added to each frametime (fps = 1/frametime). Using a convenient 1 GB of shared memory: a 60 GB/s link delivers that 1 GB 60 times per second, i.e. in 0.016667 s. If a GPU renders at 200 fps from its own VRAM alone, one frame has a rendering time of 0.005 s. Make it wait for that 1 GB of data and your frametime becomes 0.005 s + 0.016667 s = 0.021667 s, which equals 46.15 fps.
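A quick back-of-envelope check of that arithmetic; the 1 GB per frame, the 60 GB/s link, and the 200 fps baseline are the assumptions from the post above, not measured figures:

```cuda
// Host-side arithmetic only: effective fps once each frame must wait
// for shared VRAM to cross the inter-GPU link. All inputs are the
// post's assumptions, not benchmarks.
#include <cstdio>

int main() {
    const double shared_gb = 1.0;    // VRAM pulled from the other card per frame
    const double link_gbps = 60.0;   // assumed NVLink bandwidth, GB/s
    const double base_fps  = 200.0;  // fps when rendering from local VRAM only

    double transfer_s  = shared_gb / link_gbps;        // 0.016667 s
    double frametime_s = 1.0 / base_fps + transfer_s;  // 0.021667 s
    printf("effective fps: %.2f\n", 1.0 / frametime_s); // ~46.15
    return 0;
}
```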
I will never go SLI again; one powerful card for me. I was tired of the crashes and BSODs. A single card = almost no problems.