Nvidia AD106, AD107 Ada Lovelace GPUs Likely to Use PCIe x8 Interface

That's an AMD problem.
And it will be nvidia's problem...
[Image: PCI_Doom.png]
no, it won't.
Does nvidia have a magical way of increasing PCI-e bandwidth?
Horus-Anhur:

Does nvidia have a magical way of increasing PCI-e bandwidth?
In his mind it does. Expect the same issues AMD has; they'll be there too.
Horus-Anhur:

Does nvidia have a magical way of increasing PCI-e bandwidth?
Their cards are made with nvidium and it defies reality 😱!
Just going from Gen4 to Gen3, both at x16, can cause performance losses like this. Now imagine going as low as Gen3 x8. The hitching and stuttering would increase by a huge amount.
[Image: Untitled.png]
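For reference, the theoretical link numbers behind that chart are easy to work out. A quick sketch of one-way PCIe bandwidth per generation and lane count (real-world throughput is a bit lower after protocol overhead):

```python
# Theoretical one-way PCIe bandwidth in GB/s.
# Gen3 runs at 8 GT/s per lane, Gen4 at 16 GT/s; both use 128b/130b encoding.
GT_PER_LANE = {3: 8.0, 4: 16.0}   # gigatransfers per second, per lane
ENCODING_EFFICIENCY = 128 / 130   # 128b/130b line code

def pcie_bandwidth_gbs(gen: int, lanes: int) -> float:
    """Theoretical one-way bandwidth in GB/s for a PCIe link."""
    return GT_PER_LANE[gen] * lanes * ENCODING_EFFICIENCY / 8  # bits -> bytes

for gen, lanes in [(4, 16), (4, 8), (3, 16), (3, 8)]:
    print(f"Gen{gen} x{lanes}: {pcie_bandwidth_gbs(gen, lanes):.2f} GB/s")
# Gen4 x16: 31.51 GB/s
# Gen4 x8:  15.75 GB/s
# Gen3 x16: 15.75 GB/s
# Gen3 x8:   7.88 GB/s
```

Gen4 x8 lands at the same 15.75 GB/s as Gen3 x16, but Gen3 x8 halves it again, which is exactly the combination people on older boards would end up with.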
And AMD with radium defies logic. Somehow AMD and Nvidia complete each other.
Cheaper to produce, more expensive to sell: the Nvidia motto. I know my motherboard is far from today's top-of-the-line boards; one of its limitations is PCIe 3.0 only, but it's still capable of running any Ryzen 5000 CPU. And now I'd have another limitation, definitely caused by the PCIe x8 link. I'm not interested in upgrading the GPU in the near future, but I don't understand why they'd limit people with good PCIe 3.0 AM4 boards who are trying to upgrade their GPUs and were considering something like a 4060. Nvidia and AMD are still competing, but not on the performance/price metric; it's about who can rip off customers better.
Prince Valiant:

Their cards are made with nvidium and it defies reality 😱!
Nvidia will probably try some kind of Nvidium logic, like a new color compression algorithm, to mitigate this x8 bandwidth reduction. I still remember the GTX 960 and its amazing 128-bit memory interface, with Nvidia stating it was plenty because of the new color compression engine. No, it was not good, and I made a mistake buying that thing back then.
tty8k:

The 3090 Ti has 1008 GB/s bandwidth (384-bit); the 4050/4060 are designed for 280 GB/s (128-bit).
We are talking about PCI-e bandwidth. Not memory bandwidth.
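To put numbers on the distinction (a quick sketch; 21 Gbps per pin is the 3090 Ti's GDDR6X rate, while 17.5 Gbps is simply what the quoted 280 GB/s figure implies for a 128-bit bus):

```python
# On-card memory bandwidth = bus width (bits) * per-pin data rate (Gb/s) / 8.
def mem_bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits * gbps_per_pin / 8

print(mem_bandwidth_gbs(384, 21.0))   # 1008.0 GB/s -- 3090 Ti, GDDR6X
print(mem_bandwidth_gbs(128, 17.5))   # 280.0 GB/s  -- rate implied by the quoted figure
# Both dwarf even PCIe Gen4 x16 (~31.5 GB/s one way), which is why on-card
# memory bandwidth and PCIe link bandwidth are separate conversations.
```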
Horus-Anhur:

No it won't. Especially for someone using PCI-e Gen3, of which there are many, be it on Intel or AMD systems. ... This kind of move would only be acceptable in low-end GPUs.
I agree that for modern mid-range cards, dropping to x8 lanes is kinda pushing it for those still on Gen3, but I see it as a non-issue for Gen4. I really don't get the outrage here. Benchmarks have proven over and over again that so long as you're not out of VRAM, GPUs hardly use any PCIe bandwidth. The only reason GPUs like the 6500 XT are so miserable is that they lack both PCIe lanes and enough VRAM, while being overpriced for such shortcuts to be taken. With DX12 and Vulkan, they use even less than before, and once DirectStorage is implemented, x16 slots will be totally obsolete.

Meanwhile, it seems there have been a lot of issues trying to make PCIe 4.0 both stable and affordable, especially when risers are involved. It's engineering 101: the more complex your design is, the more that can go wrong. The extra lanes are just an unnecessary complication.

In any case, I think the real issue is that there will probably not be any real cost difference for the consumer. So my only gripe with this is that we're given an effectively lower-quality part, but the savings won't trickle down to us.
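To put the VRAM point in numbers, here's a rough sketch of what fetching overflow over the link would cost per frame (the 64 MB spill size is a made-up figure, purely for illustration):

```python
# Hypothetical per-frame working-set overflow that must be fetched over PCIe.
SPILL_MB_PER_FRAME = 64  # illustrative assumption, not a measured figure

# Theoretical one-way link bandwidths in GB/s.
links_gbs = {"Gen4 x16": 31.5, "Gen4 x8": 15.75, "Gen3 x16": 15.75, "Gen3 x8": 7.88}

for name, gbs in links_gbs.items():
    ms = SPILL_MB_PER_FRAME / 1024 / gbs * 1000  # MB -> GB, then seconds -> ms
    print(f"{name}: +{ms:.1f} ms per frame")
# Gen4 x16: +2.0 ms, Gen3 x8: +7.9 ms -- the latter alone eats roughly half
# of a 60 fps frame budget, which is where the stutter comes from.
```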
Hmm, reduce their production costs but keep prices high? Simply shocking. My motherboard's 3.0 x16 won't be a problem, because the likelihood of me getting a new card is slim: too expensive, too much heat/wattage at the moment. These companies are living in a bubble, IMO.
cucaulay malkin:

scumbag move.
Agreed, but you wouldn't believe how many Nvidia cards with a physical x16 slot actually run at only x8, x4 or even x2 PCIe lanes... Also, the worldwide professional standard is still x8... So I think that for a mid-range GPU it's fine for the purpose.
rl66:

Agreed, but you wouldn't believe how many Nvidia cards with a physical x16 slot actually run at only x8, x4 or even x2 PCIe lanes...
Wat?
Horus-Anhur:

Does nvidia have a magical way of increasing PCI-e bandwidth?
https://dl.acm.org/doi/10.1145/3076113.3076122
Horus-Anhur:

Just going from Gen4 to Gen3, both at x16, can cause performance losses like this. Now imagine going as low as Gen3 x8. The hitching and stuttering would increase by a huge amount.
[Image: Untitled.png]
cue the misinformed armchair performance analysts
Astyanax:

https://dl.acm.org/doi/10.1145/3076113.3076122 cue the misinformed armchair performance analysts
Cool, a set of libraries that can be used on any GPU, and even on CPUs, to compress data that can then be passed through the PCIe bus. It's not some technique exclusive to nvidia GPUs. Seems like you are the one spreading misinformation, as usual, when it comes to nvidia.
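For what it's worth, the idea is just generic lossless compression on either end of the link. A minimal sketch using zlib as a stand-in codec (this is not the linked paper's library, and nothing in it is vendor-specific):

```python
import zlib

# Compress a buffer before pushing it over a bandwidth-limited link, then
# decompress on the other side. zlib stands in for whatever codec a GPU
# vendor might use; the technique itself is vendor-neutral.
payload = bytes(range(256)) * 4096            # 1 MiB of highly compressible data
compressed = zlib.compress(payload, level=1)  # fast setting, as a transfer path would use

ratio = len(payload) / len(compressed)
print(f"{len(payload)} -> {len(compressed)} bytes ({ratio:.1f}x)")
assert zlib.decompress(compressed) == payload

# Effective link bandwidth scales with the ratio, but only for data that
# actually compresses, and both ends pay the (de)compression cost.
```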
Horus-Anhur:

And it will be nvidia's problem...
[Image: PCI_Doom.png]
No, it will be a user problem.
Undying:

The 6600/XT has issues when exceeding VRAM limits at 4.0 x8. It behaves the same as x4.
That sounds like a settings issue.... Either adjust the settings to stay within the VRAM limit or buy a higher tier card. Budget gamers have been doing this for years....