PCI-SIG Announces Availability of PCIe 7.0 Specification, Version 0.3 - 512 GB/s Through an x16 Slot by 2027

My only problem with this is GPU manufacturers cutting down the number of lanes on their GPUs. Since I have a PCIe 3.0 motherboard, I'm forced to buy my next GPU with x16 or face a big loss of performance. Who actually needs this, if not manufacturers cheaping out on lanes? Ugh!
Silva:

My only problem with this is GPU manufacturers cutting down the number of lanes on their GPUs. Since I have a PCIe 3.0 motherboard, I'm forced to buy my next GPU with x16 or face a big loss of performance. Who actually needs this, if not manufacturers cheaping out on lanes? Ugh!
Supercomputers, I think, are where the major bandwidth demands come in; otherwise, PCIe 5.0 @ x8 seems to be all we really need for a single card, at least for consumer-grade and workstation hardware. Even then, 4.0 @ x8 is plenty. I'm actually a fan of this trend, specifically because of the cost reductions. Fewer lanes means not only fewer traces on the motherboard but also a simpler SoC and less EMI (which means board quality can be reduced without sacrificing signal integrity). All of that ought to mean cheaper products.
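For rough context, here is a back-of-the-envelope sketch (Python) of per-direction PCIe bandwidth by generation and lane count. The numbers are approximate: they ignore packet overhead, and the FLIT encoding of gens 6-7 is treated as having roughly 128b/130b efficiency. It shows why 4.0 @ x16 and 5.0 @ x8 land in the same place, and where the headline 512 GB/s figure comes from - PCI-SIG counts both directions of a 7.0 x16 link.

    # Approximate usable one-way PCIe bandwidth per generation and lane count.
    # Gens 1-2 use 8b/10b encoding; gens 3-5 use 128b/130b; gens 6-7 use
    # PAM4 + FLIT, treated here as roughly 128b/130b efficiency.
    GEN_RATE_GT_S = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0, 6: 64.0, 7: 128.0}

    def link_gb_s(gen: int, lanes: int) -> float:
        """Approximate usable one-way bandwidth in GB/s."""
        efficiency = 8 / 10 if gen <= 2 else 128 / 130
        return GEN_RATE_GT_S[gen] * lanes * efficiency / 8  # Gbit -> GByte

    for gen, lanes in [(3, 16), (4, 8), (4, 16), (5, 8), (7, 16)]:
        print(f"PCIe {gen}.0 x{lanes}: ~{link_gb_s(gen, lanes):.0f} GB/s per direction")

On these numbers, a 5.0 x8 card dropped into a PCIe 3.0 board runs at 3.0 x8 (~8 GB/s per direction), which is exactly Silva's complaint above.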
schmidtbag:

Supercomputers, I think, are where the major bandwidth demands come in; otherwise, PCIe 5.0 @ x8 seems to be all we really need for a single card, at least for consumer-grade and workstation hardware. Even then, 4.0 @ x8 is plenty. I'm actually a fan of this trend, specifically because of the cost reductions. Fewer lanes means not only fewer traces on the motherboard but also a simpler SoC and less EMI (which means board quality can be reduced without sacrificing signal integrity). All of that ought to mean cheaper products.
Well, I would not mind my next GPU being x8 PCIe 4.0 and using the other 8 lanes for some extra NVMe drives! Releasing separate x8 and x16 cards sounds like a nightmare, though, so for now it seems better for GPU boards to keep x16 support... and users, in the end, will choose. That said, I am pretty sure both Nvidia and AMD will go with the cheapest option.
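To put rough numbers on Venix's idea, here is a hypothetical lane budget; it assumes the platform supports x8/x4/x4 bifurcation of the slot, which not all boards do.

    # Hypothetical PCIe 4.0 x16 slot bifurcated x8/x4/x4: the GPU keeps 8
    # lanes and the remaining 8 feed two NVMe drives.
    LANE_GB_S_GEN4 = 16.0 * (128 / 130) / 8   # ~1.97 GB/s per lane, one way

    budget = {"GPU (x8)": 8, "NVMe drive 1 (x4)": 4, "NVMe drive 2 (x4)": 4}
    assert sum(budget.values()) == 16  # everything fits in one physical x16 slot
    for device, lanes in budget.items():
        print(f"{device}: ~{lanes * LANE_GB_S_GEN4:.1f} GB/s per direction")

A gen-4 x4 link (~7.9 GB/s) is also about the most a typical gen-4 NVMe drive can use, so no lanes would sit idle in this split.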
Nvidia/AMD xx60/x600 series on 4 lanes starting 2026, you'll see. Even PCIe 5.0 owners will see a performance hit.
schmidtbag:

Supercomputers, I think, are where the major bandwidth demands come in; otherwise, PCIe 5.0 @ x8 seems to be all we really need for a single card, at least for consumer-grade and workstation hardware. Even then, 4.0 @ x8 is plenty. I'm actually a fan of this trend, specifically because of the cost reductions. Fewer lanes means not only fewer traces on the motherboard but also a simpler SoC and less EMI (which means board quality can be reduced without sacrificing signal integrity). All of that ought to mean cheaper products.
(It won't.)
Venix:

Well, I would not mind my next GPU being x8 PCIe 4.0 and using the other 8 lanes for some extra NVMe drives! Releasing separate x8 and x16 cards sounds like a nightmare, though, so for now it seems better for GPU boards to keep x16 support... and users, in the end, will choose. That said, I am pretty sure both Nvidia and AMD will go with the cheapest option.
We're most likely going to continue to see the trend of physical x16 slots even though, electrically, they might have fewer lanes. With how big and heavy GPUs are getting, anything that increases the footprint is useful. It makes me wonder why there hasn't yet been a standard for mounting GPUs to some sort of beam running from the front to the back of the case, so we no longer have to deal with cards that rip out of their sockets or sag a whole centimeter. Some could argue "that'd be ugly!", but any GPU that would need this kind of support is so big that it'd span the entire chassis anyway, or close to it. Besides... you could add an LED strip to the beam or whatever.
cucaulay malkin:

Nvidia/AMD xx60/x600 series on 4 lanes starting 2026, you'll see. Even PCIe 5.0 owners will see a performance hit.
So long as they're not still shipping 8 GB cards by then, performance ought to be fine on PCIe 4.0 @ x4, especially if these GPUs were to have on-board NVMe access.
schmidtbag:

We're most likely going to continue to see the trend of physical x16 slots even though, electrically, they might have fewer lanes. With how big and heavy GPUs are getting, anything that increases the footprint is useful. It makes me wonder why there hasn't yet been a standard for mounting GPUs to some sort of beam running from the front to the back of the case, so we no longer have to deal with cards that rip out of their sockets or sag a whole centimeter. Some could argue "that'd be ugly!", but any GPU that would need this kind of support is so big that it'd span the entire chassis anyway, or close to it. Besides... you could add an LED strip to the beam or whatever. So long as they're not still shipping 8 GB cards by then, performance ought to be fine on PCIe 4.0 @ x4, especially if these GPUs were to have on-board NVMe access.
By then even 12 GB will be too low. It's already starting to become a problem for the 6700 XT.
cucaulay malkin:

By then even 12 GB will be too low. It's already starting to become a problem for the 6700 XT.
Well, yeah, if devs continue to be lazy then that will be the case, but even in situations where 12 GB isn't enough, it's close enough that you're not going to see a severe performance penalty. Remember: you only need enough lanes to match the bandwidth of DRAM. If you're completely out of VRAM and saturating 100% of your DRAM bandwidth (which does not require x16 PCIe 4.0 lanes), either your GPU is woefully under-spec'd or the game is horribly designed. Since people keep raising the bar for what an acceptable amount of VRAM is, game devs won't be incentivized to do better. EDIT: For what it's worth, the 6700 XT is a solid 4K no-RT no-AA GPU. Disabling RT and AA ought to save multiple GB of VRAM. RT is a surprisingly taxing technology, and I often don't find the visual difference worth the penalty, especially when some AA methods just make everything look blurry.
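To put rough numbers on the DRAM comparison, here is a sketch assuming ordinary dual-channel desktop memory with 64-bit (8-byte) channels:

    # Dual-channel desktop DRAM bandwidth vs. the PCIe link a GPU spills
    # over when VRAM runs out. DRAM: MT/s x 8 bytes per channel x channels.
    def dram_gb_s(mt_s: int, channels: int = 2) -> float:
        return mt_s * 8 * channels / 1000

    PCIE4_X16_GB_S = 16.0 * 16 * (128 / 130) / 8   # ~31.5 GB/s per direction

    print(f"DDR4-3200 dual channel: ~{dram_gb_s(3200):.0f} GB/s")  # ~51 GB/s
    print(f"DDR5-5600 dual channel: ~{dram_gb_s(5600):.0f} GB/s")  # ~90 GB/s
    print(f"PCIe 4.0 x16 link:      ~{PCIE4_X16_GB_S:.0f} GB/s per direction")

Worth noting: on paper, dual-channel DRAM offers more bandwidth than a 4.0 x16 link, and the CPU shares that DRAM bandwidth, so the spill traffic a GPU can actually sustain is lower still.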
schmidtbag:

EDIT: For what it's worth, the 6700 XT is a solid 4K no-RT no-AA GPU.
I don't see the 6700 XT doing 4K at all.
cucaulay malkin:

I don't see the 6700 XT doing 4K at all.
Look at benchmarks - in most cases it can hover around 60 FPS even with AA. Where it falls behind, it's close enough that disabling AA would give it the little edge in performance needed to stay above 60. Again, this assumes DXR is disabled.
schmidtbag:

Look at benchmarks - in most cases it can hover around 60 FPS even with AA. Where it falls behind, it's close enough that disabling AA would give it the little edge in performance needed to stay above 60. Again, this assumes DXR is disabled.
What benchmarks? I see it dropping into the 40s in many new games at 1440p.
cucaulay malkin:

What benchmarks? I see it dropping into the 40s in many new games at 1440p.
I'm referring to more recent tests; drivers have matured quite a lot in the past two years. For the games where a 6700 XT still falls far behind in 4K (like 30 FPS or less), it's kind of a moot point if even a 4090 is unlikely to keep up. A 4090 is barely able to manage 4K @ 60 FPS in Cyberpunk, even with DLSS 3. So, if ~$1500 and 400 W of compute power with frame generation is teetering on inadequate (depending on your definition of adequacy), then you have to reconsider the criteria for what makes something 4K-capable. Since RT+4K isn't going to happen on mainstream hardware for a looong while, I eliminate RT as a criterion. I also eliminate anything that is egregiously poorly optimized. This is a little harder to rank objectively, but I like to use Doom Eternal as a control metric, since that game looks fantastic and is very well optimized. So, if a game looks worse than DE and performs worse, I don't see that as the GPU being inadequate; it's more a matter of the devs being lazy. Having said all that, I don't think I'd recommend a 6700 XT to anyone who wants a no-RT 4K experience, but I would consider it an entry-level 4K-capable card. It should reliably provide playable (as in, 30 FPS+) framerates at max detail or very close to it.
Silva:

My only problem with this is GPU manufacturers cutting down the number of lanes on their GPUs. Since I have a PCIe 3.0 motherboard, I'm forced to buy my next GPU with x16 or face a big loss of performance. Who actually needs this, if not manufacturers cheaping out on lanes? Ugh!
Ha, yeah. 😛 So, what's the definition of irony? When your SSD is allocated more PCIe lanes than your GPU. 🙄
schmidtbag:

All of that ought to mean cheaper products.
You do realise that NEVER gets passed down to the consumer, right?
I bet by the time I build a new PC this will be out, just so Nvidia can make a super-overpriced PCIe 7.0 GPU that's super castrated.