PCI-Express 4.0 to Double Bandwidth and Allow for 300W Slot Power

https://forums.guru3d.com/data/avatars/m/248/248627.jpg
Yay maybe now we can get rid of the stupid connectors that clutter the internals and avoid the whole rx 480 thing
https://forums.guru3d.com/data/avatars/m/198/198862.jpg
Is such high bandwidth really necessary? Even PCIe 2.0 isn't bottlenecking any card. I would understand if multi-GPU were the future, but users seem to be moving away from it.
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
Is such high bandwidth really necessary? Even PCIe 2.0 isn't bottlenecking any card. I would understand if multi-GPU were the future, but users seem to be moving away from it.
For gaming? No. Compute applications in multi-GPU setups? Yes. It's the reason why Nvidia developed NVLink -- that, plus latency. Honestly, not having power cables is a pretty nice feature in itself.
https://forums.guru3d.com/data/avatars/m/208/208807.jpg
>300w So bye bye pci-e power ?
https://forums.guru3d.com/data/avatars/m/198/198862.jpg
>300w So bye bye pci-e power ?
Unless you want a beast that pulls more than that. 😀
data/avatar/default/avatar15.webp
Is such high bandwidth really necessary? Even PCIe 2.0 isn't bottlenecking any card. I would understand if multi-GPU were the future, but users seem to be moving away from it.
For gaming purposes PCI-E 2.0 is still more than sufficient, but for compute purposes more bandwidth is required. It may be surprising to know that even an HD 7970 had higher compute performance when running on PCI-E 3.0 compared to PCI-E 2.0. http://www.anandtech.com/show/5261/amd-radeon-hd-7970-review/10
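For reference, a rough sketch of the per-direction x16 numbers behind the "doubling", computed from the published per-generation line rates and encodings (real-world throughput comes in a bit lower once protocol overhead is counted):
[code]
# Rough per-direction bandwidth math for an x16 slot; the GT/s rates and
# encodings are the published per-generation figures.
GENERATIONS = {
    # gen: (transfer rate in GT/s per lane, encoding efficiency)
    "1.x": (2.5, 8 / 10),     # 8b/10b
    "2.0": (5.0, 8 / 10),     # 8b/10b
    "3.0": (8.0, 128 / 130),  # 128b/130b
    "4.0": (16.0, 128 / 130),
}

def x16_bandwidth_gbps(gen):
    """Approximate usable GB/s per direction for an x16 link."""
    rate_gt, efficiency = GENERATIONS[gen]
    return rate_gt * efficiency / 8 * 16  # GT/s -> GB/s per lane, times 16 lanes

for gen in GENERATIONS:
    print("PCIe %s x16: ~%.1f GB/s" % (gen, x16_bandwidth_gbps(gen)))
# ~4.0, ~8.0, ~15.8, ~31.5 GB/s respectively
[/code]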
https://forums.guru3d.com/data/avatars/m/222/222136.jpg
Yeah happy with my mobo/cpu - rock solid stability and performance, for gaming obvs.
https://forums.guru3d.com/data/avatars/m/209/209146.jpg
Is such high bandwidth really necessary? Even PCIe 2.0 isn't bottlenecking any card. I would understand if multi-GPU were the future, but users seem to be moving away from it.
A PCI-E SSD, I guess? 😀 (The M.2 slots featured on more recent motherboards might be better though, since they don't block other components relying on PCI-E slots, but newer motherboards offer a bit more choice anyway, with more lanes and better spacing on the larger board models.)
https://forums.guru3d.com/data/avatars/m/63/63170.jpg
Am I missing the slide that mentions the 300W via the slot? I don't see it in the collection.
https://forums.guru3d.com/data/avatars/m/180/180081.jpg
AMD was really just ahead of its time with the RX 480!
https://forums.guru3d.com/data/avatars/m/93/93080.jpg
AMD was really just ahead of its time with the RX 480!
Yeah....and AMD is way ahead with the Phenom II.
https://forums.guru3d.com/data/avatars/m/63/63170.jpg
After reading the two originating sites: Tom's has the speculation that the slot will be able to deliver at least 300W. Some guy told him this; there is no slide. He is not understanding it correctly: you will NEVER get 300W through traces on the motherboard. A nightmare scenario for no gain at all. Currently, the PCIe spec officially allows one 8-pin and one 6-pin connector. That, with the 75W from the slot, makes 300W. The spec is officially 225W though; anything above that is still within spec, in a sort of acceptable no-man's-land (since the Nvidia GTX 480, iirc). Back in the Fermi days, Nvidia wanted 300W triple-slot coolers, and so did the AIB vendors. This just means that we will be able to see cards officially have two 8-pin connectors, or more (not just custom cards). Say hello to the dual-GPU cards of the future 🙂

Edit, from the wiki page (the PCI-SIG docs appear to be behind a login/password): [Quote] Optional connectors add 75 W (6-pin) or 150 W (8-pin) power for up to 300 W total (2×75 W + 1×150 W). Some cards are using two 8-pin connectors, but this has not been standardized yet; therefore such cards must not carry the official PCI Express logo. This configuration would allow 375 W total (1×75 W + 2×150 W) and will likely be standardized by PCI-SIG with the PCI Express 4.0 standard.
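For what it's worth, the connector arithmetic from that quote as a quick Python sketch (the wattage figures are the ones quoted above; nothing beyond them is implied):
[code]
# Slot + auxiliary connector power budget, per the figures quoted above.
CONNECTOR_WATTS = {
    "slot": 75,    # delivered through the x16 slot itself
    "6-pin": 75,
    "8-pin": 150,
}

def board_power(*aux_connectors):
    """Spec power for the slot plus the listed auxiliary connectors."""
    return CONNECTOR_WATTS["slot"] + sum(CONNECTOR_WATTS[c] for c in aux_connectors)

print(board_power("6-pin", "8-pin"))  # 300 W -- the current official ceiling
print(board_power("8-pin", "8-pin"))  # 375 W -- the dual 8-pin configuration
[/code]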
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
He is not understanding it correctly: you will NEVER get 300W through traces on the motherboard. A nightmare scenario for no gain at all. This just means that we will be able to see cards officially have two 8-pin connectors, or more (not just custom cards). Say hello to the dual-GPU cards of the future 🙂
Yeah, I had a feeling there was no way the motherboard itself would provide 300W. Besides, one of the benefits of PCIe is backward compatibility. If a new GPU were designed to take advantage of 300W from the slot, the additional power connectors wouldn't be necessary, but that would also mean the GPU would require a gen 4.0 board to work. Y'know what I wish they'd fix? How the bandwidth is divided. For example, with many motherboards you might have three x16 slots, but if you utilize all of them they might operate at x8. As far as I'm concerned, that implies half the lanes are disabled, and therefore half the bandwidth. What I'd much rather see is the PCIe link get downgraded instead: rather than each slot operating at 3.0 speeds at x8, they should all operate at 2.0 speeds at x16. I figure that would offer more performance without overworking the PCIe bus, particularly when you consider backward compatibility. In other words, if there's a PCIe 2.0 GPU that can actually take advantage of more than 8 lanes, wouldn't its performance be crippled in a 3.0 slot running with 8 lanes? To my knowledge, GPUs can't operate faster per lane than the generation of PCIe they were designed for.
https://forums.guru3d.com/data/avatars/m/63/63170.jpg
Yeah, I had a feeling there was no way the motherboard itself would provide 300W. Besides, one of the benefits of PCIe is backward compatibility. If a new GPU were designed to take advantage of 300W from the slot, the additional power connectors wouldn't be necessary, but that would also mean the GPU would require a gen 4.0 board to work. Y'know what I wish they'd fix? How the bandwidth is divided. For example, with many motherboards you might have three x16 slots, but if you utilize all of them they might operate at x8. As far as I'm concerned, that implies half the lanes are disabled, and therefore half the bandwidth. What I'd much rather see is the PCIe link get downgraded instead: rather than each slot operating at 3.0 speeds at x8, they should all operate at 2.0 speeds at x16. I figure that would offer more performance without overworking the PCIe bus, particularly when you consider backward compatibility. In other words, if there's a PCIe 2.0 GPU that can actually take advantage of more than 8 lanes, wouldn't its performance be crippled in a 3.0 slot running with 8 lanes? To my knowledge, GPUs can't operate faster per lane than the generation of PCIe they were designed for.
Downgrading the slot to 2.0 spec would be very hard to do. I think you would basically need a chip/core for each type, so you would have to switch the whole board to 2.0 spec. It would create havoc with multiple cards, one PCIe 3.0 and one 2.0... what does it do then? 🙂 Too many headaches. I'm waiting for the day we get a full complement of PCIe x16 slots onboard, without the use of PCIe switches or disabling stuff on the mobo...
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
Downgrading the slot to 2.0 spec would be very hard to do. I think you would basically need a chip/core for each type, so you would have to switch the whole board to 2.0 spec. It would create havoc with multiple cards, one PCIe 3.0 and one 2.0... what does it do then? 🙂 Too many headaches. I'm waiting for the day we get a full complement of PCIe x16 slots onboard, without the use of PCIe switches or disabling stuff on the mobo...
Maybe so, but I don't see how it's really any different. With the method used now, it seems like it just disconnects half the lanes and re-labels the slot as x8. With the method I suggest, it cuts the frequency in half and re-labels the slots as 2.0 (or I guess 2.1 makes more sense). But you could easily be right - I have no idea how much effort or detail goes into this. It just seems to me that, mathematically, the total bandwidth remains the same. But yeah, a full set of x16 slots would be pretty nice one day.
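To put rough numbers on that (a quick sketch using the standard per-generation rates and encodings): halving the lane count at 3.0 speeds and halving the speed at full width land in about the same place, while a 2.0 card stuck at x8 really does lose half its bandwidth.
[code]
# Per-direction GB/s for a link of a given rate, encoding efficiency and width.
def link_bandwidth_gbps(rate_gt, efficiency, lanes):
    return rate_gt * efficiency / 8 * lanes

print("PCIe 3.0 x8:  ~%.1f GB/s" % link_bandwidth_gbps(8.0, 128 / 130, 8))   # ~7.9
print("PCIe 2.0 x16: ~%.1f GB/s" % link_bandwidth_gbps(5.0, 8 / 10, 16))     #  8.0
print("PCIe 2.0 x8:  ~%.1f GB/s" % link_bandwidth_gbps(5.0, 8 / 10, 8))      #  4.0 (the crippled case)
[/code]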
https://forums.guru3d.com/data/avatars/m/63/63170.jpg
Maybe so, but I don't see how it's really any different. With the method used now, it seems like it just disconnects half the lanes and re-labels the slot as x8. With the method I suggest, it cuts the frequency in half and re-labels the slots as 2.0 (or I guess 2.1 makes more sense). But you could easily be right - I have no idea how much effort or detail goes into this. It just seems to me that, mathematically, the total bandwidth remains the same. But yeah, a full set of x16 slots would be pretty nice one day.
Basically, the slot does downgrade to PCIe 2.0 spec; that's what it does today if you put a 2.0 card into a 3.0 slot. The reason for PCIe dropping to x8 on today's mobos is that the CPU only provides 16-20 lanes in total. One card gets 16; two have to share, so x8 each. You can't really split 16 3.0 lanes into 32 2.0 lanes. That's what the PCIe splitters are for 🙂
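As a rough illustration of that lane budget (the splits below are just the typical mainstream bifurcation options; boards differ):
[code]
# Typical ways a 16-lane CPU budget gets carved up as cards are added;
# these splits are fixed bifurcation options on mainstream boards, not
# something computed freely at runtime.
X16_BIFURCATION = {
    1: [16],       # one card: full x16
    2: [8, 8],     # two cards: the primary slot drops to x8
    3: [8, 4, 4],  # three cards: a common x8/x4/x4 arrangement
}

def slot_widths(cards_installed):
    return X16_BIFURCATION[cards_installed]

print(slot_widths(2))  # [8, 8] -- why "three x16 slots" end up running at x8
[/code]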
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
Basically, the slot does downgrade to PCIe 2.0 spec; that's what it does today if you put a 2.0 card into a 3.0 slot. The reason for PCIe dropping to x8 on today's mobos is that the CPU only provides 16-20 lanes in total. One card gets 16; two have to share, so x8 each. You can't really split 16 3.0 lanes into 32 2.0 lanes. That's what the PCIe splitters are for 🙂
I think the best way to look at it is like a USB hub. You can make a single USB 2.0 port act like a dozen 1.1 ports. The hub could be designed so you get to operate only one port at a time at full 2.0 speeds, but I would much rather each of the devices get slowed down so you can utilize all of them simultaneously. I understand that CPUs/northbridges have a finite number of lanes, but what I'm suggesting is: couldn't it be possible to slow down each lane in order to have more of them? There are motherboards out there that support both PCIe 3.0 and 2.x lanes, so my idea can't be totally impossible. And yeah, I do understand my analogy might not be perfect, since USB operates at a much higher and more abstracted level than PCIe. Anyway, thanks for keeping the discussion friendly. Just to clarify - I'm not trying to be antagonistic, just making conversation. I may look into the lane splitters some day; I wasn't really aware they existed (I haven't really needed one, either).
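A toy model of the hub/switch idea, under the assumption of a fair split: every downstream port gets a full-width link, but the active devices share the one upstream link's bandwidth.
[code]
# Toy model of the hub/switch idea: downstream devices each see a
# full-width link, but they time-share the single upstream link.
def per_device_share(upstream_gbps, active_devices):
    """Bandwidth each active downstream device sees, assuming a fair split."""
    return upstream_gbps / max(active_devices, 1)

UPSTREAM_3_0_X16 = 15.8  # ~GB/s per direction for a 3.0 x16 uplink
for n in (1, 2, 4):
    print("%d active device(s): ~%.1f GB/s each" % (n, per_device_share(UPSTREAM_3_0_X16, n)))
[/code]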
data/avatar/default/avatar05.webp
You can't have this for free. If the power is taken from the ATX connector, it will break the specs for most current PSUs, unless new motherboards get 8-pin PCIe connectors.
https://forums.guru3d.com/data/avatars/m/254/254725.jpg
I'm not sure how I feel about pumping so much power through the MB. I'll definitely be avoiding this for a time while manufacturers work out the initial problems.
https://forums.guru3d.com/data/avatars/m/259/259654.jpg
With this amount of bandwidth, things like HSA start to become more "real". Also, there is no way the power thing is correct. Motherboards, unlike PSUs, don't really carry power ratings like that, and it would raise the prices of the cheaper models for no reason at all.