PCIe Gen5 12VHPWR power connection to have 150W, 300W, 450W, and 600W outputs

https://forums.guru3d.com/data/avatars/m/178/178348.jpg
"except the problem has existed for years, you've just been ignorant of it" Why? What was the issue? Use a 6 pin or 8 pin depending on power requirement's.
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
Krizby:

Nah, as long as it's a monolithic design, a bigger GPU will have almost the same efficiency as a smaller chip from the same uarch
Well yeah, but I'm referring more to cards where most of the performance gain is just boosted clock speeds, because you're right: efficiency will scale up mostly proportionately with die size (as you said, within the same architecture).
Except for Ampere (and that's due to how inefficient GDDR6X is), the biggest chips, like TU102 and Navi 21, actually have superior efficiency compared to smaller chips.
That still misses the point. In this context, whether you're overclocking or just adding transistors, it doesn't change the fact that power delivery is being disregarded. I don't necessarily have a problem with such power-hungry devices existing; what I have a problem with is that they're forcing standards to change when it should be the other way around.
Also, GPU power consumption is not whole-PC power consumption, which is the only thing that matters. Let's say a PC with a 3080 that uses 400W total is 30% faster than one with a 3070 that uses 300W; can you really say that the PC with the 3080 is inefficient? You might as well play on a Steam Deck if you worry about power consumption that much.
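For what it's worth, the whole-system arithmetic in that hypothetical works out to a near wash. A minimal Python sketch of the perf-per-watt comparison (the 30%, 400W, and 300W figures are the poster's own hypothetical numbers, not benchmarks):

```python
# Perf-per-watt for the hypothetical above: a 3080 PC drawing 400 W total
# that is 30% faster than a 3070 PC drawing 300 W total.

def perf_per_watt(relative_perf: float, system_watts: float) -> float:
    """Relative performance delivered per watt of whole-system draw."""
    return relative_perf / system_watts

pc_3070 = perf_per_watt(1.0, 300.0)  # baseline build
pc_3080 = perf_per_watt(1.3, 400.0)  # 30% faster, 100 W hungrier

print(f"3070 PC: {pc_3070 * 1000:.2f} perf per kW")  # ~3.33
print(f"3080 PC: {pc_3080 * 1000:.2f} perf per kW")  # ~3.25
```

At the whole-PC level the two builds land within a few percent of each other in performance per watt, which is the point being made.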
I don't get your point here.
https://forums.guru3d.com/data/avatars/m/56/56686.jpg
H83:

450 and 600W cables just to power up GPUs seem excessive to me... Companies should be prioritizing energy efficiency, not the other way around.
I've seen the gauge of PSU wires, and the last thing I would want is 600W being pushed through them. It's even more stupid how much wattage USB wants to be able to push through even thinner cables. The industry as a whole needs to move away from this more-wattage BS.
https://forums.guru3d.com/data/avatars/m/163/163068.jpg
Well, if nVidia and AMD go chiplet for GPU designs, they can now scale them like SLI; so this could indicate that that is indeed happening.
https://forums.guru3d.com/data/avatars/m/216/216349.jpg
schmidtbag:

I'm well aware. I think such situations are stupid and should be avoided. If you need more than 350W, then you should use additional power connectors. If a single connector for 450W+ systems is really that desirable, then make a new connector. Chip manufacturers should be motivated to attain performance levels within a certain power envelope. If they just say "screw it" or move the goalposts, that doesn't spur innovation. It promotes laziness among engineers, and it's a cop-out for whichever company claims to have the fastest GPU. A GPU being the fastest isn't impressive when it consumes more power than every other electronic device in the room combined.
Exactly my point of view. Disregarding power usage promotes a brute-force approach, and at some point that approach is going to hit a wall, because there's a power usage limit that GPU makers won't be able to surpass, unless they expect gamers to use their systems with super advanced cooling... They need to define a power limit and build their parts around that limit, in order to extract the best performance possible. A perfect example of this approach is Apple's M1; it performs like a champ despite its tiny power envelope. This should be the inspiration for Intel, AMD and Nvidia.
https://forums.guru3d.com/data/avatars/m/163/163068.jpg
M1 has a lot of fixed function hacks. GPUs have been getting more and more general purpose over the years. Do you want pure speed and efficiency or do you want flexibility and lower efficiency?
https://forums.guru3d.com/data/avatars/m/268/268248.jpg
umeng2002:

M1 has a lot of fixed function hacks. GPUs have been getting more and more general purpose over the years. Do you want pure speed and efficiency or do you want flexibility and lower efficiency?
I think a good balance would be the best approach. Hmmm, thinking of AMD's acquisition of Xilinx, an FPGA chiplet or something would be a good approach... programmable at will to fly through specific tasks, adapting depending on what programs you have running... hmmm, sounds plausible in my mind.
https://forums.guru3d.com/data/avatars/m/108/108389.jpg
schmidtbag:

That still misses the point. In this context, whether you're overclocking or just adding transistors, it doesn't change the fact that power delivery is being disregarded. I don't necessarily have a problem with such power-hungry devices existing; what I have a problem with is that they're forcing standards to change when it should be the other way around.
Well, the current PCIe power standards are crap IMO; the split power cables with (6+2) connectors are just an unsightly mess, to the point that people buy custom cables. I'm all for a single power cable, a single connector with a clear wattage rating. 600W for a GPU sounds pretty crazy to me too, but as long as architectural efficiency is maintained and there are innovations in cooling (like the flow-through design of the RTX 3000 FE), I would say why not. That's the nice thing about capitalism: more choices are always better, and if a product is stupid it will get phased out eventually.
H83:

Exactly my point of view. Disregarding power usage promotes a brute-force approach, and at some point that approach is going to hit a wall, because there's a power usage limit that GPU makers won't be able to surpass, unless they expect gamers to use their systems with super advanced cooling... They need to define a power limit and build their parts around that limit, in order to extract the best performance possible. A perfect example of this approach is Apple's M1; it performs like a champ despite its tiny power envelope. This should be the inspiration for Intel, AMD and Nvidia.
The M1's great efficiency is because it's made on a cutting-edge node (TSMC 5nm), which hasn't even been accessible to Intel/AMD/Nvidia, and you are paying dearly for that efficiency. Nvidia's Ada Lovelace will use TSMC 5nm too, and it will certainly be more efficient than Ampere.
https://forums.guru3d.com/data/avatars/m/94/94245.jpg
Who needs RGB when you already have glowing cables inside? Win-win!
https://forums.guru3d.com/data/avatars/m/273/273678.jpg
Stairmand:

Why? What was the issue? Use a 6-pin or 8-pin depending on power requirements.
🙄 One connector to hit 450W vs. three to hit 450W; 12+4 pins vs. 24 pins. Fewer points of contact that can fail and melt down, fewer wires to route.
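For context, the "+4" side of the 12+4-pin connector is what makes the single-cable wattage tiers from the headline possible: sideband pins advertise the supply's power budget to the card. A minimal Python sketch, assuming the commonly reported SENSE0/SENSE1 encoding (treat the exact mapping as an assumption, not spec text):

```python
# Hypothetical decode of the 12VHPWR sideband sense pins. The mapping below
# is the commonly reported ATX 3.0 encoding (True = pin pulled to ground),
# shown for illustration -- not copied from the spec.

SENSE_TO_WATTS = {
    # (SENSE0 grounded, SENSE1 grounded) -> advertised limit in watts
    (True,  True):  600,
    (True,  False): 450,
    (False, True):  300,
    (False, False): 150,
}

def advertised_limit(sense0_grounded: bool, sense1_grounded: bool) -> int:
    """Power budget the card may assume, per the two sideband sense pins."""
    return SENSE_TO_WATTS[(sense0_grounded, sense1_grounded)]

print(advertised_limit(True, True))    # 600
print(advertised_limit(False, False))  # 150
```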
tsunami231:

I've seen the gauge of PSU wires, and the last thing I would want is 600W being pushed through them. It's even more stupid how much wattage USB wants to be able to push through even thinner cables.
16AWG can handle 600W at 12V just fine; that's six wires carrying ~8.3A each, still nowhere near the maximum for the connector or wire gauge.
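A quick sanity check of that math in Python (the six-conductor count is from the post above; of the connector's twelve power pins, six carry +12V and six are grounds):

```python
# Per-conductor current for the claim above: 600 W delivered at 12 V across
# the connector's six +12 V wires (the other six power pins are returns).

WATTS = 600.0
VOLTS = 12.0
POSITIVE_WIRES = 6

total_amps = WATTS / VOLTS                   # 50 A total
amps_per_wire = total_amps / POSITIVE_WIRES  # ~8.33 A per conductor

print(f"Total current:   {total_amps:.1f} A")     # 50.0 A
print(f"Per 16 AWG wire: {amps_per_wire:.2f} A")  # 8.33 A
```

Chassis-wiring ampacity tables commonly rate 16AWG comfortably above that per-conductor load, which is the poster's point.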