GeForce RTX 4090: Custom graphics card from Lenovo spotted in quad-slot design

Kaarme:

Personally, I'd like to see real development and change. In theory, a single optical fiber could carry 16x PCIe 5.0's worth of data, but I'm not sure if the heat from the conversion components would be a problem. I guess it would depend on whether the fiber ran all the way from the CPU or terminated at a mobo socket that's then connected to the CPU traditionally via copper. Of course, in practice there would also need to be power delivery: 75 W, or however much the PCIe socket is rated for.
I'm not sure what the benefit would be. Fiber is ideal for transmitting high bandwidth over long distances or in electrically noisy environments, but it still has to be converted back to copper on both ends, so it's just added complication and expense for not much gain. I've been saying for a while now that x16 slots are pretty much obsolete, but I think the reason we still see them is that it's more cost-effective to have many "slow" lanes than just a couple of blazing-fast ones. The many lanes also seem useful for letting multiple devices communicate simultaneously; AMD does this for their modern mGPU configurations, for example.

As for the power delivery, I've always found the 75 W figure to be a load of crap. There are barely enough 12 V pins on the ATX power connector to handle a single 75 W card on top of everything else that needs 12 V (excluding the CPU, of course).
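For a sense of scale, here's a minimal back-of-the-envelope sketch in Python. The line rates and encoding overheads are the published PCIe figures; the two-pin 12 V count and the ~6 A per-pin rating are assumptions about a typical 24-pin ATX connector, not spec quotes:

```python
# Back-of-the-envelope PCIe bandwidth and slot-power arithmetic.
# Line rates and encodings are the published PCIe spec figures; the
# ATX 12 V pin count (2) and per-pin current (6 A) are assumptions
# based on typical Mini-Fit Jr terminal ratings.

PCIE_GENS = {
    # generation: (transfer rate in GT/s per lane, encoding efficiency)
    3: (8.0, 128 / 130),
    4: (16.0, 128 / 130),
    5: (32.0, 128 / 130),
}

def lane_bandwidth_gb_s(gen: int) -> float:
    """Effective one-direction bandwidth of a single lane, in GB/s."""
    rate_gt_s, efficiency = PCIE_GENS[gen]
    return rate_gt_s * efficiency / 8  # 8 bits per byte

for gen in PCIE_GENS:
    x1 = lane_bandwidth_gb_s(gen)
    print(f"PCIe {gen}.0: x1 = {x1:.2f} GB/s, x16 = {16 * x1:.1f} GB/s")

# The 75 W slot budget splits into up to 66 W from 12 V plus ~10 W
# from 3.3 V. The 24-pin ATX connector carries only two +12 V pins:
atx_12v_watts = 2 * 6 * 12  # pins * assumed amps per pin * volts
print(f"24-pin ATX 12 V budget: ~{atx_12v_watts} W total")
```

PCIe 5.0 x16 works out to roughly 63 GB/s each way, and under those pin assumptions the 12 V rail of the 24-pin connector supplies only ~144 W for everything besides the CPU, which is the arithmetic behind both points above.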
barbacot:

Some of my colleagues in a lab "played" with this with some success. There is, however, a problem: it's not practical today, because the computer is still an electronic device that uses electronic logic to perform computation, so you need converters... you see the problem. Also, a lot of computer operations are parallel, so you'd need to replicate that parallelism across lots of optical fibers or, again, use converters; not practical. As long as silicon is dominant, it isn't practical or efficient to use fiber optics inside computers. The best-case scenario would be a photonic computer, where the logic itself can input and output light, so no converters are needed, but we are still far away from mass adoption of that technology. It can be done, but I suspect nobody could buy a home PC based on it yet; it's not commercially viable...
Yeah, I reckon in practice, for the foreseeable future, the PCIe lanes from the CPU would remain electric but would be converted to optical near the CPU. However, the components converting the electric signals to light produce heat, and I have no idea how much, or whether it would be enough to cause problems. It would make the mobo simpler, though, to have a relatively small hub of fiber sockets instead of a row of huge PCIe sockets. How many fiber PCIe sockets a mobo would have would depend on the mobo model, naturally.
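To put a rough number on that conversion heat, here's a minimal sketch. The 10 pJ/bit figure is an assumed transceiver efficiency in the ballpark of current pluggable optics, not a measured value, and the payload is the PCIe 5.0 x16 figure from above:

```python
# Rough estimate of the heat from electro-optical conversion on a
# hypothetical fiber PCIe link. ENERGY_PER_BIT_PJ is an assumption
# (modern pluggable optical transceivers land in the pJ/bit range);
# the payload is the PCIe 5.0 x16 figure of ~63 GB/s per direction.

PAYLOAD_GB_S = 63.0        # PCIe 5.0 x16, one direction
ENERGY_PER_BIT_PJ = 10.0   # assumed conversion cost per bit

bits_per_second = PAYLOAD_GB_S * 8e9
watts_per_direction = bits_per_second * ENERGY_PER_BIT_PJ * 1e-12

print(f"~{watts_per_direction:.1f} W per direction, "
      f"~{2 * watts_per_direction:.1f} W full duplex")
```

At that assumed efficiency, the conversion adds on the order of 5 W per direction: real heat, but small next to the GPU itself.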
Kaarme:

Yeah, I reckon in practice, for the foreseeable future, the PCIe lanes from the CPU would remain electric but would be converted to optical near the CPU. However, the components converting the electric signals to light produce heat, and I have no idea how much, or whether it would be enough to cause problems. It would make the mobo simpler, though, to have a relatively small hub of fiber sockets instead of a row of huge PCIe sockets. How many fiber PCIe sockets a mobo would have would depend on the mobo model, naturally.
The problem is that any time you convert from one signal type to another, you lose efficiency. Also, from my scarce knowledge of electro-optics, silicon is EXTREMELY inefficient at converting electrical energy into optical energy (its indirect bandgap makes it a very poor light emitter), which is why LEDs and laser diodes (in pointers, for example) are made from much more expensive compound semiconductors such as GaAs. So, as I said, as long as silicon is predominant in today's hardware, it's extremely inefficient to use it for this, and if you switch to other compounds, they're much more expensive, so not commercially viable.

Back on topic, there's a funny prediction from 1949: "Computers in the future may weigh no more than 1.5 tons." (Popular Mechanics, 1949.) The way graphics cards are evolving today, along with beefy CPU coolers and big steel cases for improved airflow, I think we're getting there.

What's cringe is seeing a water-cooled CPU at maybe 200 watts next to an air-cooled 1,500-watt GPU.

lol at people with 2,000+ for a GPU who won't buy a case with a rotated board mount, which eliminates the need for any GPU support bracket. And not everyone buys a full-size board for OC; I have one because I get 6 M.2 slots, so there's no SATA/power cabling going to storage in the PC, nor do I use or block any PCIe slots (space-wise). Same for cooling: anyone with 2k+ for a single PC part should have the funds to liquid-cool it.

So we can't buy this beauty; it's only for pre-built systems? Damn, what a bummer. It's the best-looking graphics card I've ever seen. https://i.imgur.com/qwWBenU.jpg

1... 2... 3... 4... THIRTEEN FREAKING HEATPIPES!? Oh man, the 600 W rumours are looking frighteningly true. I will hard pass if that's true for both NV and AMD; 250 W is my new upper limit for a GPU.