Phison: Enthusiast PCIe 5.0 SSDs Will Require Active Cooling

Sigh. Suppose it's 15 W max. A decent-sized heatsink should have no trouble with it; just start copying the old northbridge cooler designs, meaning make the heatsinks taller.
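A back-of-the-envelope sketch of why 15 W is awkward for passive cooling (the thermal resistance figures below are assumed, illustrative values, not measurements of any real drive or heatsink):

```python
# Back-of-the-envelope estimate: steady-state temp = ambient + power * thermal resistance.
# All numbers here are illustrative assumptions, not specs for any real drive or heatsink.

AMBIENT_C = 35.0   # assumed warm case-interior temperature
POWER_W = 15.0     # assumed worst-case PCIe 5.0 controller + NAND draw

def steady_state_temp(r_theta_c_per_w: float) -> float:
    """Drive temperature for a given sink-to-ambient thermal resistance (C/W)."""
    return AMBIENT_C + POWER_W * r_theta_c_per_w

for label, r_theta in [("bare M.2 sticker", 8.0),
                       ("low-profile sink", 4.0),
                       ("tall northbridge-style sink", 2.0),
                       ("small fan + sink", 1.0)]:
    print(f"{label:28} ~{steady_state_temp(r_theta):.0f} C")
```

With NVMe controllers commonly throttling somewhere in the 70-90 C range, this rough arithmetic suggests a tall passive sink is borderline at 15 W while a small fan buys real margin, which is roughly Phison's point.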
Should be fine with proper heatsinks. It affects motherboard design (some boards have NVMe slots in places that don't fit much on top) more than anything else, and/or favours a trend towards higher-capacity drives instead of stacking a bunch of 1 TB cards. PCIe-to-NVMe adapter cards (x16 to 4x NVMe) like the ASUS Hyper ones have had fans since PCIe 3.0.
My Gen 4 500 GB Samsung 980 Pro boot NVMe sits right under my GPU and can do everything I need done, including a full AV Defender self-scan, without the need of a heatsink, always running at PCIe 4 speeds. My 960 EVO 250 GB, however, also sans heatsink, cannot complete an AV Defender self-scan without choking and failing; the scan just stops and is aborted because of temps. The interesting thing is that the 980 runs at PCIe 4 while the 960 is limited to PCIe 3 only. So I guess a lot depends on how the NVMe drive is made, wouldn't you say? I've tried to make the 980 choke because of operational temps and can't do it. I was surprised about this myself! I read something somewhere about why this situation exists for the 980 as contrasted with the 960, but of course, just in time for this thread, I've forgotten what I read...;) "Sure you did, Walt. Sure you did."
Mineria:

Around and below one second of load time, which is still less interesting than knowing what DirectStorage can do about the continuous streaming that causes stuttering, and even then it needs to be optimized by game developers to really shine. With the right compression it might take some time before there is any difference that matters between current and next-gen PCIe. Somehow I think that even Windows needs more than DirectStorage, since there is little difference between SATA and NVMe with games up and running; some games even behave worse on PCIe Gen 4 NVMe drives. The main advantage of PCIe 5 drives will be for major workloads: working with raw images, video, and heavy storage operations.
Ab-so-freaking-lutely. I really only see this as a need for production companies and professional users of raw files: cinematographers, photographers, special-effects houses, animators, AI, and military applications. Right now the entire music industry doesn't need more than PCIe 3.0, and even large uncompressed audio files are still smaller than raw 4K video, let alone 8K, so no need is met except for gearheads.
Why bother using it at that point? Time to water-cool our NVMe drives.
Sigh, just the wrong direction: more heat, more wattage. NVMe is in no way ready for the masses, IMO; it's still too expensive compared to a normal SSD and produces too much heat. Now more heat is added to the motherboard; we already have CPU and memory heat, which can get rather hot, and now storage heat too. SATA SSD is where I am staying until the prices, heat, and capacity of these drives come back to earth.
waltc3:

My Gen 4 500 GB Samsung 980 Pro boot NVMe sits right under my GPU and can do everything I need done, including a full AV Defender self-scan, without the need of a heatsink, always running at PCIe 4 speeds. My 960 EVO 250 GB, however, also sans heatsink, cannot complete an AV Defender self-scan without choking and failing; the scan just stops and is aborted because of temps. The interesting thing is that the 980 runs at PCIe 4 while the 960 is limited to PCIe 3 only. So I guess a lot depends on how the NVMe drive is made, wouldn't you say?
Also the bin of the silicon: a poor bin will run hotter. And once it has got very hot, new leakage paths for current to ground form in the silicon, making it run hotter still. This degradation is normal and usually happens far more slowly, but high heat speeds it up significantly. Drives throttle to try to prevent it happening too fast.
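The heat-accelerated degradation described here is commonly modelled with the Arrhenius equation; a minimal sketch, assuming an illustrative 0.7 eV activation energy (not a value for any specific silicon):

```python
import math

K_EV = 8.617e-5   # Boltzmann constant, eV/K
EA_EV = 0.7       # assumed, illustrative activation energy

def acceleration_factor(t_hot_c: float, t_ref_c: float) -> float:
    """How much faster degradation runs at t_hot_c than at t_ref_c (Arrhenius model)."""
    t_hot, t_ref = t_hot_c + 273.15, t_ref_c + 273.15
    return math.exp((EA_EV / K_EV) * (1.0 / t_ref - 1.0 / t_hot))

print(acceleration_factor(85.0, 45.0))   # ~17x faster with these assumptions
```

The exact factor depends entirely on the assumed activation energy, but the shape of the curve is why sustained high temperatures matter far more than brief spikes.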
tsunami231:

Sigh, just the wrong direction: more heat, more wattage. NVMe is in no way ready for the masses, IMO; it's still too expensive compared to a normal SSD and produces too much heat. Now more heat is added to the motherboard; we already have CPU and memory heat, which can get rather hot, and now storage heat too. SATA SSD is where I am staying until the prices, heat, and capacity of these drives come back to earth.
One problem is the size of the chip as they are made on ever-smaller nodes. Previously there was enough die area to cool passively without a heatsink, but a much smaller area cannot shed the heat without help. It doesn't help that they push modern chips harder to get higher operating frequencies without using better silicon, moving up the exponential power/frequency curve.
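That curve is usually approximated as dynamic power P ≈ C·V²·f; since voltage has to rise roughly with frequency to keep the chip stable, power grows close to the cube of clock speed. A toy illustration under that first-order assumption:

```python
# Toy dynamic-power model: P ~ C * V^2 * f.
# Assuming voltage scales linearly with frequency (a first-order
# approximation, not a measured V/f curve), power goes as f^3.

def relative_power(freq_ratio: float) -> float:
    """Power vs baseline when frequency (and, proportionally, voltage) rises."""
    v_ratio = freq_ratio
    return v_ratio ** 2 * freq_ratio

print(relative_power(1.3))   # ~2.2x the power for a 30% clock bump
```

So a modest controller clock bump costs disproportionately more heat, and the shrinking die area gives that heat less surface to escape through.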
Mufflore:

One problem is the size of the chip as they are made on ever-smaller nodes. Previously there was enough die area to cool passively without a heatsink, but a much smaller area cannot shed the heat without help. It doesn't help that they push modern chips harder to get higher operating frequencies without using better silicon, moving up the exponential power/frequency curve.
And using 3D manufacturing: more heat is trapped in the chip because the stacked semiconductor layers are poor at heat exchange, even though you get more cells per mm². Planar TLC is way easier to cool than stacked 3D NAND.
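One way to picture the trapped heat: the thermal resistances of stacked layers add in series, so the buried decks sit well above the package-surface temperature. A sketch with made-up per-deck numbers (every value is an illustrative assumption, not real NAND geometry):

```python
# Series thermal resistance through a 3D NAND stack:
# T_buried = T_surface + P * (decks * R_per_deck).
# Every number below is an illustrative assumption, not a measurement.

R_PER_DECK_C_PER_W = 0.1   # assumed resistance added by each deck
SURFACE_C = 60.0           # assumed package-surface temperature
POWER_W = 2.0              # assumed heat generated in the buried layers

for decks in (2, 32, 96, 176):
    t_buried = SURFACE_C + POWER_W * decks * R_PER_DECK_C_PER_W
    print(f"{decks:3d} decks -> buried layer ~{t_buried:.1f} C")
```

The absolute numbers are invented, but the trend is the point: every added deck inserts another resistance between the hottest cells and the heatsink.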
This problem seems entirely down to M.2 being chosen over 2.5" U.2 for consumer platforms. Enterprise SSDs can pull more power than the upcoming client M.2 ones, but since they have proper heat dissipation there aren't any issues. There are some forthcoming EDSFF drives that are supposed to be the best of both worlds, but I'm not sure they could be adapted for consumer use. At the very least I'd like to see the M.2 slots moved away from being sandwiched between the CPU and GPU. I currently use a 2.5" M.2-to-U.2 adapter; however, I don't see much likelihood of future consumer-level boards having U.2 ports.