Micron Starts Volume Production of 1z nm DRAM - 16 Gigabytes of RAM in a single package

With such advances in memory, I personally would really like to see nothing but SO-DIMMs for DDR5. We're reaching a point where these full-length DIMMs are unnecessary; they just needlessly take up more space on the motherboard. I'm actually a little surprised server motherboards haven't started to switch to ECC SO-DIMMs, now that we're getting up to 16 channels of memory on a single board.
I think it is 16 Gb = 16 gigabits, not 16 gigabytes @hh
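Since the Gb/GB mixup comes up constantly, the arithmetic is worth spelling out. A quick sketch in Python; the eight-die figure is just what the math implies for a 16 GB package, not something stated in the article:

```python
BITS_PER_BYTE = 8

# Micron's new chip is 16 Gb (gigabits) per die, not 16 GB (gigabytes).
die_capacity_gbit = 16
die_capacity_gbyte = die_capacity_gbit / BITS_PER_BYTE  # 2.0 GB per die

# So a 16 GB package would imply a stack of eight such dies.
dies_per_16gb_package = 16 / die_capacity_gbyte  # 8.0
print(die_capacity_gbyte, dies_per_16gb_package)
```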
schmidtbag:

With such advances in memory, I personally would really like to see nothing but SO-DIMMs for DDR5. We're reaching a point where these full-length DIMMs are unnecessary; they just needlessly take up more space on the motherboard. I'm actually a little surprised server motherboards haven't started to switch to ECC SO-DIMMs, now that we're getting up to 16 channels of memory on a single board.
Nah, I think it would be cooler if RAM modules tripled in length and height. More surface space for LEDs & RGB. Mwahahahahaha!!
angelgraves13:

I think it’s time that GPUs got a socket on a motherboard instead of this dedicated card taking up slots. It would be easier to cool just like a cpu
No point; you'd be changing motherboards every GPU generation, and no, it wouldn't be easier to cool.
That's some technical ignorance on your part. 384-bit GPUs need more pins than 256-bit GPUs, for one. Then there's physical feature addition: for example, Turing added extra pins for the USB-C port and the power input for it.
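To put numbers on why bus width matters: peak memory bandwidth scales linearly with the number of data lines, which is why a 384-bit GPU genuinely needs more data pins than a 256-bit one. A rough sketch; the 14 GT/s data rate is a hypothetical GDDR6 figure, not a spec for any particular card:

```python
def peak_bandwidth_gbps(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) x data rate (GT/s) / 8."""
    return bus_width_bits * data_rate_gtps / 8

# Same hypothetical 14 GT/s memory on two common bus widths:
print(peak_bandwidth_gbps(256, 14.0))  # 448.0 GB/s
print(peak_bandwidth_gbps(384, 14.0))  # 672.0 GB/s
```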
Oversimplification is only something someone who doesn't "know" would do, and then they pull the "are you really going to argue X/Y" card. MXM is a thing, and even those modules aren't compatible with anything but the notebook they were designed for.
There are many reasons for not having GPU sockets; too many things change from generation to generation for that to be possible. Also:
angelgraves13:

Are you really going to argue over specs of a non existent socket? They both access the same pci express cpu lanes.
This doesn't make sense. What you just said amounts to "we may as well keep using the PCIe slot," which is the current standard.
schmidtbag:

With such advances in memory, I personally would really like to see nothing but SO-DIMMs for DDR5. We're reaching a point where these full-length DIMMs are unnecessary; they just needlessly take up more space on the motherboard. I'm actually a little surprised server motherboards haven't started to switch to ECC SO-DIMMs, now that we're getting up to 16 channels of memory on a single board.
That's like saying "thanks to smaller bezels, big-bezel 50" screens are now unnecessarily big." Yeah, sure, but now you can fit a 60" screen in that old big-bezel 50" footprint, and so it goes.

So your phone now has 32GB of RAM? Great, but now that you're the mainstream, I need to quadruple that to be worry-free.

On the other hand, SO-DIMM-only but full ATX boards could have 8 slots? I guess that way we all get what we want.
angelgraves13:

But the socket would remain the same. Yes, it would be easier to cool: AIO closed loop with a radiator.
You would run into massive memory bandwidth issues. This is why we have add-in boards: they have custom memory chips and memory channels, not to mention their own power delivery. All of this would now have to move to the motherboard itself, which would take up too much room and basically get in the way of other PCIe slots, the chipset, SATA, and sound chips. Adding another socket for a GPU would basically get rid of ITX boards entirely, and probably even mATX.
EspHack:

That's like saying "thanks to smaller bezels, big-bezel 50" screens are now unnecessarily big." Yeah, sure, but now you can fit a 60" screen in that old big-bezel 50" footprint, and so it goes.
And? You say that like it's a bad thing. If you can fit more of what you bought in the same amount of physical space with no negative side effects, what exactly are you complaining about? If you can fit more pixels in the same footprint, why wouldn't you? If you can fit more memory in the same footprint, why wouldn't you? My point is full-size DIMMs are physically larger than they need to be.
So your phone now has 32GB of RAM? Great, but now that you're the mainstream, I need to quadruple that to be worry-free.
Huh? Not sure what you're getting at here... Phones don't use DIMMs, and just because something becomes mainstream, that doesn't mean you have to worry about out-pacing it; that's why it's called mainstream and not cutting-edge. Mainstream standards aren't meant to be obsoleted so easily. Besides, RAM usage for day-to-day applications hasn't changed much in years. The average user can still easily get by on 8GB, even with Chrome. If you aren't much of a multitasker, 4GB is still enough.
SweenJM:

It would seem a logical step, and yet they haven't done it yet...i wonder why.
SO-DIMM mounting is limiting. They are typically mounted sideways in laptops due to height constraints, which means if you have two slots, they overlap each other. This layout is very prohibitive and would probably be impractical for more than 2 SO-DIMM slots. The sideways layout is also not ideal for heat dissipation on higher-end modules, as airflow would be rather restricted. Mounting the same modules standing up would be problematic due to height constraints with CPU coolers, since SO-DIMM modules offset their lack of width with extra height. Standard DIMM slots are quite fitting for desktop or server systems, since width is not a problem there and they easily scale in number of slots while keeping a low height profile that doesn't obstruct other components.
angelgraves13:

I think it’s time that GPUs got a socket on a motherboard instead of this dedicated card taking up slots.
I've been thinking about similar issues for a while, and frankly I feel the solution is the opposite way around, i.e. mounting the CPU and memory slots on a daughterboard. Coming from the Amiga, that was the solution that allowed relatively static hardware to accept ever newer upgrades. Today, in comparison, the modular PC we all love has been hampered by developments in processor and memory standards, while the base interfacing technology has been standardized and stagnant.

The GPU is actually using the better solution: allowing any physical design of the chips, and connecting local memory to the system through a standardized interface, rather than suffering from staggered generational upgrades (CPU sockets, memory interfaces). If we assume that current trends of tying memory ever closer to the processing cores to maximize efficiency continue, whether through something like HBM, on-package, or on-die solutions, it would only get easier to implement on a daughterboard.

*shrug* I mean, it would to a large extent murder the current business model of motherboard manufacturers and chipmakers both, as well as requiring a genuine generational shift away from ATX, so I can see why it wouldn't happen. But in my mind, it's the CPU attachment mechanism of current motherboards that's flawed.
SweenJM:

What would be involved in the change from dimm to so-dimm for non-laptops? Obviously the slots themselves, and the wiring for them (i guess the pin count from ddr4 to 5 isnt going to change for so-dimm or dimm)...but are there any other considerations for making ddr5 all so-dimm? Any performance considerations, or challenges to overcome with the manufacture of the motherboards? It would seem a logical step, and yet they haven't done it yet...i wonder why.
Well, as of right now there aren't really any high-performance SO-DIMMs, but that's mostly because there simply isn't a market for them. Servers care about ECC (which never clocks high), and laptops don't have the power, thermals, or ability to OC to get high speeds. Otherwise, SO-DIMMs for the most part have much the same capacities as their DIMM counterparts, they can be slotted the same way, and as far as I'm aware they're not electrically different in any substantial way. There are already desktop boards with SO-DIMM slots; they're just rare. I know ASRock uses them for some of their HEDT mATX and ITX boards, for example. There are also a few "ultra-slim" ITX boards that use them too.
schmidtbag:

My point is full-size DIMMs are physically larger than they need to be.
My point is that this statement is outright insane, unless you meant replacing the DIMMs with double the number of SO-DIMM slots, which sounds pretty good imo.
EspHack:

My point is that this statement is outright insane, unless you meant replacing the DIMMs with double the number of SO-DIMM slots, which sounds pretty good imo.
SO-DIMMs can have the same clock speed, latency, and capacity as DIMMs (they often don't, but there's not much preventing them from doing so). They can be buffered or have ECC. They're just simply smaller; that's it. So, mind explaining what's so insane about having something smaller with the same capabilities, especially if you're not making any sacrifices? Full-size DIMMs just take up more space on a motherboard with no inherent advantages, other than maybe offering higher capacities (depending on existing IC density). What's so insane about having everything you want and need in a smaller package, with no deficits?