Intel shows Alder Lake SSD setup passing 28 GB/s over PCIe 5.0 in RAID-0

Does Alder Lake actually have 5.0 lanes for the NVMe? I thought it was 20 lanes, 16+4 in a 5.0+4.0 config.
cucaulay malkin:

Does Alder Lake actually have 5.0 lanes for the NVMe? I thought it was 20 lanes, 16+4 in a 5.0+4.0 config.
You are right to be cautious, because we clearly aren't in the "gg ez" X570 PCIe 4.0 situation where I could plug whatever into the PCIe slots and it worked. Quoting the manual of the Asus Z690 Extreme Glacial 😱: you only have x16 PCIe 5.0 lanes available, so the configs are like this:
1) PCIe 5.0 slot 1 at x16, with nothing in PCIe slot 2 nor in M.2_1
2) PCIe 5.0 slot 1 at x8 and slot 2 at x8, with nothing in M.2_1
3) PCIe 5.0 slot 1 at x8 with M.2_1 at x4 (the remaining x4 is lost; you cannot use the 2nd PCIe slot in this config, it gets disabled)
Everything else on this particular motherboard gets its bandwidth from the chipset (some might come from the CPU, I didn't really check, I don't care): M.2_2 PCIe 4.0, M.2_3 PCIe 4.0, DIMM.2 M.2_4 + M.2_5 both PCIe 4.0, and a PCIe 3.0 x1 slot (for a sound card, for example).
Honestly it would have been smarter to drop the speeds to 4.0 and be able to use all the PCIe slots + M.2_1 together, but I can understand it: you don't sell a sports car and tell the buyer "for better balance we removed the highest gear, so you will never reach top speed" o_O
cucaulay malkin:

Does Alder Lake actually have 5.0 lanes for the NVMe? I thought it was 20 lanes, 16+4 in a 5.0+4.0 config.
Certain Z690 motherboards (like the poppin', showstoppin' Asus Z690 Maximus Hero) are designed with 2 PCIe Gen 5 slots that can run one in x16 mode or bifurcated into x8 for each slot, and come with a bundled M.2 expansion card that has at least 1 PCIe Gen 5 M.2 slot on it. Even my Prime Z690-A has an option to bifurcate the single PCIe 5.0 slot, but I assume that's for when you have a riser cable or add-in card installed that splits the 16 Gen 5 lanes into 2 separate x8 connections.
HEDT Alder Lake when?
I never thought I'd see the day where RAM might actually be bottlenecking storage. 28GB/s is faster than DDR4 3200. We're honestly reaching a point where RAM is only necessary to store temporary data. But even then, caches are getting so large that RAM is beginning to not be needed for that either. Before anyone gets pedantic and tells me that RAM has always and only been used for temporary data: I'm talking about data that isn't found on a disk. When you run a binary, you're loading the data (whether necessary or not) into RAM simply because RAM is faster, but if the CPU could access the program data directly from the drive and not suffer any performance loss as a result, that would make for a more efficient system.
This could enable a lot of cool new features too, like being able to run out of RAM without losing stability, since all binaries would be loaded directly and only the temporary data would be in RAM. It could also allow for hibernation without the wait or the use of a large swap/paging file. It could allow applications to be more easily cryofrozen, where you can save their state and restore them at any time. It could reduce the cost of a PC, since the average person would likely only need 2-4GB of RAM, and you can get by with cheaper storage if you just keep RAID'ing. Exciting times are ahead.
schmidtbag:

I never thought I'd see the day where RAM might actually be bottlenecking storage. 28GB/s is faster than DDR4 3200.
DDR4's latency is 100-1000x lower than NVMe's (~50 ns vs ~20 µs).
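A quick back-of-envelope on that gap, using the round numbers above (a small Python sketch, purely illustrative, not measured values):

dram_latency_ns = 50          # typical DDR4 access latency, order of magnitude
nvme_latency_ns = 20_000      # roughly 20 us for an NVMe random read
print(nvme_latency_ns / dram_latency_ns)   # -> 400.0, i.e. a few hundred times slower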
schmidtbag:

I never thought I'd see the day where RAM might actually be bottlenecking storage. 28GB/s is faster than DDR4 3200.
That must be the world's slowest 3200 DDR4... Anyway, RAM will never be the bottleneck. This bench is a max sequential read speed. I doubt it's much faster than what we already have for small file sizes, in which case RAM will be many times faster to access.
Meanwhile, Intel is asking motherboard OEMs to cut off AVX-512 support.
Not having all 20 CPU lanes as PCIe 5.0 was a huge missed opportunity IMO. The RTX 3090 doesn't even fully saturate PCIe 3.0 x16, so 5.0 likely won't make any difference for GPUs even when next gen Lovelace and RDNA3 cards arrive. Hopefully 700 series boards support PCIe 5.0 for the x16 GPU slot AND the top x4 M.2 slot.
cucaulay malkin:

Does Alder Lake actually have 5.0 lanes for the NVMe? I thought it was 20 lanes, 16+4 in a 5.0+4.0 config.
In an ideal world it should, as the wiring is the same and the change is on the CPU side. But it might be like PCIe 4.0 on AMD B450: in theory it should work, but the beta BIOSes showed that a lot of motherboards can't do it (so later BIOSes went back to 3.0). On my MSI it was working at 4.0 until I needed to upgrade the BIOS for Ryzen 5000. So we will see once people have tried it 🙂
DrearierSpider:

Not having all 20 CPU lanes as PCIe 5.0 was a huge missed opportunity IMO.
It also depends on how the board maker works; sometimes Asus, ASRock and Gigabyte get inspired... and do weird lane dispatching like what they do on server boards.
rl66:

But it might be like PCIe 4.0 on AMD B450: in theory it should work, but the beta BIOSes showed that a lot of motherboards can't do it (so later BIOSes went back to 3.0).
b450 has no 4.0 at all
cucaulay malkin:

b450 has no 4.0 at all
It did, if you got the beta BIOS from before the Ryzen 3000 series, when AMD had announced that it might be possible. With that BIOS some boards could have it on PCIe and M.2, some only on M.2, some only on PCIe, and some just couldn't (bad speed or bandwidth, crashes, lots of fun lol). So AMD said "well, in the end it's not possible" and removed it from the BIOS releases. I kept that BIOS for some time before dropping a 5600 in to use it as a media PC, which meant I needed a new BIOS that doesn't have PCIe 4.0. This example shows that in theory it works, but you can only be sure it works at release (and sometimes you end up upset).
schmidtbag:

I never thought I'd see the day where RAM might actually be bottlenecking storage. 28GB/s is faster than DDR4 3200.
Ryzen 1000 with 3200 memory is already into the 40 GB/s range, so I have had faster performance than that for almost 5 years. Then there is the price difference: these modules are not released yet, but my guess is 1000€ per drive, compared to 100€ for the memory kit. DDR5 is nearing 100 GB/s with plug-and-play XMP; it's still pricey, but it will still be cheaper than RAID 0 with 2 of these drives. There is also the latency, as Krizby writes.
rl66:

It did, if you got the beta BIOS from before the Ryzen 3000 series, when AMD had announced that it might be possible.
But Z690 doesn't need a beta BIOS for 5.0.
Agent-A01:

That must be the world's slowest 3200 DDR4..
TLD LARS:

Ryzen 1000 with 3200 memory is already into the 40 GB/s range, so I have had faster performance than that for almost 5 years.
I think we're each making a mistake here, as I'm referring to single-channel DDR4. Go ahead and check multiple sources: a single channel of DDR4-3200 definitely isn't 40 GB/s+. That being said, I acknowledge most systems have more than just a single channel.
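For reference, the theoretical peaks work out like this (a rough Python sketch; these are bus peaks, and DDR5-6400 is just picked as an example speed, so real measured numbers land somewhat lower):

def peak_gb_s(mt_per_s, channels=1):
    # transfers per second x 8 bytes per 64-bit channel, expressed in GB/s
    return mt_per_s * 8 * channels / 1000

print(peak_gb_s(3200))               # 25.6 GB/s, single-channel DDR4-3200
print(peak_gb_s(3200, channels=2))   # 51.2 GB/s, dual-channel (~40 GB/s in practice)
print(peak_gb_s(6400, channels=2))   # 102.4 GB/s, dual-channel DDR5-6400
# So 28 GB/s of sequential RAID-0 reads does beat the 25.6 GB/s single-channel peak.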
Anyways, RAM will never be the bottleneck.
It literally can be. In a single-channel system, RAID-0 another drive on top and, in sequential reads, the RAM will be the bottleneck. Granted, that is a weird situation to be in, having such a high-performance drive and such low-performance memory. The point is, RAM can be a bottleneck. And if not the RAM, the memory controller itself. Hilbert has shown many times how memory controllers can be a major hindrance to performance.
I doubt it's much faster than what we have already for small file sizes in which case RAM will be many times faster at accessing.
There is also the latency, as Krizby writes.
This much is true, but both of you are forgetting that RAM latency isn't 0 ns, and because everything must go through RAM to enter or leave the disk, part of the disk's higher latency is actually the RAM's latency as well. So if you were to bypass RAM, the disk latency would drop, albeit only a little bit. But wait, there's more! Right now when you run a program, you're dumping large swaths of data into RAM whether they're needed or not. Ideally, the CPU could grab only the data it needs directly from the drive, which would often be only a few kilobytes. So, by skipping RAM, less data needs to be handled and there is one less intermediary device. Today's drives will obviously still have a latency problem, but where latency actually matters, that data could be loaded into RAM. RAM will never go away, because as I said before, there is stuff stored in RAM that will never end up in permanent storage.
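For what it's worth, memory-mapped files are the closest thing today's OSes have to that "grab only the bytes you need from the drive" model: pages are faulted in on access instead of the whole file being copied up front (though they still land in the page cache, i.e. RAM, so this is only an illustration of the idea). A minimal Python sketch, with a made-up file name:

import mmap

with open("big_binary.dat", "rb") as f:                     # hypothetical file
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        header = m[:64]                      # faults in only the first 4 KB page
        chunk = m[10_000_000:10_000_064]     # faults in a single page ~10 MB in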
Then there is the price difference: these modules are not released yet, but my guess is 1000€ per drive, compared to 100€ for the memory kit. DDR5 is nearing 100 GB/s with plug-and-play XMP; it's still pricey, but it will still be cheaper than RAID 0 with 2 of these drives.
Well yeah, but as with most big leaps in technology, it's going to be expensive. This level of performance isn't going to cost this much forever, and it will only continue to speed up. Also, if you're going to factor cost into this, the memory kit has a small fraction of the capacity. Good luck getting ~2TB of RAM (even DDR4) for under 1000; you have to factor in the additional cost of a system capable of holding that much RAM too, as an LGA 1700 build sure can't, so you'd have to go bigger. This isn't me nitpicking: if you were to have storage with performance comparable to RAM, which in turn allows you to have less RAM, but with far more capacity than RAM, the value proposition becomes much grayer depending on your workload.
DDR5 bandwidth is also showing very little improvement in real-world tests in most (not all) cases. Where DDR5 has practicality is iGPUs and the ever-growing core counts in servers. Desktop PCs will not see much benefit from DDR5 for several years, as a lot of optimization is still necessary. I say this because even though storage isn't likely to bottleneck RAM in a modern enthusiast PC, it doesn't really matter how much bandwidth your RAM has if nothing demands it.
schmidtbag:

I think we're each making a mistake here, as I'm referring to single-channel DDR4
Yes, 2 drives in RAID 0 will have around the same performance as a shitty-specced Ryzen 5600H laptop with single-channel memory, but the laptop will probably cost half of what the 2 drives cost, and then comes the requirement of at least 3 PCIe slots if you also want to make room for a GPU. Threadripper, Intel 2066 and 1700 are knocking on 100 GB/s, so you would still need 6-8 of these drives in RAID 0 to match the memory speed. I would advise running mirrored RAID for data security, because more than 2 drives in RAID 0 sounds a bit scary to me, so that would potentially be 12-16 drives.
I would like to see if these drives are able to keep the speed up over the entire drive; this could still be drive-buffer numbers, so real-world performance could be much slower.
These drives will become cheaper and cheaper, but so will memory; I do not see why these drives should suddenly overtake memory in size/price. Realistically, memory speed will always be at least twice the speed of these drives in a practical, realistic system, so the CPU would get data twice as fast if it is able to access memory and the drive at the same time. So removing memory would still halve the bandwidth the CPU is able to read and write. There will be situations where these drives are fast enough to replace memory, but why use expensive storage if cheap storage with a fast buffer is good enough for almost everything?
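For reference, the drive-count estimate lines up with the figures in this thread (14 GB/s per drive, derived from the 28 GB/s two-drive RAID-0 demo); a rough sketch:

import math

per_drive_gb_s = 28 / 2        # 14 GB/s per drive, from the two-drive RAID-0 demo
target_gb_s = 100              # memory bandwidth the HEDT platforms are "knocking on"

stripes = math.ceil(target_gb_s / per_drive_gb_s)
print(stripes)        # 8 drives in RAID 0 to match ~100 GB/s
print(stripes * 2)    # 16 drives if every stripe is also mirrored (RAID 10)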
TLD LARS:

but the laptop will probably cost half of what the 2 drives cost, and then comes the requirement of at least 3 PCIe slots if you also want to make room for a GPU.
Again, you're kinda missing the point here. By your logic, that's like asking why anyone would buy a Ferrari when the average pickup truck has more power at a small fraction of the cost. The power isn't used the same way. The storage the laptop comes with is probably some bottom-bin 256GB (if even that high) drive with mediocre performance. That laptop isn't meant to be a workhorse, it's just meant to do everyday tasks. The only people who buy crazy-fast SSDs are those who either have money to burn for bragging rights, or people who are handling oodles of data at high speeds and need to be able to manage it. Soooo, if you already have a need for such storage, why not use that level of performance to bypass RAM? Keep in mind that reading data directly from the disk while also having RAM would basically act as having another whole DDR4 channel, so overall performance in some cases may improve.
Threadripper, Intel 2066 and 1700 are knocking on the 100GB/s, so you would still need 6-8 of these drives in raid 0 to match the memory speed.
Not only is that attainable, but as I alluded to before, this is just the beginning. Drives are only going to continue to get faster. RAM will continue to get faster too, but like I said before, there's a point where the extra bandwidth doesn't really matter.
I would like to see if these drives are able to keep the speed up over the entire drive, this could still be drive buffer numbers, so real world performance could be much slower.
Yes, that had occurred to me.
These drives will become cheaper and cheaper, but so will memory, I do not see why these drives should suddenly overtake memory in size/price.
If you weren't aware, RAM isn't permanent storage, and it is far more expensive per byte. SSDs overtook memory in size and price a long while ago. The whole point here is that SSDs are rapidly catching up to RAM levels of performance. They're not quite there, but it won't be too long until they are, hence my original post.
Realistically, memory speed will always be at least twice the speed of these drives in a practical, realistic system, so the CPU would get data twice as fast if it is able to access memory and the drive at the same time. So removing memory would still halve the bandwidth the CPU is able to read and write.
I never said anything about removing memory outright. I'm not even suggesting removing memory channels. I'm saying that since storage is approaching the speeds of a single DDR4 DIMM, it can basically act as another memory channel, where instructions from permanent storage can be read directly. If storage bandwidth can start to keep up with what the software demands, you are overall losing performance (but most of all, efficiency) by loading data from disk into RAM. As I've said multiple times, real-world programs don't tend to demand that much bandwidth, and you need a whole lot less bandwidth when you're only taking what you need and not dumping the entire file.
There will be situations where these drives are fast enough to replace memory, but why use expensive storage if cheap storage with a fast buffer is good enough for almost everything?
Where did I ever say otherwise? Considering the price of these drives, it's implied that such a system would only be done for high-end workstations or enthusiast builds. In some cases, what is available isn't "good enough", which is exactly why there was a push for DDR5.
schmidtbag:

Again, you're kinda missing the point here.
The write speed is 6600 MB/s peak per drive; that is DDR2 speed. If I buy one more of my Samsung 970 from 2019 and RAID them, I should be nearing 7000 read and 6600 write peak speeds, so already close to that write speed at least. The price of the old-generation PM1733 is already 1000€ for the close-to-4TB version. The new gen is twice as fast, so 1500€ is a fairly realistic price guess. 3000€ for 2 drives is the same price as 64GB of DDR4 memory plus a Founders RTX 3090 (if you are lucky enough to find one), and that would be a much more balanced hobby work PC.
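That estimate is about right for two 970-class drives striped together, assuming near-ideal RAID-0 scaling on large sequential transfers (a sketch using the approximate rated per-drive speeds):

drive_read_mb_s, drive_write_mb_s = 3500, 3300    # roughly Samsung 970-class rated speeds
print(2 * drive_read_mb_s)     # ~7000 MB/s sequential read
print(2 * drive_write_mb_s)    # ~6600 MB/s sequential write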
TLD LARS:

The write speed is 6600 MB/s peak per drive; that is DDR2 speed.
What's your point? If the CPU is reading instructions directly from the drive, then the write speed is irrelevant. The only data that would be modified/written is temporary data (like variables or objects), which, as I've already stated, would still be stored in RAM. The whole point of the idea is that unchanging/static data that is already on a disk (like a compiled binary) would be read directly from the drive by the CPU, bypassing RAM entirely. Such instructions/information doesn't change while it's in RAM; loading the whole file into RAM whether it is needed or not just wastes time. The only reason RAM exists is that storage has always been too slow to feed a CPU instructions, so it's easier to just dump the whole thing into RAM and then the CPU doesn't have to wait for anything. But if read speeds are comparable to RAM and the instructions you're giving the CPU are static, then you don't need RAM to run your programs' code. To reiterate: RAM would still be needed for the temporary data.
The price of the old-generation PM1733 is already 1000€ for the close-to-4TB version. The new gen is twice as fast, so 1500€ is a fairly realistic price guess. 3000€ for 2 drives is the same price as 64GB of DDR4 memory plus a Founders RTX 3090 (if you are lucky enough to find one), and that would be a much more balanced hobby work PC.
Again, not sure what your point is, since hobby work PCs aren't part of the discussion. Computers currently aren't built to bypass RAM in this way, so conjuring up hypotheticals of what the average hobbyist today is doing is rather moot. All new technologies take a lot of time, and money. I never said this was going to be something that you and I were going to be doing any time soon, but just that we are now approaching storage speeds that may reshape the way we build computers. This isn't a new concept, as ReRAM (which is a different technology than just RAID'ing flash storage) has already paved the way for such ideas.