MSI PCI-E 4.0 Motherboards For Speed Ascending


I keep forgetting my MSI motherboard has one of those add-on cards and that I should actually use it. ^^ After seeing those results, maybe I should try to get a couple of PCIe 4.0 NVMe drives next year anyhow, since I use Vegas Pro often. Not sure if I will RAID them though, since I could use the space. :>

RAID 0 does not lose space, it only speeds things up, but all data is gone if one SSD breaks, so an external backup is advised.

You know what impresses me the most about those RAID 0 results? It's the fact that 4KQ1T1 was not impacted all that much on the read side and actually improved on the write side.
Lily GFX:

I keep forgetting my MSI motherboard has one of those add-on cards and that I should actually use it. ^^ After seeing those results, maybe I should try to get a couple of PCIe 4.0 NVMe drives next year anyhow, since I use Vegas Pro often. Not sure if I will RAID them though, since I could use the space. :>
Asus as well. Here are 4 905P in RAID 0 via Asus's 4X card: https://i.imgur.com/8sCtdLb.jpg Things are going to get very interesting once we have some reasonably priced PCIe 4.0 SSDs with next-gen random read/write. Sequential speeds have exploded since the ~550 MB/s we had with SATA III, but random read/write has not enjoyed the same growth, at least in the NAND world.

I just took a quick look around; it looks like the best scores for the 860 Evo in CDM are 560 for sequential read and 48 for 4KQ1T1 read. Contrast that with the 980, where the best you see is 7150 for sequential read and 88 for 4KQ1T1 read. This means we have achieved a 12.75X gain in sequential read performance but only a 1.83X gain in 4KQ1T1 read performance. Even a single 905P Optane drive with an OCed CPU and mitigations disabled (both important to max out Optane) only gets you to 280 4KQ1T1 read, a 5.83X gain over SATA.
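Those multipliers are easy to sanity-check from the CDM scores quoted above. A rough back-of-the-envelope in Python (the drive numbers are the ones from this post; note 7150/560 actually rounds to 12.77X rather than 12.75X):

```python
# Best-case CrystalDiskMark scores quoted above, in MB/s.
sata_seq, sata_4k = 560, 48     # 860 Evo (SATA III)
nvme_seq, nvme_4k = 7150, 88    # 980 (PCIe NVMe)
optane_4k = 280                 # 905P, OCed CPU, mitigations off

print(f"sequential read gain: {nvme_seq / sata_seq:.2f}X")   # 12.77X
print(f"4KQ1T1 read gain:     {nvme_4k / sata_4k:.2f}X")     # 1.83X
print(f"Optane 4KQ1T1 gain:   {optane_4k / sata_4k:.2f}X")   # 5.83X
```

The point of the comparison holds either way: sequential throughput has scaled roughly an order of magnitude faster than low-queue-depth random reads.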
I have neither the need nor will I ever have the need for this kind of speed.......but still I WANT IT! 🙂
Wanya:

RAID 0 does not lose space, it only speeds things up, but all data is gone if one SSD breaks, so an external backup is advised.
Sorry, my mistake, you are right; I was thinking of RAID 1 ^^
nosirrahx:

You know what impresses me the most about those RAID 0 results? It's the fact that 4KQ1T1 was not impacted all that much on the read side and actually improved on the write side. Asus as well. Here are 4 905P in RAID 0 via Asus's 4X card: https://i.imgur.com/8sCtdLb.jpg Things are going to get very interesting once we have some reasonably priced PCIe 4.0 SSDs with next-gen random read/write. Sequential speeds have exploded since the ~550 MB/s we had with SATA III, but random read/write has not enjoyed the same growth, at least in the NAND world.
Very nice result there ^^ I kinda hope reasonably priced PCIe 4.0 SSDs/NVMe will appear soon too, and you are right about the random read/write; I also hope those will improve soon. I think AMD still has some work to do with their RAID solution, but Intel does this very well already. Hopefully they are more even there soon too. Maybe we will see more improvement in random read/write when they are close to the same performance on everything. ^^
Hmm some pretty good speeds there.
Lily GFX:

I think AMD still has some work to do with their RAID solution, but Intel does this very well already. Hopefully they are more even there soon too. Maybe we will see more improvement in random read/write when they are close to the same performance on everything. ^^
Go with a SATA boot drive and then opt for software RAID for your workloads. Firmware RAID is often pretty crappy, over-complicated, and not easily portable if you need to replace your system. Really, the only reason for firmware RAID these days is if you intend to boot Windows (and specifically Windows) from it. Going NVMe doesn't make a drastic difference for loading software either, which is why I suggest you cheap out on SATA for booting. Though if you really want to squeeze every last bit of performance into your system, you could do a separate NVMe drive for booting too. As a bit of a tangent, hardware RAID also doesn't seem especially useful in workstations anymore either. CPUs via software RAID are fast enough to pretty much obsolete them.
A clever drive and file arrangement and, as a result, a backup that is much easier to organize is generally a must. To think that a RAID 0 is more dangerous than a single drive is theoretical nonsense in my opinion. I started to use 'mainboard controlled' RAID 0 in my all-in-one workstations a long time ago (even as a system drive with multiple system partitions), in addition to various other internal and external drives: 2 x 256 GB HDDs -> 2 x 512 GB HDDs -> 2 x 2 TB HDDs -> + 2 x 512 GB SSDs. All drives worked for far more than 50,000 hours. For the individual SSDs, two drives in a RAID 0 even means each one writes only half of the data (theoretically). In my mind, most Windows systems are so badly (mainstreamishly) configured that this and a RAID 0 do not matter anyway. 😉
Freitlein:

A clever drive and file arrangement and, as a result, a backup that is much easier to organize is generally a must. To think that a RAID 0 is more dangerous than a single drive is theoretical nonsense in my opinion. I started to use 'mainboard controlled' RAID 0 in my all-in-one workstations a long time ago (even as a system drive with multiple system partitions), in addition to various other internal and external drives: 2 x 256 GB HDDs -> 2 x 512 GB HDDs -> 2 x 2 TB HDDs -> + 2 x 512 GB SSDs. All drives worked for far more than 50,000 hours. For the individual SSDs, two drives in a RAID 0 even means each one writes only half of the data (theoretically). In my mind, most Windows systems are so badly (mainstreamishly) configured that this and a RAID 0 do not matter anyway. 😉
The only practical use case of RAID 0 is for heavily demanding sequential reads and writes. That being said, I find the use of it very niche, since with modern NVMe speeds, there aren't too many applications that have such heavy throughput. The only thing I can think of that might actually saturate a PCIe 4.0 M.2 drive is high-speed + high-res cameras. Otherwise, you'll actually likely lose performance if you RAID 0 SSDs, due to the additional processing to stripe the data. If what you really need is just one gargantuan disk, JBOD is a better choice, since it is less complex and I think (I may be mistaken) if a drive fails, you don't lose everything.
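Conceptually, RAID 0 just round-robins fixed-size stripe units across the member drives, which is also why small 4K requests rarely get faster: a 4 KiB request fits inside a single stripe unit and is served by one drive. A toy sketch in Python (the function name and the 64 KiB stripe-unit size are illustrative, not any controller's actual layout):

```python
# Toy model of RAID 0 striping: round-robin fixed-size chunks across drives.
# The 64 KiB stripe-unit size here is illustrative, not any vendor's default.
STRIPE = 64 * 1024

def stripe_write(data: bytes, n_drives: int, stripe: int = STRIPE):
    """Return the per-drive buffers produced by a RAID 0 write of `data`."""
    drives = [bytearray() for _ in range(n_drives)]
    for i in range(0, len(data), stripe):
        drives[(i // stripe) % n_drives] += data[i:i + stripe]
    return drives

# A 1 MiB sequential write splits evenly across two members...
big = stripe_write(b"x" * 2**20, 2)
print([len(d) for d in big])        # [524288, 524288]

# ...but a 4 KiB random write fits in one stripe unit: only one drive works.
small = stripe_write(b"y" * 4096, 2)
print([len(d) for d in small])      # [4096, 0]
```

Large transfers keep every member busy, which is where the sequential scaling comes from; the per-request bookkeeping is the striping overhead mentioned above.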

My understanding is that for games, random 4K Q1T1 is what matters, since you can't defrag SSDs and data gets shuffled around to extend the drive's life. Also, no news from Nvidia on when RTX IO is going to get enabled...

For the sort of money you'd need to spend on a system capable of running that many drives, I don't think any budget-minded PC gamer will see that sort of performance until we get more lanes on consumer CPUs/motherboards without having to compromise greatly on how many storage devices you can run.
The Goose:

For the sort of money you'd need to spend on a system capable of running that many drives, I don't think any budget-minded PC gamer will see that sort of performance until we get more lanes on consumer CPUs/motherboards without having to compromise greatly on how many storage devices you can run.
When you see something like this, it is usually 4 M.2 drives on a single card in a 16X slot running in 4X4X4X4 mode. This by definition requires 4 SSDs, an add-in card, and a motherboard + CPU that supports bifurcation and a lot of PCIe 4.0 lanes, so yes, not even close to budget-minded. 4 SSDs in RAID would not be for gamers anyway; that would be better suited to editing massive files, and it also requires a lot of RAM to make use of all the data you are pulling from storage. RAID 0 also loves GHz, especially when you have 4 SSDs in RAID. All in all, you have to check a hell of a lot of boxes to take advantage of insanely fast storage. For the average Joe/Jane, a 250GB 980 Pro + 2TB Crucial BX500 is more than enough fast storage and comes to $250; not a bad price for storage in an upper-midrange gaming system.
nice, this will improve the FPS in games!!! no? ;-)
willgart:

nice, this will improve the FPS in games!!! no? ;-)
Not really. If you have plenty of RAM, then it won't affect fps, only loading times. It might increase min fps and lower stuttering in situations where there is not enough RAM. More and more games are using texture streaming; this is true for a lot of open-world games, but as long as the data is in RAM before it is needed, you should not see any difference in min fps or any stuttering. Nvidia RTX IO would improve FPS in CPU-limited scenarios, as it offloads some of the work to the GPU. So far, no game that I know of is using RTX IO.
bluedevil:

Not really. If you have plenty of RAM, then it won't affect fps, only loading times. It might increase min fps and lower stuttering in situations where there is not enough RAM. More and more games are using texture streaming; this is true for a lot of open-world games, but as long as the data is in RAM before it is needed, you should not see any difference in min fps or any stuttering. Nvidia RTX IO would improve FPS in CPU-limited scenarios, as it offloads some of the work to the GPU. So far, no game that I know of is using RTX IO.
Is RTX IO a 3xxx thing or is it 2xxx compatible? Given the poor supply of 3xxx, is there any point in using this tech atm... it's a bit of a pi$$ take... here's a new feature for you... but you'll have to wait until we make the GPUs.
The Goose:

Is RTX IO a 3xxx thing or is it 2xxx compatible? Given the poor supply of 3xxx, is there any point in using this tech atm... it's a bit of a pi$$ take... here's a new feature for you... but you'll have to wait until we make the GPUs.
Actually, RTX IO will support 30xx RTX and 20xx RTX GPUs; it's supposed to be added next year, when Microsoft adds DirectStorage to Windows 10. I think the new Xbox has it already; my guess is Microsoft delayed it on purpose so their new Xbox performs better at things related to loading and texture streaming, at least for a while. So Nvidia (and AMD, which will most likely make something similar) will have to wait until Microsoft adds it, maybe fall 2021 at the latest, I guess.
Will MSi buy and then resell these to you for twice the price? ...screw MSi.

Storage that runs as hot as a GPU and costs as much in most cases. Prices need to drop dramatically on NVMe for them to really take over; SSDs still have not taken over HDDs because of their cost. RAM drives are still RAM drives even if they're called NVMe; it's not new tech, it has always been there, and it has always been expensive. Gonna have to think about heatsinks for storage now, with NVMe at those speeds, so premium on top of premium. I love the fact that I/O bottlenecks are slowly disappearing, but the cost is just plain stupid imo.
schmidtbag:

The only practical use case of RAID 0 is for heavily demanding sequential reads and writes. That being said, I find the use of it very niche, since with modern NVMe speeds, there aren't too many applications that have such heavy throughput. The only thing I can think of that might actually saturate a PCIe 4.0 M.2 drive is high-speed + high-res cameras. Otherwise, you'll actually likely lose performance if you RAID 0 SSDs, due to the additional processing to stripe the data. If what you really need is just one gargantuan disk, JBOD is a better choice, since it is less complex and I think (I may be mistaken) if a drive fails, you don't lose everything.
Just installing, booting etc. Windows XP was a big, silent, and cheap step forward using 2 Hitachi HDDs in a RAID 0. Recordings with Fraps + gaming itself, encoding movies... the list is so long; in my mind there's no need to mention examples. The contras always come down to "I don't know how (to do it the right way), I can't do that alone, I just go with the mainstream and only know things someone I don't know in person wrote in a (lamer) forum." My very important data of the last 30 years still amounts to 33.9 GB in 2020. That's almost nothing. All other data I can find on the WWW, but of course I have multiple backups. The last HDD I lost to a head crash was a 60 MB Seagate in the mid-90s, and it's still a nightmare to think about. Learn from mistakes! 😉 I lost thousands of hours. Source code! Even today I could swear my mother hit my PC with the vacuum cleaner on purpose. 😀 🙁