MLC SSDs just as reliable as SLC anno 2016 says Google
If there is one party that makes use of a lot of NAND-based storage, it is Google in their data-centers. They have published the results of a study of SSD reliability based on production lifecycle data, and guess what: SSDs fail differently than HDDs.
ZDNet has the skinny on this one in a preliminary article. The FAST 2016 paper Flash Reliability in Production: The Expected and the Unexpected (not available online until Friday), by Professor Bianca Schroeder of the University of Toronto and Raghav Lagisetty and Arif Merchant of Google, covers:
- Millions of drive days over 6 years
- 10 different drive models
- 3 different flash types: MLC, eMLC and SLC
- Enterprise and consumer drives
KEY CONCLUSIONS
- Ignore Uncorrectable Bit Error Rate (UBER) specs. A meaningless number.
- Good news: Raw Bit Error Rate (RBER) increases more slowly than expected from wearout and is not correlated with UBER or other failures (a short sketch of both metrics follows this list).
- High-end SLC drives are no more reliable than MLC drives.
- Bad news: SSDs fail at a lower rate than disks, but their UBER is higher (see below for what this means).
- SSD age, not usage, affects reliability.
- Bad blocks in new SSDs are common, and drives with a large number of bad blocks are much more likely to lose hundreds of other blocks, most likely due to die or chip failure.
- 30-80 percent of SSDs develop at least one bad block and 2-7 percent develop at least one bad chip in the first four years of deployment.
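To make those two error metrics concrete, here is a minimal illustrative sketch, in Python, of how RBER and UBER are conventionally defined; the counter values in the example are made up, not numbers from the paper. Both metrics divide an error count by the number of bits read, but RBER counts raw bit flips seen before error correction, while UBER counts only the errors the ECC could not correct.

```python
# Illustrative only: the example counters below are hypothetical,
# not measurements from Google's fleet.

def raw_bit_error_rate(raw_bit_errors: int, bits_read: int) -> float:
    """RBER: bit flips seen by the controller before ECC, per bit read."""
    return raw_bit_errors / bits_read

def uncorrectable_bit_error_rate(uncorrectable_bit_errors: int, bits_read: int) -> float:
    """UBER: bit errors the ECC failed to correct, per bit read."""
    return uncorrectable_bit_errors / bits_read

if __name__ == "__main__":
    bits_read = 10 * 10**12 * 8            # roughly 10 TB of reads, in bits
    print(f"RBER: {raw_bit_error_rate(4_000_000, bits_read):.2e}")
    print(f"UBER: {uncorrectable_bit_error_rate(2, bits_read):.2e}")
```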
There are two standout conclusions from the study. First, MLC drives are as reliable as the more costly SLC "enterprise" drives. This mirrors hard drive experience, where consumer SATA drives have been found to be as reliable as expensive SAS and Fibre Channel drives.
One of the major reasons that "enterprise" SSDs are more expensive is greater over-provisioning. SSDs are over-provisioned for two main reasons: to allow for ample bad-block replacement caused by flash wearout, and to ensure that garbage collection does not cause write slowdowns.
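As a rough illustration of how over-provisioning is usually quoted, here is the common industry convention; the example capacities are hypothetical, not drives from the study. The over-provisioning percentage is the spare physical capacity expressed relative to the user-visible capacity:

```python
# Sketch of the conventional over-provisioning calculation; the example
# capacities are hypothetical, not drives from the study.

def over_provisioning_pct(physical_gib: float, user_visible_gib: float) -> float:
    """Spare flash expressed as a percentage of the user-visible capacity."""
    return (physical_gib - user_visible_gib) / user_visible_gib * 100

# A consumer drive: 512 GiB of flash sold as 512 GB (about 477 GiB usable).
print(f"consumer:   {over_provisioning_pct(512, 477):.1f}%")   # ~7%
# An "enterprise" drive: the same 512 GiB of flash exposed as only 400 GiB.
print(f"enterprise: {over_provisioning_pct(512, 400):.1f}%")   # 28%
```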
The paper's second major conclusion, that age, not use, correlates with increasing error rates, means that over-provisioning out of fear of flash wearout is not needed. None of the drives in the study came anywhere near their write limits, even the 3,000 program/erase cycles specified for the MLC drives.
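To give a sense of how far a realistic workload sits from a 3,000-cycle limit, here is a back-of-the-envelope sketch; the daily write volume and write amplification below are assumptions chosen for illustration, not figures from the paper.

```python
# Back-of-the-envelope endurance estimate; all inputs are assumptions
# for illustration, not measurements from Google's fleet.

def years_to_wearout(capacity_gb: float,
                     pe_cycles: int,
                     daily_host_writes_gb: float,
                     write_amplification: float = 2.0) -> float:
    """Years until the rated program/erase budget would be exhausted."""
    total_host_writes_gb = capacity_gb * pe_cycles / write_amplification
    return total_host_writes_gb / daily_host_writes_gb / 365

# A 480 GB MLC drive rated for 3,000 P/E cycles, written 100 GB per day:
print(f"{years_to_wearout(480, 3000, 100):.0f} years")   # roughly 20 years
```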
But it isn't all good news. SSD UBER rates are higher than disk rates, which means that backing up SSDs is even more important than it is with disks. The SSD is less likely to fail during its normal life, but more likely to lose data.
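To see why a higher UBER matters for data safety, here is a small sketch of the chance of hitting at least one uncorrectable error over a given volume of reads; the two UBER values compared are arbitrary round numbers, not rates reported in the study, and errors are modeled as independent (a Poisson approximation).

```python
import math

# Illustrative comparison of two hypothetical UBER values; neither is a
# number reported in the study.

def expected_errors(uber: float, bytes_read: float) -> float:
    """Expected number of uncorrectable bit errors over bytes_read."""
    return uber * bytes_read * 8

def prob_at_least_one(uber: float, bytes_read: float) -> float:
    """Probability of at least one uncorrectable error (Poisson approximation)."""
    return 1 - math.exp(-expected_errors(uber, bytes_read))

one_tb = 1e12  # read one terabyte
for uber in (1e-15, 1e-14):
    print(f"UBER {uber:.0e}: P(>=1 error per TB read) = "
          f"{prob_at_least_one(uber, one_tb):.1%}")
```

A tenfold difference in UBER translates into roughly a tenfold difference in the chance of an unreadable bit over the same amount of data read, which is why backups matter even more with SSDs.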
Don Vito Corleone
Posts: 45920
Joined: 2000-02-22
Whoops!
Senior Member
Posts: 813
Joined: 2009-11-30
SSD age, not usage, affects reliability.
This claim ... I wonder how they tested it. It's hard to believe they would buy SSDs just to let them sit in storage.
Last I read, an enterprise SSD can only retain data for 3-4 months unpowered, while a consumer SSD can hold it for around a year.
Now another con of SSDs...
I think we still need more time before completely ditching traditional HDDs and moving to SSDs, and if that is a limitation of the memory chips themselves, SSDs may well never replace traditional HDDs.
Senior Member
Posts: 2531
Joined: 2010-05-26
"30-80 percent of SSDs develop at least one bad block and 2-7 percent develop at least one bad chip in the first four years of deployment".
WTH does that even mean? That's a huge range between 30% and 80%.
Is it different brands, some at 30% and some at 80%, or what?
Senior Member
Posts: 14046
Joined: 2004-05-16
"30-80 percent of SSDs develop at least one bad block and 2-7 percent develop at least one bad chip in the first four years of deployment".
WTH does that even mean? That's a huge range between 30% and 80%.
Is it different brands, some at 30% and some at 80%, or what?
80% of devices develop a bad block 30% of the time.
lol, no idea
Senior Member
Posts: 324
Joined: 2009-03-17
Do you mean MLC is just as reliable as SLC?