Backblaze Publishes Hard Drive Stats for 2019: Failure Rates on the Rise
Backblaze has published its 2019 hard drive failure rates for the data drive models in operation in its data centers. At the end of 2019, Backblaze had 124,956 spinning hard drives; of that number, 2,229 were boot drives and 122,658 were data drives.
2019 Hard Drive Failure Rates
At the end of 2019, Backblaze was monitoring 122,658 hard drives used to store data. For evaluation, drives that were used for testing purposes and drive models that did not have at least 5,000 drive days during Q4 were removed from consideration. This leaves 122,507 hard drives. The table below covers what happened in 2019.
There were 151 drives (122,658 minus 122,507) that were not included in the list above. These drives were either used for testing or did not have at least 5,000 drive days during Q4 of 2019. The 5,000 drive-day limit removes those drive models where we only have a limited number of drives working a limited number of days during the period of observation. The only drive model not to have a failure during 2019 was the 4 TB Toshiba, model: MD04ABA400V. That’s very good, but the data sample is still somewhat small. For example, if there had been just 1 (one) drive failure during the year, the Annualized Failure Rate (AFR) for that Toshiba model would be 0.92%, which is still excellent, but not 0%. The Toshiba 14 TB drive, model MG07ACA14TA, is performing very well at a 0.65% AFR, similar to the rates put up by the HGST drives. For their part, the Seagate 6 TB and 10 TB drives continue to be solid performers with annualized failure rates of 0.96% and 1.00% respectively. The AFR for 2019 for all drive models was 1.89%, which is much higher than in 2018. We’ll discuss that later in this review.
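For reference, the AFR figure used throughout these reports is failures divided by drive years, where a drive year is 365 drive days. A minimal sketch of that arithmetic (the function name is ours, not Backblaze's):

```python
def annualized_failure_rate(failures: int, drive_days: float) -> float:
    """AFR (%) = failures / (drive_days / 365) * 100."""
    return failures / (drive_days / 365) * 100

# The Toshiba 8 TB figure discussed below: 1 failure over 13,994
# lifetime drive days works out to roughly 2.6% AFR.
print(annualized_failure_rate(1, 13_994))  # ~2.61

# Back-solving the 0.92% hypothetical above suggests the 4 TB Toshiba
# logged on the order of 39,700 drive days in 2019 (365 / 0.0092).
print(annualized_failure_rate(1, 39_700))  # ~0.92
```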
Beyond the 2019 Chart—“Hidden” Drive Models
There are a handful of drive models that didn’t make it to the 2019 chart because they hadn’t recorded enough drive-days in operation. We wanted to take a few minutes to shed some light on these drive models and where they are going in our environment.
Seagate 16 TB Drives
In Q4 2019 we started qualifying Seagate 16 TB drives, model: ST16000NM001G. As of the end of Q4 we had 40 (forty) drives in operation, with a total of 1,440 drive days, well below our 5,000 drive-day threshold for Q4, so they didn’t make the 2019 chart. There have been 0 (zero) failures through Q4, making the AFR 0%, a good start for any drive. Assuming they continue to pass our drive qualification process, they will be used in the 12 TB migration project and to add capacity as needed in 2020.
Toshiba 8 TB Drives
In Q4 2019 there were 20 (twenty) Toshiba 8 TB drives, model: HDWF180. These drives have been installed for nearly two years. In Q4 they only had 1,840 drive days, below the reporting threshold, but lifetime they have 13,994 drive days with only 1 drive failure, giving an AFR of 2.6%. We like these drives, but by the time they were available to us in quantity, we could buy 12 TB drives at the same cost per TB. More density, same price. Given we are moving to 16 TB drives and beyond, we most likely will not be buying any of these drives in the future.
HGST 10 TB Drives
There are 20 (twenty) HGST 10 TB drives, model: HUH721010ALE600, in operation. These drives have been in service a little over one year. They reside in the same Backblaze Vault as the Seagate 10 TB drives. The HGST drives recorded only 1,840 drive days in Q4 and a total of 8,042 since being installed. There have been 0 (zero) failures. As with the Toshiba 8 TB, purchasing more of these 10 TB drives is unlikely.
Toshiba 16 TB Drives
You won’t find these in the Q4 stats, but in Q1 2020 we added 20 (twenty) Toshiba 16 TB drives, model: MG08ACA16TA. They have logged a total of 100 drive days, so it is way too early to say anything other than more to come in the Q1 2020 report.
Comparing Hard Drive Stats for 2017, 2018, and 2019
The chart below compares the Annualized Failure Rates (AFR) for each of the last three years. The data for each year is inclusive of that year only and for the drive models present at the end of each year.
The Rising AFR in 2019
The total AFR rose significantly in 2019. About 75% of the different drive models experienced a rise in AFR from 2018 to 2019. There are two primary drivers behind this rise. First, the 8 TB drives as a group seem to be having a mid-life crisis as they get older, with each model exhibiting its highest recorded failure rate. While none of the rates is cause for worry, these drives contribute roughly one fourth (1/4) of the drive days to the total, so any rise in their failure rate will affect the total. The second factor is the Seagate 12 TB drives; that issue is being aggressively addressed by the 12 TB migration project reported on previously.
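Because the fleet-wide AFR pools failures and drive days across every model, a group contributing a quarter of the drive days moves the total roughly in proportion to its own rate. A sketch of that aggregation, using made-up numbers rather than Backblaze's:

```python
# Fleet AFR is a drive-day-weighted pool, not a simple average of
# per-model AFRs. Values below are illustrative only.
models = {
    "8tb_group":       (230, 8_000_000),   # (failures, drive_days): ~25% of days
    "everything_else": (370, 24_000_000),
}

failures = sum(f for f, _ in models.values())
drive_days = sum(d for _, d in models.values())
fleet_afr = failures / (drive_days / 365) * 100
print(f"fleet AFR ~ {fleet_afr:.2f}%")  # ~0.68% with these made-up numbers
```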
The Migration Slows, but Growth Doesn’t
In 2019 we added 17,729 net new drives. In 2018, a majority of the 14,255 drives added were due to migration. In 2019, less than half of the new drives were for migration, with the rest being used for new systems. In 2019 we decommissioned 8,800 drives totaling 37 petabytes of storage and replaced them with 8,800 drives, all 12 TB, totaling about 105 petabytes of storage. We then added an additional 181 petabytes of storage in 2019 using 12 TB and 14 TB drives.
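A quick back-of-envelope check on those capacity figures (assuming 1 PB = 1,000 TB):

```python
# Sanity check of the migration numbers above (1 PB = 1,000 TB assumed).
replaced = 8_800                       # drives swapped out and back in
new_pb = replaced * 12 / 1_000         # 8,800 x 12 TB -> 105.6 PB
old_avg_tb = 37 * 1_000 / replaced     # 37 PB retired -> ~4.2 TB per old drive
net_migration_gain_pb = new_pb - 37    # ~68.6 PB gained without adding a slot
print(new_pb, round(old_avg_tb, 1), round(net_migration_gain_pb, 1))
```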
Manufacturer diversity across drive brands increased slightly in 2019. In 2018, Seagate drives were 78.15% of the drives in operation; by the end of 2019 that percentage had decreased to 73.28%. HGST went from 20.77% in 2018 to 23.69% in 2019, and Toshiba increased from 1.34% in 2018 to 3.03% in 2019. There were no Western Digital branded drives in the data center in 2019, but as WDC rebrands the newer large-capacity HGST drives, we’ll adjust our numbers accordingly.
Lifetime Hard Drive Stats
While comparing the annual failure rates of hard drives over multiple years is a great way to spot trends, we also look at the lifetime annualized failure rates of our hard drives. The chart below shows the lifetime annualized failure rates of all of the drive models in production as of 12/31/2019.
Junior Member
Posts: 5
Joined: 2018-11-30
Seagate always shows higher failure rates even though their sample sizes are way bigger.
Seagate can't say it was a bad batch.
I think you need to look closer at the correlation between drive days and the AFR.
I always take these statistics with a grain of salt. I think the best conclusion you can draw is that even if a drive is prone to failure, the failure rates are still very low.
Also you cannot compare the conditions in a data center to those in a home PC.
If you want some really good info on how operating conditions affect drives, take a look at this study from Google.
Member
Posts: 62
Joined: 2016-12-31
Not sure what I can say here, but Seagate always puts out internal responses to these stats.
Personally, I like to see apples to apples how different consumer drives hold up under enterprise load. Sometimes you have poorly written programs that do not use buffers well, and the constant direct writes can just hammer the crap out of a mechanical hard drive.
Oranges to oranges: consumer drives do power up and down a lot, whereas enterprise drives rarely if ever get restarted. A lot of failures are actually just firmware lock-ups, and a restart would have recovered them.
Member
Posts: 6782
Joined: 2008-03-06
Bought an HGST drive two years ago, without even knowing that it was on Backblaze's list of the most reliable in its category.
After the list was published, the price on that particular SKU skyrocketed.
Bought it at 120€; now you are lucky to find it under 200€.
We are talking EU prices here; you guys over the pond don't realise how easy you have it.
And yes, Toshiba and HGST drives have given me the most pleasant experience, followed by the defunct Samsung Spinpoint F1.
Western Digital are OK and reliable; my nemesis is Seagate hard drives.
But whatever is being said, Seagate hard drives are the easiest to repair and recover data from (not the helium ones); other brands are a hellish experience to even begin. Western Digital is a total PITA to recover data from (USB connector instead of standard SATA).
Senior Member
Posts: 4195
Joined: 2003-03-03
The data center conditions are the same for all brands and models.
I have read that a looong time ago (because that study is from 13 years ago). Although most things are still true today, technology and quality have evolved (for better or worse) over time, especially after more than a decade.
I'm pretty sure that Backblaze's engineers aren't aware of that. Please contact them to apply for a job as top senior engineer there. I'm sure they'll pay you top money for that kind of knowledge!
Senior Member
Posts: 242
Joined: 2012-10-04
That's not how statistics work. With this many samples, the failure rates should be accurate to within about 0.2% at a 98% confidence level (a rough way to check that is sketched below).
And HGST is Western Digital.
HGST's WORST model failure rate is still better than Seagate's BEST model failure rate.
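One rough way to sanity-check a confidence claim like the one above is to treat drive failures as a Poisson count and put an interval around the AFR. A minimal sketch under that assumption, with illustrative numbers rather than Backblaze's:

```python
from math import sqrt

def afr_with_ci(failures: int, drive_days: float, z: float = 2.33):
    """AFR (%) with a rough ~98% normal-approximation interval.

    Assumes failures are Poisson-distributed; z = 2.33 gives about
    98% two-sided coverage.
    """
    drive_years = drive_days / 365
    afr = failures / drive_years * 100
    half_width = z * sqrt(failures) / drive_years * 100
    return afr, half_width

# Illustrative only: 300 failures over 6,000,000 drive days.
afr, hw = afr_with_ci(300, 6_000_000)
print(f"AFR ~ {afr:.2f}% +/- {hw:.2f}%")  # ~1.83% +/- 0.25%
```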