Western Digital 2TB-6TB WD Red NAS are using SMR, an issue with NAS RAID? (Updated)

Despite the huge cloud and server business, (traditional) magnetic recording is a sunset industry. I'd say this meant-to-go-unnoticed decision by WD is a sign of it. Good to know, nonetheless. For some years I've been pondering whether to get a simple and cheap 2-bay NAS. I'd most probably have gotten WD Reds for it, likely 6TB. After reading this, who knows. It would suck to use RAID 1 and then discover you can't replace a failing disk; that would make it pretty pointless. In fact, I imagine if you lost a disk, got a new one, and the NAS rejected it, you'd think you got a bad replacement disk. You'd return it for another new one. And if the second one failed as well, most people would conclude the NAS device itself is broken!
That's a major bummer. I have a RAID10 setup with Reds. I'll have to be sure to back it up to my external drives more frequently if I can't rely on being able to swap a disk and rebuild.
WD Reds are bad NAS drives in general; you're better off getting a WD Black or HGST and not suffering weak sectors near the edge of the platter.
Astyanax:

WD Reds are bad NAS drives in general; you're better off getting a WD Black or HGST and not suffering weak sectors near the edge of the platter.
WD Reds are the silent ones (mostly 5400 rpm) with a small cache; if you compare them with Blacks and their huge cache, or HGST drives that both run at full speed, then of course they look bad. Technically, it's the overpricing of the WD Red Pro over the WD Red that made our company try the Seagate IronWolf. The test over the years has been very successful, though OK, they are a lot noisier, more crrrr crrrrr crrrr and scary tzzzzzic, but with very good results (the whole company works on the same storage, so the workload on them is intensive). We will see for next year's main server update.
That response from WD doesn't address the issue (adding a new drive) at all. Typical corporation (or politician) response that talks about things broadly, as if read straight from a marketing brochure, without answering the question itself.
rl66:

WD Reds are the silent ones (mostly 5400 rpm) with a small cache; if you compare them with Blacks and their huge cache, or HGST drives that both run at full speed, then of course they look bad.
read a whole post before replying, thanks.
This really p*sses me off. I have a RAID setup with 8x4TB WD Red drives (7-drive RAID 6 with hotspare). I specifically bought WD Reds because I thought they would 'for sure' be proper NAS drives without any of this crap. I would have spent my money elsewhere, and from now on, Western Digital is dead to me. Trust is earned over years and lost in seconds. Do HDD vendors not remember the IBM Deathstars and the Seagate 1TB debacles? I do. It just about killed IBM's hard drive business (sold to Hitachi). Toshiba and Seagate are doing this too, btw (Tom's Hardware article from 6 days ago): Seagate Barracuda 2/4/8TB drives use DM-SMR, as does one of the 5TB Desktop HDDs. The IronWolfs don't seem to, unless they explicitly state so in the documentation. Toshiba P300 drives are affected by this, too. Edit: more clarification from blocksandfiles.com: for Western Digital Red drives, EFRX drives are CMR, the newer EFAX are DM-SMR. I have all EFRX drives, so my RAID is fine 😉 Phew.
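[Editor's note: the EFRX/EFAX distinction mentioned above can be checked against the model string a drive reports (for example in the output of `smartctl -i`). A minimal sketch of that check, assuming the suffix convention from the blocksandfiles.com breakdown holds; the model strings used below are illustrative:]

```python
import re

# Per the blocksandfiles.com breakdown quoted above:
# EFRX suffix = CMR, EFAX suffix = DM-SMR.
# This only covers the two suffixes mentioned in the post; other
# WD families are reported as "unknown".
SUFFIX_TECH = {"EFRX": "CMR", "EFAX": "DM-SMR"}

def red_recording_tech(model: str) -> str:
    """Classify a WD Red model string such as 'WDC WD40EFRX-68N32N0'."""
    m = re.search(r"WD\d+([A-Z]{4})", model)
    if m and m.group(1) in SUFFIX_TECH:
        return SUFFIX_TECH[m.group(1)]
    return "unknown"

print(red_recording_tech("WDC WD40EFRX-68N32N0"))  # CMR
print(red_recording_tech("WDC WD60EFAX-68SHWN0"))  # DM-SMR
print(red_recording_tech("ST8000VN004"))           # unknown
```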
@Evildead666 hehe, seems like you dodged the bullet, thankfully. I can't imagine spending money to RAID your data for an extra layer of safety, just in case, only to learn that the RAID will protect nothing... That said, this kind of thing will make us double-check which recording method the drives use when we aim for a RAID from now on.
Astyanax:

WD Reds are bad NAS drives in general; you're better off getting a WD Black or HGST and not suffering weak sectors near the edge of the platter.
Could you provide more information on these "weak sectors near the edge" on WD Reds? I searched Google and found nothing about it, and your post is at the top of the search results, lol. And your suggestion basically amounts to "get the Pro/enterprise version instead of a cheap NAS drive". Btw, regarding these SMR drives: it's more of a compatibility issue. What's interesting is that there are no similar reports with Seagate IronWolf SMR drives. Anyway, if you're using RAID 1 (or 10) it shouldn't cause a catastrophic volume/array failure. This transition might indeed cause issues for many NAS owners who add a new (SMR) drive and then get a drive failure, but after a while most (if not all) HDDs will be SMR anyway. Based on some reports, for WD HDDs all batches after March 2019 are SMR; the WD20EFAX (2TB) also seems to be SMR. Also, from what I've read so far, the issue happens under high IO (maybe related to the lower IO performance of SMR drives), so home/light users might not see any problems.
This is very disappointing indeed. I can understand why they would use SMR technology in WD Blue drives, but not in something meant for a NAS. Drives having this garbage technology makes me want to go out and purchase a WD Gold or Seagate IronWolf drive and a USB 3 enclosure and make my own external drive.
Astyanax:

read a whole post before replying, thanks.
It's not against you; it was to add the things that make people think WD Reds are bad. They are obviously based on the Green and Blue drives, except for the microcode, and so have the same failings.
Well, it's not the speed, it's the clearly low-quality platters being used; most of them have slow spots and weak sectors right from the factory.
Fender178:

This is very disappointing indeed. I can understand why they would use SMR technology in WD Blue drives, but not in something meant for a NAS. Drives having this garbage technology makes me want to go out and purchase a WD Gold or Seagate IronWolf drive and a USB 3 enclosure and make my own external drive.
NAS is the right place to use SMR, when SMR is used right and paired with MTC.
slyphnier:

*snip* Btw, regarding these SMR drives: it's more of a compatibility issue. What's interesting is that there are no similar reports with Seagate IronWolf SMR drives. Anyway, if you're using RAID 1 (or 10) it shouldn't cause a catastrophic volume/array failure. This transition might indeed cause issues for many NAS owners who add a new (SMR) drive and then get a drive failure, but after a while most (if not all) HDDs will be SMR anyway. Based on some reports, for WD HDDs all batches after March 2019 are SMR; the WD20EFAX (2TB) also seems to be SMR. Also, from what I've read so far, the issue happens under high IO (maybe related to the lower IO performance of SMR drives), so home/light users might not see any problems.
None of the Seagate IronWolf or IronWolf Pro drives use SMR. If it stays that way, my next drives, probably 8TB ones, will all be Seagate.
I guess I got lucky. My 4TB Reds are CMR.
Astyanax:

https://www.backblaze.com/blog/wp-content/uploads/2016/11/blog_q3_2016_stats_table_3.jpg All the 2TB WDCs are WD Reds; 15/133 drives is a high failure rate. You can also identify thousands of individual failures on Google just by narrowing a search down to Reddit.
That table is from https://www.backblaze.com/blog/hard-drive-failure-rates-q3-2016/, which is already too old. First, it doesn't say anything about "weak sectors near the edge" at all, and the Backblaze table doesn't really prove anything either, especially NOT that brand A is better than brand B: they mix consumer HDDs with enterprise HDDs. Those 4TB Reds aren't even the Red Pro model, while the rest of the HDDs are enterprise drives, so the table is only good for telling you about a specific model (and batch). There are articles saying there's no difference between enterprise and consumer drives; some even say enterprise is just an HDD with better testing and support. Under a low workload there's most likely not much difference whatsoever, true, but based on those Backblaze reports and my own experience (with a small sample), high-workload scenarios do show a reliability gap between those HDDs.
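[Editor's note: for context on how raw counts like "15/133 drives" become a comparable figure, Backblaze publishes an annualized failure rate (failures per drive-year of operation) rather than raw counts. A minimal sketch of that calculation; the drive counts below are illustrative, not taken from the linked table:]

```python
def annualized_failure_rate(failures: int, drive_days: float) -> float:
    """Annualized failure rate in percent: failures per drive-year,
    which is how Backblaze normalizes across fleets of different sizes."""
    drive_years = drive_days / 365.0
    return 100.0 * failures / drive_years

# Illustrative numbers only: 133 drives running a full year, 15 failures.
afr = annualized_failure_rate(failures=15, drive_days=133 * 365)
print(f"{afr:.1f}%")  # 11.3%
```

This is why a model with few drives in service can show an alarming rate from a handful of failures; the drive-days denominator matters as much as the failure count.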
Evildead666:

None of the Seagate Ironwolf, or Ironwolf pro drives use SMR. If it stays that way, my next drives, probably 8TB ones, will all be Seagate.
Yeah, I was wrong about that; my memory got mixed up with the Seagate Archive HDD (https://www.seagate.com/www-content/product-content/hdd-fam/seagate-archive-hdd/en-us/docs/archive-hdd-dS1834-3-1411us.pdf) from back when I was reading about Seagate SMR drives. Really similar to WD: they're not transparent enough about which HDDs are SMR.
Once a drive is put into production, there usually aren't any physical changes to the line until its end of production, firmware changes aside.
Some WD Reds got new revisions; for example, the new 4TB version now has 3 platters (68N32N0) instead of 4 (68WT0N0). I can also confirm it runs ~3°C cooler and produces a lot less vibration. The supported ACS version is also updated (ACS-3 rev4 vs ACS-2).
Alessio1989:

Some WD Reds got new revisions; for example, the new 4TB version now has 3 platters (68N32N0) instead of 4 (68WT0N0). I can also confirm it runs ~3°C cooler and produces a lot less vibration. The supported ACS version is also updated (ACS-3 rev4 vs ACS-2).
Correct, I stand corrected: my 3-platter EFRX has different numbers from my 2-platter EFRX. Both are pieces of crap, though, and I've had a 100% failure rate across two different platter revisions.
Astyanax:

Correct, I stand corrected: my 3-platter EFRX has different numbers from my 2-platter EFRX. Both are pieces of crap, though, and I've had a 100% failure rate across two different platter revisions.
My last 68WT0N0 died yesterday... I was waiting on an RMA (temporarily stopped due to the COVID-19 crisis) for another 68WT0N0 still under warranty, and the RMA resumed one day before that disk died :| I hadn't written to that disk for two months. I thought it was just a head or platter failure, but it looks like the logic board is also involved, since it's no longer recognized by any OS, and when attached to any motherboard I tried, it locks up the boot. The platters spin, but nothing more. I lost over 3GB of anime ~_~