Linux
Welcome to c/linux!
Welcome to our thriving Linux community! Whether you're a seasoned Linux enthusiast or just starting your journey, we're excited to have you here. Explore, learn, and collaborate with like-minded individuals who share a passion for open-source software and the endless possibilities it offers. Together, let's dive into the world of Linux and embrace the power of freedom, customization, and innovation. Enjoy your stay and feel free to join the vibrant discussions that await you!
Rules:
- Stay on topic: Posts and discussions should be related to Linux, open source software, and related technologies.
- Be respectful: Treat fellow community members with respect and courtesy.
- Quality over quantity: Share informative and thought-provoking content.
- No spam or self-promotion: Avoid excessive self-promotion or spamming.
- No NSFW adult content.
- Follow general Lemmy guidelines.
In 40+ years of using HDDs I can count failures on one hand, generally related to power issues. I have many drives well over 70,000 hours. I recently picked up two used 12 TB enterprise drives for less than the cost of one consumer 12 TB drive to add to the mix, and I have another 8 to 12 decommissioned enterprise drives in different systems.
You never trust your data to a single drive or single medium; otherwise you've effectively already lost it. And dollar for dollar, SSDs simply cannot beat traditional hard drives on capacity, just on seek time and transfer rate.
My music library alone is over a terabyte of largely 320 kbps MP3s. Storage for miscellaneous videos is about six times that. And then there's my streaming video library, which has traditionally been large enough to make re-encoding and shrinking worthwhile to free up space: upgrading from DivX/Xvid in the late 90s and early aughts, to H.264 in the early 2010s, to H.265 in the late 2010s. Currently converting to AV1 from source discs etc. (rough sketch below). Some of the spinning rust I'm using has seen all those transitions and been rewritten many times, which would have been very rough on an SSD. LOL, I may have a problem.
Regardless, there's nothing wrong with any particular storage technology, and no reason to avoid one over the other as long as it does what you need. If your data is small enough to fit economically on an SSD, then it will suit your needs perfectly. Just remember your 3-2-1: three copies, on two different types of media, with one off-site.
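For anyone doing a similar conversion pass, here's a rough batch sketch driving ffmpeg from Python. It assumes an ffmpeg build with the SVT-AV1 encoder, and the directories and CRF/preset values are placeholders, not recommendations:

```python
#!/usr/bin/env python3
# Rough sketch of a batch AV1 re-encode via ffmpeg.
# Assumes ffmpeg is built with SVT-AV1; paths and
# quality settings below are placeholders.
import subprocess
from pathlib import Path

SRC = Path("/srv/media/source")  # assumed source directory
DST = Path("/srv/media/av1")     # assumed output directory

for src in SRC.glob("*.mkv"):
    dst = DST / src.name
    if dst.exists():
        continue  # skip files already converted
    subprocess.run(
        ["ffmpeg", "-i", str(src),
         "-c:v", "libsvtav1", "-crf", "30", "-preset", "6",
         "-c:a", "copy",  # keep the original audio untouched
         str(dst)],
        check=True,
    )
```

Copying the audio stream avoids a lossy-to-lossy re-encode of tracks that are already fine.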
At what point do you consider replacing a drive?
When I worked at a data center, I noticed drives would die around 50k hours. Some last a lot longer, but when you're testing hundreds of drives you start to see the patterns. So when my drives get to 50k hours I replace them preemptively just to avoid data loss. I might still use them in a redundant backup or something like that.
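If anyone wants to automate that kind of check, here's a minimal sketch using smartctl's JSON output. It needs smartmontools 7.0+ and root; the device list and the 50,000-hour threshold are assumptions to adjust for your own setup:

```python
#!/usr/bin/env python3
# Minimal sketch: flag drives past a power-on-hours threshold
# using smartctl's JSON output (smartmontools 7.0+, run as root).
# Device names and the threshold are assumptions.
import json
import subprocess

THRESHOLD_HOURS = 50_000
DEVICES = ["/dev/sda", "/dev/sdb"]  # assumed device names

for dev in DEVICES:
    out = subprocess.run(
        ["smartctl", "-j", "-a", dev],
        capture_output=True, text=True, check=False,
    )
    data = json.loads(out.stdout)
    hours = data.get("power_on_time", {}).get("hours")
    if hours is None:
        print(f"{dev}: no power-on-hours reported")
    elif hours >= THRESHOLD_HOURS:
        print(f"{dev}: {hours} h -- past threshold, consider retiring")
    else:
        print(f"{dev}: {hours} h -- OK")
```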
When they fail, or when the capacity becomes a hindrance. Other than that, if you follow your 3-2-1 you shouldn't lose data.
Replacing after 50,000 hours makes sense in an enterprise data center setting. At home, it's not much of an issue for me to have a day of downtime replicating data back across drives; it just costs me my time. In an enterprise setting it also costs money, possibly enough or more to justify retiring drives at 50,000 hours. Then again, if you have RAID set up with spare drives etc., you can keep running while the array rebuilds itself (a quick way to watch that is sketched below), only replacing a drive when it goes bad, or starts acting up in preparation to go bad.
Honestly, it all depends on your IT department's budget, competence, and staffing. It's not wrong to replace drives after 50,000 hours, but it could be wasteful. There are, after all, people like me who buy those drives and run them for years without incident.
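For watching a rebuild like that, here's a minimal sketch reading /proc/mdstat. This assumes Linux software RAID via the md driver; hardware controllers report status through their own tools instead:

```python
#!/usr/bin/env python3
# Minimal sketch: print Linux md RAID status, including any
# rebuild/resync progress lines. Assumes software RAID via the
# md driver, not a hardware controller.
from pathlib import Path

mdstat = Path("/proc/mdstat")
if mdstat.exists():
    for line in mdstat.read_text().splitlines():
        # progress shows up as e.g. "[=>...] recovery = 12.3%"
        if "recovery" in line or "resync" in line or line.startswith("md"):
            print(line.strip())
else:
    print("no /proc/mdstat -- md RAID not in use")
```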
Vibrational-mode failure is more of a thing in large SAS-backplane enterprise JBOD rack-mount deployments. Small workstation/NAS deployments with three to five drives, using rubber grommets and such, shouldn't see many vibration-induced failures. However, a large bay full of drives spinning up and down and hitting harmonics can absolutely tear itself apart over time.