I Nearly Lost All Of My Data!

I'm Kev and I'm a Cyber Security professional from the UK. I started this blog so that I could share my thoughts on the things that interest me. That's usually technology, Linux, open source, and motorbikes.

  1. Hi,

    My syno just crashed a few days ago (DS212j, 2 HDDs in RAID1).
    I managed to recover all my data because the disks were fine and readable under Windows.

    As for the syno: it doesn’t react to the power button, but I tried it with another power supply and … it seems to start.

    Maybe it’s the same thing for you: just try it before buying a new NAS 😉

    1. I thought that myself, but the status light was on the PSU, and the connector was getting power too. So I don’t think it was a PSU issue – glad you got yours sorted so easily though!

      1. You know what? On my PSU, the status light is green, and I’ve tested the PSU with a multimeter: the voltage is correct … but the syno still doesn’t start …
        This is a real mystery, isn’t it? 😉

  2. Having recently experienced a double Synology x15 failure, I switched to a DIY NAS instead. My data was never in any danger, as I back up to a local drive and a remote system that I control. I also burn M-Disc archives yearly, as well as update/refresh a couple of USB drives with the last year’s data.

    I built a 6-drive NAS with 2 x 1TB SSDs in RAID1, which holds our data, and the most critical stuff is also replicated through Resilio Sync (encrypted folder mode) to a Raspberry Pi backup node that runs at home.

    The remaining 4 drives run via MergerFS and SnapRAID. It doesn’t stripe data across disks, so if I pull a disk, it will contain all the files that are stored on that particular disk (there’s a rough sketch of the layout at the end of this comment).

    As for backups, I have a local USB drive that I back up to, and a remote Odroid HC2 located at a friend’s house (through 100/100 fiber). I used to have a remote Synology 1xxj model for this, but the Odroid is cheaper, and I trust Debian/Ubuntu to deliver software updates much, much faster than Synology does. They’ve improved these past 3–4 years, but some critical updates still take a long time to get released.
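
    In case it helps anyone, a minimal sketch of what a MergerFS + SnapRAID layout like mine might look like – mount points, disk names and the single-parity split are illustrative rather than my exact config:

        # /etc/fstab – pool the data disks into one mount point with mergerfs
        /mnt/disk* /mnt/pool fuse.mergerfs defaults,allow_other,category.create=mfs 0 0

        # /etc/snapraid.conf – one parity disk protects the data disks
        parity /mnt/parity1/snapraid.parity
        content /var/snapraid/snapraid.content
        content /mnt/disk1/snapraid.content
        data d1 /mnt/disk1/
        data d2 /mnt/disk2/
        data d3 /mnt/disk3/

        # run periodically (e.g. nightly from cron) to update parity, and scrub now and then
        snapraid sync
        snapraid scrub -p 5

    Because MergerFS doesn’t stripe, each data disk holds whole files, so a pulled disk is still readable on its own; SnapRAID just recalculates parity on the next sync.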

  3. Having feared this very scenario, this is really strange to read – because of the similarities to my own setup! My partner and I also use my 4 x 1TB Synology box to store our stuff, as well as “media” (the non-unique stuff that can easily be re-obtained, but is convenient to have there, at least), and we have a 1TB USB drive connected to it to back things up to as well. But the photo of your floor did it for me – the carpet (even in that light) looks the same as mine and, although it isn’t an Antec 300 case, seeing it open there, with the plastic chair mat and what could be some twiddly seating at the top, I just wonder if this kind of setup is a Brit thing. Glad you got your data back, by the way. What would you do differently in future – Glacierification?

    1. That is eerily similar! There’s a few things I’m going to do differently. I’m taking the opportunity to restart from scratch, and I’ll be doing a follow-up post on this blog with all the details of the new setup. You won’t be doing that, obviously, so for you I would just say add off-site backups. Glacier or Backblaze are the front runners, it seems.

  4. FWIW, next to losing a living thing near to me, I fear this the most. My setup: an 8-bay Synology as the main NAS. Lots of room on that, so I only protect the important stuff (pictures, videos, work product, etc.).

    First level of protection: backup to another Synology (2-bay) that’s on-prem. That way, if the prod Synology goes down, I have very quick access to most of my data. Level 2: a buddy of mine has a Synology at his place, and the prod Synology also replicates to that one (benefits of fiber to the home and no caps). That’s the first level of off-site. I also(!) have a Dropbox account and a Box account (yes – I’m stupid paranoid – once bitten and all that). Both Box and Dropbox get a copy of the very most critical data (work products). Google Photos has at least a copy of all the video and photo data – which is what’s most important to my wife.

    Finally – every device has a dedicated UPS. They both filter the power coming into them and provide a mechanism for a graceful shutdown in case the power outage is long enough.

    This probably seems like overkill, and it likely is – but I do enterprise data protection for a living. My clients have had the weirdest events happen to them (data centers flooding, rogue employees, crypto hackery, data centers blowing up, data centers having industrial refrigerators fall on them, and much more). They have taught me that my data is my most important thing – and I can re-buy all the rest. So I spend on protecting the data at the expense of not always having all the toys I want.

  5. I have a high-end HP engineering workstation laptop from 2012 – these are coming off lease and available cheap on eBay. Actually, I have two: one in my office and one off-site in my shop, connected by an Ethernet cable. Both machines have 2 hard drives and an external USB drive. They are mirrored and get synced several times per day. If one goes down (it has happened), I don’t even need to do a restore – I just grab the other machine almost as if nothing happened, buy another one on eBay for around $500.00, and clone the drives. I keep separate system and data drives, so I’m good for about 2TB of data, backed up off-site. We are in a rural area at the end of the power line, so needless to say we have high-end surge protectors on everything.

  6. Offsite is the only thing that gives me peace of mind. I use Backblaze, but they’re all as good as each other. ~£40 a year and I know if the house burns down I’ve still got it all.

  7. Hi,
    I was wondering about these issues a couple of months ago, and I ended up with this setup:
    – 1 old 1-bay NAS as a local cache, at home
    – 1 2-bay RAID1 NAS at the office; I think RAID1 is enough for HW failures on a safe electrical network.
    – 1 daily remote backup (versioned) to prevent issues in case of fire/robbery, with versioning to protect against accidental manipulation
    For the remote one, Synology Cloud seems enough.
    VMs and so on are backed up with everything else, since they are just iSCSI block devices.
    A USB drive is OK in some cases, but you can always be robbed. Two physical locations are a must IMO.

    1. Depending on the disks inside your HW RAID1 NAS, that’s a dubious bet.
      Especially with two drives of the same series – even more so if they’re from the same batch off the same factory line – when nearing their estimated end-of-life they may very well fail almost simultaneously. You should keep a spare ready to replace a failing drive at a moment’s notice, because you might not have time to think about it once one of the two drives fails. And I’m talking hours, not days.

      That’s from a professional storage system deployment engineer’s perspective… 😉

  8. That sucks. But I’ve somewhat planned for that. I have two Synology NAS devices, and all my other computers back up to one of them, which rsyncs the data to the other one (a rough example of that kind of job is at the end of this comment). And there’s a 10TB USB drive in there as a next-level backup of the most important data. My house has whole-house surge protection, and all the NAS & USB devices are powered by UPS. On the list: running Ethernet cables to the upstairs of my house and moving one of the Synology boxes physically far from the other.

    I’m still nervous…
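
    For illustration, the NAS-to-NAS replication can be as simple as a scheduled rsync job along these lines (the hostname and share paths are made-up placeholders):

        # mirror the primary share to the second NAS over SSH;
        # --delete keeps an exact mirror, so pair it with snapshots if you need history
        rsync -aH --delete /volume1/shared/ backup-nas:/volume1/shared-mirror/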

  9. Such crucial devices as a single-point NAS box should always be protected by surge/burst filters, and not only on their power supply, but also on the data landline.

    My family once lost a couple of hundred bucks’ worth of equipment to a lightning strike on the telephone network (the telco lost the entire hub, and everyone in the street lost their PBXs and DSL modems).
    Better yet, protect the system with a UPS, which usually contains the necessary power filters and – unless it’s the cheapest model – might also contain a filter for the internet landline.

    As for your request for data-loss horror stories: I once lost an entire disk’s worth of data to a faulty disk board – the vendor’s disk series was afterwards nicknamed “Deathstar”.
    Luckily I hadn’t destroyed that disk’s predecessor, so the most crucial data was still available, but at the time it was pretty bad for me.
    Since then I have kept more than two copies of my data…

      1. Well, originally the series was called “Deskstar” by its vendor, and the entire hard disk division was swiftly sold to Hitachi after the incident…

        I almost forgot to mention: a good PSU will act as a surge/burst protector for the device it powers – but only once, and it is toast after that.
        So it might be that just replacing the NAS’s PSU would have brought it back to life.

        I’ve seen it before and it was rarely the board behind the PSU which suffered from the power spike on the supply line.

        Note: this is completely different from a spike coming in on a network line. Those hit the board directly and might even get through to any storage device connected to it. That’s why I think a surge/burst filter on the landline is even more important than one on the power line. Your PSUs might be toasted and you will be down until you can replace them, yet once that’s done, you will probably be up and running again with no serious loss of data – just time and (insurance) money.
        Once you’ve been hit by a lightning strike through your telephone or cable TV line, your electronics are gone for good. All of them.
        Be aware of that: if your cable TV set is connected to your NAS by hard-wired network, that has to be filtered too. With satellite dishes you need a good lightning rod on the dish’s mounting post, so the lightning never hits the LNB in front of the dish. And if your antenna, network or telephone cables run parallel to your power cables, there’s always induction once lightning hits one of them – have them shielded properly.

  10. One simple solution might be to have 2 external USB discs for backup – with only one of them connected to anything at any time. The other stays offline and unplugged. Swap the drives on a daily/weekly basis.
    Maybe chip in a third disc for a weekly/monthly swap to a remote location (work, parents’/friends’ home) to cover real disasters.

    1. I’ve got friends who do it that way, but I’d forget, I know I would. If you haven’t guessed already, I’m not the sharpest tool in the box! 🙂

    1. Thanks mate, it’s actually the 3rd time I’ve been on the front page, but who’s counting ey? 🙂

      Hope you’re well!

  11. For my most crucial data, I set up ownCloud and let it sync across all of my desktops and laptops. I also keep one laptop (a cheap one with a huge hard drive) that I turn on every day to let it sync, then turn it off and disconnect from the power strip.

  12. No surge suppressor? We have one from APC. It doesn’t work without a good ground circuit.

    1. Nope, no surge protection. Other comments say why, but the short version is that I got complacent.

  13. You can use your old setup, but with an additional tape drive for backups and a weekly cartridge rotation.

    1. I could, but I know that in reality I will often forget to swap tapes. I need automation.

    1. I try not to use Google services where possible, but I will be adding an off-site portion to my new system.

    1. The NAS was 5 years old. SSDs back then would have been way too expensive for the volume I needed. RAID 5 seemed like a reasonable compromise.

  14. Dodged a bullet, for sure 🙂 Definitely have off-site backup, like JAS says. Out of curiosity, why did you cancel Glacier? And did you find it a good backup method?

    1. I was extremely lucky! I cancelled because I got complacent. The NAS had been working fine for 5 years, so I decided in my infinite stupidity that I didn’t need off-site for my basic needs. Yeah, I’m a fool, I know!

      Glacier works really well. It’s super cheap storage. They make the money when you need to pull the data out, but even then it isn’t super expensive. It’s a good DR tool.

  15. You can try using Backblaze for backing up your data. They seem very reliable and offer the lowest cost I could find. I’ve been using them for a few personal projects to store data and am moving server backups there now. Will see how it goes. Maybe give them a try.

  16. 1) You don’t mention a UPS, or possibly two of them, one for each backup. They are cheap these days, and it is not much trouble to extend a DC power lead out of the box so that you can use a pair of car batteries in parallel to get the 24 V that they mostly seem to run on. I do this because the little gel cells in them are overpriced and under-specced. I can often buy second-hand UPS cases that have been donated to a nearby recycling store because they are out of warranty, so for about the price of a single low-rated UPS I can build two much higher-rated units.
    2) You mention RAID. There is not much point in it these days of solid-state drives, which achieve the extra speed of a RAID without all the risk and moving parts that wear unpredictably. RAID 5 has lots of potential points of failure. Simple is best.
    3) Odds are it was the cleaner using the same power circuit as the HD to run a vacuum cleaner. Vacuum cleaner motors have a huge current inrush when they start from cold.
    4) We are all getting a bit giddy with big drives at low prices per terabyte, but as the data density on the platter goes up, the danger of losing a sector to random errors increases exponentially. Smaller drives, 500 GB or so, are safer, because the individual bits on the platter are larger and so can sustain more damage before becoming unreadable. For the same reason, DVDs are a lot less reliable than CDs. Older technology has fewer bugs.
    5) I feel your pain. I have lost 4 nearly new backup disks in the last 2 years, which is more than the total over the previous 30 years.

    1. I think the issue boils down to complacency and stupidity on my part. I’ll respond to each of your points for completeness though:

      1) I did not have surge protection or UPS in front of the NAS. I just assumed (stupidly) that in 2019 the power grid in the UK was better than that – I won’t be making that mistake again.

      2) The Synology was 5 years old. When I bought it, 3TB of SSD storage (my data requirement, roughly) would have been extremely expensive. That’s why I went with RAID.

      3) It probably was the cleaner that caused the failure, but it wasn’t her fault. It’s my own stupid fault for not having a system in place robust enough to cope with something that simple.

      4) That’s a really good point actually, I hadn’t thought of it like that before.

      Thanks for the comment/advice.

  17. I always recommend a 3-2-1 backup strategy at minimum:

    https://www.backblaze.com/blog/the-3-2-1-backup-strategy/

    For me: a production copy, restic to back up to a local (but separate) hard disk, then rclone those snapshots to the cloud using one of rclone’s backends. I’ve been using Google Cloud’s $300 of free credit to do it on the cheap, but I might switch to Backblaze B2 to save the hassle.
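
    Roughly, that pipeline could look like the following (the repository path, remote name and retention policy are illustrative assumptions, not my exact commands):

        # snapshot the live data into a local restic repository on a separate disk
        # (assumes the repo password is supplied via RESTIC_PASSWORD_FILE or similar)
        restic -r /mnt/backupdisk/restic-repo backup /home/data

        # thin out old snapshots so the repository doesn't grow forever
        restic -r /mnt/backupdisk/restic-repo forget --keep-daily 7 --keep-weekly 4 --prune

        # push the whole repository to a cloud remote configured in rclone
        rclone sync /mnt/backupdisk/restic-repo b2-remote:my-backup-bucket/restic-repo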

    1. A few people have recommended Backblaze to me now, I’ll definitely check them out. Thanks.

  18. A nightmare scenario for everyone. However, were your NAS and USB drive connected to a good filtering UPS (which should protect them against power outages/spikes/brownouts)?

    1. They actually weren’t, and looking back I can’t believe how stupid I was to assume that in 2019 the UK wouldn’t have power surges.

  19. 3-2-1 rule. If you really care about any given piece of data you need to have at least three copies of it, stored in at least two different physical locations, at least one of which is offline.

    The offline part is more and more important these days in a world of crypto ransoms.

    Auto-syncing backups without snapshots or other ways to access old versions count as half a copy: they’ll save you from most “that disk will never work again” issues, but offer no protection against stupidity or malice.

    1. You’re absolutely right. After 5 years of running the Synology without issue, I got complacent. Hopefully others may learn something from my stupidity. 🙂

  20. Consider investing in a UPS with surge protection and equipment protection guarantees. My CyberPower UPS (https://www.amazon.com/gp/product/B000XJLLKG) guarantees up to $300k worth of connected equipment for up to 3 years. I run an entire rack worth of stuff on that UPS and it’ll maintain power for up to an hour. Highly recommend!

    1. Yeah, I will definitely need surge protection on the new setup, thanks for the comment.

    2. Get a drive dock for <$30 and make backups to spare disk drives you keep elsewhere. Backups to media that are always online will always be vulnerable. Write a script to make it easy, or you won't do it.
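
      A rough sketch of that kind of script, assuming a Linux box and labelled spare disks – the label, mount point and source path are placeholders:

          #!/bin/sh
          # backup-to-dock.sh – mirror the important data onto whichever spare disk is in the dock
          set -e
          mount /dev/disk/by-label/BACKUP1 /mnt/dock     # spare disks labelled e.g. BACKUP1, BACKUP2
          rsync -a --delete /home/data/ /mnt/dock/data/  # exact mirror onto the docked disk
          umount /mnt/dock                               # unmount so the disk can go back in the drawer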

  21. Do an off-site backup. A lightning strike can kill all your drives in one location, or someone might just steal everything from one place.

    1. You’re absolutely right – I used to have off-site backups, but decided in my infinite “wisdom” to cancel them. The new solution will definitely have off-site capabilities.

      1. FWIW, I have an almost identical set up to yours except the external USB *does* get a complete mirror of the NAS and I periodically swap that with one in a desk drawer away from he

        1. Good plan! Don’t get complacent and make the same mistakes I did.

    2. No kidding. I used to store all my data and pics on a RAID1 NAS, thinking who needs the cloud if I can do the job just fine – but one day my home got robbed and, yes, they took the NAS. I never saw all that data again.

        1. Best idea: cross off-site backups with a trusted friend. You back up to his server and he backs up to your server. Each of you should own the off-site disks used for your own backup. Preferably not very far from your home, but not in the same apartment block.
