Author: Kev Quirk

Are We Cyborgs?

I was recently listening to the Joe Rogan podcast where he was interviewing Elon Musk. They discussed a lot of fascinating topics during the interview – it’s a very interesting episode and well worth a listen.

One of the topics that Joe and Elon discuss is whether we will become cyborgs in the future. Elon then made a claim that really struck me…

We’re already cyborgs!

Crazy thought, right? I thought the same when Elon first said it, but once he elaborates things start to make sense. Let’s start with the definition of a cyborg:

A person whose physiological functioning is aided by or dependent upon a mechanical or electronic device.

So Elon’s argument for many of us being cyborgs is that we’re all aided in many aspects of our lives by the Internet and our smartphones.

Even if you think about the sci-fi cyborgs that many of us have seen on TV, we kind of fit the mould. Sci-fi cyborgs have seemingly unlimited photographic memory. They can quickly look up information from all of humanity’s knowledge, and they have ways to easily communicate with others.

Check, check and check. We can do all of that with our smartphones. Gone are the days of looking something up in a book (heaven forbid!). Need to navigate somewhere? No problem, you can whip out your phone. Need to look something up? No problem, pick up that black mirror!

Personally, I’ve tried to ditch my smartphone, but it was a complete failure – I’m just too reliant on it. I’m not even that bad compared to many people. According to Apple Screen Time, I spend approximately 1.5 hours per day on my smartphone. That’s around half of the national average in the UK.

Are we cyborgs?

After listening to Elon and his justification for why he thinks we are cyborgs, I have to say I agree. What do you think, are we cyborgs or is this a little too far?

If you want to watch the entire interview, you can do so below:

How To Backup Nextcloud

I recently wrote a guide on how to setup your own Nextcloud server; it’s a great way of ensuring your personal data is kept private. However, it’s also important to backup Nextcloud too.

Isn’t Nextcloud My Backup?

No, it isn’t. Nextcloud is not a backup solution; it’s a way of syncing your data. Think about it: if you delete a file from computer A, that deletion will immediately be synced everywhere via Nextcloud. There are protections in place, such as the trash bin and version control, but they are no substitute for a real backup.

Since building my own server, I have come up with a pretty decent way of backing up my data that follows the 3-2-1 backup principle.

At least 3 copies of your data, on 2 different storage media, 1 of which needs to be off-site.
— The 3-2-1 backup rule


In order to effectively backup Nextcloud, there are a few pieces of hardware and software involved. There is an initial cost to the hardware, but it isn’t significant.

To backup Nextcloud you will need:

  1. An Ubuntu based server running the Nextcloud Snap
  2. A USB hard drive that is at least double the size of the data you’re backing up (I’d recommend getting the biggest you can afford)
  3. Duplicati backup software installed on your Nextcloud server
  4. A Backblaze B2 account
  5. Around 30-60 minutes to set it all up

At this point I will assume that you have connected and mounted your USB hard drive to the server. If you haven’t done that yet, take a look at my guide on how to mount a partition in Ubuntu.

!!! Note: this process is designed around the Nextcloud Snap installation, not the manual installation.


Once you have completed this guide, you will be able to do the following:

  1. Automatically backup your entire Nextcloud instance (including your database) every day
  2. Create a log file so you can see if the backup worked
  3. Sync the backup to B2 cloud storage (it will be encrypted before transmission)
  4. Delete old backups so your hard drive doesn’t fill up
  5. Receive email alerts once the backup completes

User Setup

For this guide we will be using a dedicated user for backing up. This allows us to keep the backup routine separate from your normal user account, which is good for security.

In this guide, I will be using ncbackup as the user account. For security reasons, I would recommend using a different username. Let’s start by creating the user and the directories we will need to store our backups.

# Make new user
sudo adduser ncbackup

# Switch to new user account
su - ncbackup

# Make directories for Backups
mkdir Backups
mkdir Backups/Logs

# Logout to switch back to normal user
exit

Now we have the directories we need in place, let’s create the script that will run our backups. In this example, I’m using nano, but feel free to use any text editor you like. To learn more about nano, click here.

nano /usr/sbin/

Backup Nextcloud

Populate the file with the following, ensuring you change all of the values in red to the appropriate values for your setup.


#!/bin/bash

# Output to a logfile
exec &> /home/ncbackup/Backups/Logs/"$(date '+%Y-%m-%d').txt"

echo "Starting Nextcloud export..."

# Run a Nextcloud backup using the export command bundled with the snap
nextcloud.export

echo "Export complete"
echo "Compressing backup..."

# Compress backed up folder
tar -zcf /home/ncbackup/Backups/"$(date '+%Y-%m-%d').tar.gz" /var/snap/nextcloud/common/backups/* 

echo "Nextcloud backup successfully compressed to /home/ncbackup/Backups"

# Remove uncompressed backup data
rm -rf /var/snap/nextcloud/common/backups/*

echo "Removing backups older than 14 days..."

# Remove backups and logs older than 14 days
find /home/ncbackup/Backups -mtime +14 -type f -delete
find /home/ncbackup/Backups/Logs -mtime +14 -type f -delete

echo "Complete"

echo "Nextcloud backup completed successfully."
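If you want to see what the retention step does before trusting it with real data, here is a throwaway sketch you can run safely – it uses a temporary directory, not your real backup location:

```shell
#!/bin/bash
# Throwaway demo of the retention step - temp directory, not real backups
demo=$(mktemp -d)
touch "$demo/recent.tar.gz"                 # a fresh backup
touch -d "20 days ago" "$demo/old.tar.gz"   # a stale one

# Same pattern the backup script uses: delete files older than 14 days
find "$demo" -mtime +14 -type f -delete

ls "$demo"   # only recent.tar.gz survives
rm -rf "$demo"
```

The `-mtime +14` test matches files last modified more than 14 days ago, which is why the fresh file is left alone.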

Now we need to make our backup script executable:

sudo chmod +x /usr/sbin/

A lot of the commands in our script will require sudo access, but we don’t want to give full sudo access to our ncbackup user, as it doesn’t need elevated rights globally. However, we do want to be able to run the backup script with sudo rights, and we want to do it without a password.

To accomplish this, we need to use visudo. Adding the lines below to visudo will allow the ncbackup user to run the backup script as sudo, without a password. The user will not be able to run anything else as sudo.

sudo visudo

Add the following lines to the end of the visudo file:

# Allow ncbackup to run script as sudo
ncbackup ALL=(ALL) NOPASSWD: /usr/sbin/

Enabling sudo access for the backup script introduces another potential security risk. The ncbackup user can run the backup script as sudo without a password. So a threat actor could potentially edit the script and run any command as sudo without a password.

However, we saved the script in /usr/sbin, which the ncbackup user cannot write to. So while ncbackup can run the script as sudo, it cannot modify it, which closes that loophole.

As an extra layer of security, we will stop the ncbackup user from being able to login to the server at all:

sudo usermod -s /sbin/nologin ncbackup

If at a later date you need to be able to login using the ncbackup user, you can revert this by running the following command:

sudo usermod -s /bin/bash ncbackup
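If you want to double-check which shell is recorded for an account, getent will show you. The example below queries root, since that account always exists; substitute ncbackup on your server and you should see /sbin/nologin after the change:

```shell
# Print the login shell field (the 7th) from the passwd entry
getent passwd root | cut -d: -f7
```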

Schedule Backups

Now we have the backup script set up, we need to schedule the backup to run automatically; for this, we will use Cron.

Run the following command to edit the Cron settings for the ncbackup user:

sudo crontab -u ncbackup -e

Once you’re in crontab, you need to add the following lines to the bottom of the file:

# Nextcloud backup cron (runs at 2am daily)
0 2 * * * sudo /usr/sbin/

The settings above will make the backup run at 02:00am every day. You can change this to whatever value you like, but I would recommend running the backup every day.

The first value represents minutes, then hours, then days etc. So if you wanted to run the backup at 03:30am, your Crontab entry would look something like this:

# Nextcloud backup cron (runs at 03:30am daily)
30 3 * * * sudo /usr/sbin/
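For reference, the five Crontab fields are minute, hour, day of month, month, and day of week, in that order:

```
# ┌ minute (0-59)
# │  ┌ hour (0-23)
# │  │ ┌ day of month (1-31)
# │  │ │ ┌ month (1-12)
# │  │ │ │ ┌ day of week (0-6, Sunday = 0)
# │  │ │ │ │
  30 3 * * * <command-to-run>
```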

Now Wait…

That’s most of the setup complete at this point. The next thing to do is to wait 24 hours for your backup to complete automatically (or you could run the script manually yourself).

Once the script has run, within your backups folder, you should see a tar.gz file that has a name which corresponds with the date the backup ran:

kev@server:~$ ls /home/ncbackup/Backups/
2019-07-03.tar.gz  Logs

Within the Logs folder, you should also see a <date>.txt file that corresponds to the backup. You can open this to see how your backup went:

kev@server:~$ cat /home/ncbackup/Backups/Logs/2019-07-03.txt 
Starting Nextcloud export...
WARNING: This functionality is still experimental and under
development, use at your own risk. Note that the CLI interface is unstable, so beware if using from within scripts.

Enabling maintenance mode...
Exporting apps...
              0 100%    0.00kB/s    0:00:00 (xfr#0, to-chk=0/1)
Exporting database...
Exporting config...
Exporting data...
         15.90M 100%  109.87MB/s    0:00:00 (xfr#105, to-chk=0/139) 

Successfully exported /var/snap/nextcloud/common/backups/20190703-130201
Disabling maintenance mode...
Export complete
Compressing backup...
tar: Removing leading `/' from member names
Nextcloud backup successfully compressed to /home/ncbackup/Backups
Removing backups older than 14 days...
find: ‘./home/ncbackup/Backups/’: No such file or directory
Nextcloud backup completed successfully.

With the echo statements we put in the script, you can see at what point in the backup things failed, if they do in fact fail.

!!! Note: there are masses of improvements that can be added to this script, but this satisfies my needs. If you do add improvements, please feel free to post a comment below.

Setup Duplicati

You now have a single layer of backups for Nextcloud. However, if you want to abide by the 3-2-1 rule of backups (which I highly recommend), then we now need to use Duplicati to add additional layers to our backup routine.

To install Duplicati, go to this link, right-click the Ubuntu DEB and select Copy link location. Then amend the commands below as appropriate.

# Download the DEB using the link you copied
wget [link-you-copied]

# Install the package
sudo dpkg -i duplicati_[version].deb

# If you get a dependency error, run the following
sudo apt --fix-broken install

We now need to enable the Systemd service, and configure it so Duplicati runs automatically on boot:

# Enable Duplicati service
sudo systemctl enable duplicati

# Start the Duplicati service
sudo systemctl start duplicati

By default the Duplicati service will only listen on localhost, so if you try to access the IP of the server from another device, you won’t get the Duplicati webGUI.

To fix this, edit the DAEMON_OPTS option within the Duplicati default config to the following:

sudo nano /etc/default/duplicati

# Additional options that are passed to the Daemon.
DAEMON_OPTS="--webservice-interface=any"

Restart Duplicati so the config changes take effect:

sudo systemctl restart duplicati

You should now be able to access the Duplicati web interface by going to http://server-ip:8200. You will be asked to set a password for Duplicati when you first login, make sure this is a strong one!

!! Security Note: My server is hosted at home, and I don’t expose port 8200 to the internet. If your server is not at home, then I would strongly suggest you configure something like iptables, or the DigitalOcean firewall, to restrict access to port 8200.

Configure Duplicati Backups

Now it’s time to configure our backups in Duplicati. We will configure two backup routines: one to USB, and another to Backblaze B2 for off-site storage.

Let’s do the USB backup first. Within the Duplicati webGUI, click on the Add Backup button to the left of the screen.

This is a very straightforward process where you choose the destination (our USB drive), the source (the output from our backup script) and the schedule.

When creating your backup routines in Duplicati, always ensure you encrypt your backups and use a strong passphrase.

Also, always make sure your Duplicati backups run at different times to your other backups. Personally, I go for the following setup:

  • 02:00 – Local Nextcloud backup script runs via Cron
  • 03:00 – Duplicati backs up to USB
  • 04:00 – Duplicati backs up to Backblaze B2

I always leave the Backblaze backup to run last, as it then has up to 22 hrs to complete the upload before the next backup starts, so they shouldn’t interfere with one another.

When it comes to configuring your Backblaze backups, change the destination from Local to B2 Cloud Storage. You will need your B2 bucket information and application keys from Backblaze to complete the config.

Once you have entered your Backblaze information, click Test Connection to make sure Duplicati can write to your B2 bucket correctly.

!!! Important note: You will need to add payment information to your Backblaze account before backing up, otherwise your backups will fail.

To give you an idea of what Backblaze costs, I’m currently backing up around 50GB of data with them, and I’m charged less than $1/month. Actually, it’s close to $0.20/month.

Personally, I only keep 7 days of backups on Backblaze, as I only have it for a break-glass scenario where all my local backups have failed. I don’t need data retention in the cloud; that’s what my USB drive is for.

Duplicati Email Notifications

You can configure email notifications for Duplicati backups; this way, you will always know if your backups are working as needed.

To do this, click on the Settings option on the left of the screen and scroll all the way down to the bottom, where it says Default options. Click the option that says Edit as text, then paste the following into the field:

# Change as needed

--send-mail-subject=Duplicati %PARSEDRESULT%, %OPERATIONNAME% report for %backup-name%
--send-mail-from=Backup Mailer <>

I personally use Amazon’s SES service for this, but you should be able to use any SMTP server.
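For reference, the options above only cover the mail subject and sender; a working setup also needs at least the recipient and SMTP server details. Something like the sketch below (all values are placeholders – double-check the option names against Duplicati’s advanced-options list):

```
--send-mail-to=you@example.com
--send-mail-url=smtps://smtp.example.com:465
--send-mail-username=your-smtp-username
--send-mail-password=your-smtp-password
```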

That’s It!

You’re done. That’s it. Finito. You now know how to backup Nextcloud in such a way that it abides by the cardinal 3-2-1 backup rule, and it lets you know when your backups have run.

TEST YOUR BACKUPS! I can’t stress this enough. Once your backups have been running for a few days, make sure you run a test restore (not on your live system) to make sure they can all be restored. After all, there’s no point in having backups if you can’t restore from them!

To restore the backup you have made of Nextcloud into a vanilla Nextcloud snap installation, you need to decompress your backup to /var/snap/nextcloud/common, then run the following command:

sudo nextcloud.import /var/snap/nextcloud/common/backup-to-restore

Yes, restoring your backup really is that simple!
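If you want to convince yourself of the mechanics without touching a live server, the compress/extract round trip can be rehearsed in a throwaway directory. All paths below are temporary stand-ins for the real backup locations; note that tar strips the leading / when archiving, which is why a real restore extracts relative to /:

```shell
#!/bin/bash
# Simulate the backup: tar a throwaway folder the same way the script does
src=$(mktemp -d)
mkdir -p "$src/data"
echo "hello" > "$src/data/file.txt"

archive=$(mktemp --suffix=.tar.gz)
tar -zcf "$archive" "$src"/*    # tar warns it is removing the leading '/'

# Simulate the restore: extract somewhere safe instead of /
dest=$(mktemp -d)
tar -xzf "$archive" -C "$dest"

# The original path is recreated (minus the leading /) under $dest
find "$dest" -name file.txt
```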


This is by no means the perfect way to backup Nextcloud, but it does work and it has worked for me for quite some time now. You may have a different/better way of backing up, if you do, please let me know in the comments below.

Finally, I’d like to thank my friend Thomas from work, who helped improve my script a little, and gave me a couple of ideas to improve the security.

Thanks, Tom. 🙂

How To Install Nextcloud On Shared Hosting

After writing my Nextcloud setup guide, people have asked me whether they can install Nextcloud on shared hosting, like cPanel.

Some people don’t want to pay for an additional server to host Nextcloud. That doesn’t mean they should be left out in the cold, having to forego their privacy by relying on third-party tools like Dropbox, Google Drive and Microsoft’s OneDrive.

In this post I will show you how you can install Nextcloud on shared hosting platforms, like cPanel and Plesk.

Why Install Nextcloud On Shared Hosting?

There are many reasons why someone might want to install Nextcloud on shared hosting, some of which may be:

  • Shared hosting is cheap
  • Shared hosting comes with lots of storage
  • You don’t have to administer the server
  • Backups are very simple

The best part of all this is that the process is actually really simple! If you don’t already have a shared hosting package, I can personally recommend both Unlimited Web Hosting and NameCheap.


The first thing you need to do is log in to your shared hosting account (in this example I will be using cPanel) and create a database. Nextcloud requires a database to store all the administrative data.


Find the MySQL Database Wizard within your cPanel account, click it and follow the on-screen instructions to setup the database. Make sure you note down the database name, user and password as you will need those later.


Now we have created the database, we need to setup a domain, or sub-domain, for the Nextcloud instance to use.

Again in cPanel, go to the Subdomains section and add the subdomain you wish to use. In this example, my subdomain is

Note: Both cPanel and Plesk support free TLS certificates; make sure you configure this so your Nextcloud domain is using HTTPS before you run the web installer.

Install Wizard

Now we have the domain, database and TLS certificate configured, it’s time to run the Nextcloud web installer.

  1. Right-click here and save the file to your computer
  2. Upload setup-nextcloud.php to the directory you specified for the Nextcloud domain
  3. Point your web browser to

Click Next on the welcome screen to get started with the install wizard.

The installer will then run a dependency check to make sure your shared hosting account has everything it needs for Nextcloud to work.

You will also be asked which directory you want to install Nextcloud in. By default this will be nextcloud. Personally, I change this field to a single dot, so that Nextcloud is installed at the top level of the domain.

That way, you don’t need to go to /nextcloud every time you want to visit your instance.

After a minute or so, you should see a message saying the installation was successful.

Click next again to configure your admin account, then click on the down arrow next to where it says Storage & Database.

Leave the data folder unchanged, select MySQL/MariaDB and enter the database details you noted down earlier.

Finally, click Finish Setup and after a minute or so, you should be greeted with the files interface for your brand new Nextcloud instance.

If you decide to install Nextcloud on shared hosting, I would strongly recommend enabling Server-Side Encryption. This will protect your data in a shared environment. You can enable it from Settings > Administration > Security.


Congratulations, you just installed Nextcloud on shared hosting; I told you it was easy!

You can now start familiarising yourself with the Nextcloud interface and all it has to offer. You can also expand the functionality of Nextcloud by installing apps.

How To Mount A Partition In Ubuntu

If you want to mount a partition in Ubuntu, all you really need to do is plug the drive into your machine and open your file browser. However, what if you want to ensure that partition always mounts on boot?

Well, fear no more intrepid Ubuntu explorer, because I have you covered. This quick and simple guide will show you how to mount a partition in Ubuntu and ensure it’s always mounted automatically on boot.

Step One: Recon

First things first – we need to get some basic information about the partition, like the drive it sits on, before we start configuring the mount point.

Run the following command in terminal to work out which partition/drive is the one you want to mount.

lsblk -o name,mountpoint,label,size,fstype,uuid | egrep -v "^loop"

This command will show all the devices and partitions that your user can see on the system. We’re using the egrep command to remove all squashfs partitions that snap packages use. This way the output is much cleaner.

The output should look similar to the image below. I am going to be mounting the NTFS partition that’s on sdb1 (shown below in the red box).

Step Two: The Setup

We can see from the output above that the drive I’m trying to mount (in my case a USB HDD) is already mounted at /media/kev/TOSHIBA EXT, but I want to ensure the drive always mounts at /media/backups.

Create the directory that will be used as the mount point for the partition:

sudo mkdir /media/backups

Next, we will edit the fstab file which contains all the mount point configuration for the system.

sudo nano /etc/fstab

You need to add a line to the bottom of the fstab file that contains the config for the partition you want to mount. That line should look something like this:

# Mount point for my USB backup drive
UUID="843671A53671993E"  /media/backups  ntfs  defaults  0  2

Note the UUID and the part that says ‘ntfs’. The partition I’m trying to mount is NTFS, so I have entered ‘ntfs’ in my fstab entry. However, if you’re using a different filesystem, for example EXT4, you would put ‘ext4’ here.

The UUID was printed out when we did the recon earlier. It’s the long hexadecimal string. These are different lengths depending on the filesystem type.
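For comparison, an equivalent entry for an ext4 partition might look like the line below. The UUID here is made up for illustration; use the one lsblk printed for your partition:

```
# Hypothetical entry for an ext4 data partition
UUID="3c8a61f2-5a2b-4c1d-9e47-0f13b2a6c9d1"  /media/backups  ext4  defaults  0  2
```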

Finally, you need to mount the partition at the new location. That’s just a single command:

sudo mount -av

If you run the ‘lsblk’ command again, you should now see your partition mounted at the new location. If you don’t, reboot your machine.

Step 3: Set Permissions

You may not have to do this step, but I thought I’d include it just in case. You may find that when you browse your newly mounted partition, that you can’t write anything to it. Let’s fix that.

You will need to take ownership of the partition with the ‘chown’ command, then change the permissions with the ‘chmod’ command. In this case we will set the permissions to 755.

If you want to learn more about Linux filesystem permissions, this is a great guide.

sudo chown -R user:group /mount/point/path
sudo chmod -R 755 /mount/point/path

Be sure to update the parts of the above commands shown in red to suit your system. In most cases, your user and group names will likely be the same. For me, the commands look like this:

sudo chown -R kev:kev /media/backups
sudo chmod -R 755 /media/backups
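If you’re curious what 755 actually grants, you can inspect the numeric mode on a throwaway directory with stat, without touching your mount point:

```shell
# 755 = owner rwx (7), group r-x (5), other r-x (5)
demo=$(mktemp -d)
chmod 755 "$demo"
stat -c '%a' "$demo"   # prints 755
rm -rf "$demo"
```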

That’s it!

If you want to mount a partition in Ubuntu, the process really is this simple. Whilst most long-term Linux users will likely know this process like the back of their hand, this can be a very frustrating process for new users.

This process can be useful for all kinds of things, but really comes into its own when you want to permanently reference a partition, like setting up backup routines on a server. I know it helped me when setting up my home server.

How To Setup A Nextcloud Server In Ubuntu

If you’re a regular reader of this blog, you will know that I recently built a Nextcloud server at home after I nearly lost all of my data. I’ve learned a lot about Nextcloud since the build, so thought I’d write a setup guide.


This guide assumes a certain level of technical ability. If you are not comfortable administering your own server, then I would suggest you pay someone else to host your Nextcloud server. I can personally recommend OwnCube.

As well as some technical knowledge, you will also need access to a LEMP server (Linux, Nginx, MariaDB & PHP). If you don’t have one, I’d recommend this guide by LinuxBabe.

You do NOT need to install the LEMP stack if you intend to use the Nextcloud snap package.

Finally, you will need a domain name, or sub-domain, that has the A record pointing to your Nextcloud server’s public IP.

Snaps – The Simple Method

If you don’t want to install LEMP and Nextcloud manually, you can opt for the simple method, which is the snap package.

The snap package is basically a Nextcloud server all bundled up in a nice little bucket that’s ready to go. However, you may have some issues later on when it comes to backing up your data. This is because snap packages are segregated from the rest of the OS, so your user account will not have access to the Nextcloud data by default.

If you want to get your Nextcloud server up and running quickly, with very little ongoing admin, then the snap package is worth considering. The snap will also update automatically as new versions are released.

You will also need to configure Let’s Encrypt to ensure that your Nextcloud server is using HTTPS, and therefore protecting your credentials (among other things) when logging in.

If you’re interested, this is why HTTPS is important.

To install the Nextcloud snap and configure Let’s Encrypt, you will need to run the following commands:

sudo snap install nextcloud

# Enable HTTPS via Let's Encrypt
sudo nextcloud.enable-https lets-encrypt

# Add your domain to the trusted domains
sudo snap run nextcloud.occ config:system:set trusted_domains 1 --value=your-domain.com

# Restart the snap to apply the changes
sudo snap restart nextcloud

Remember to update the domain to reflect your own.

The snap package is a quick and easy way of getting Nextcloud up and running, but I wanted more control. So let’s take a look at how we install and configure Nextcloud manually.

Nextcloud Server Manual Installation

The first thing we need to do is get the download link for the Nextcloud server package. At the time of writing this guide, the latest version of Nextcloud is 16.0.1.

Go to this link, right click on the blue Download button and select Copy link location. Then let’s head to our server and start work:


# Download the Nextcloud package using the link you copied
wget https://download.nextcloud.com/server/releases/nextcloud-16.0.1.zip

# Install unzip and extract the Nextcloud package
sudo apt-get install unzip
sudo unzip nextcloud-16.0.1.zip -d /usr/share/nginx/

The Nginx user (www-data) needs to be given ownership of the Nextcloud directory, and everything within it, so Nginx can write to the Nextcloud folder:

sudo chown www-data:www-data /usr/share/nginx/nextcloud/ -R

Setup A Database (MariaDB)

Nextcloud requires a database to store administrative data. I personally went with MariaDB, which is a fork of MySQL.

# Connect to MariaDB server
sudo mariadb

# Create a database & user
create database nextcloud;
create user nextclouduser@localhost identified by 'some-password';

# Grant the user all privileges to the Nextcloud database
grant all privileges on nextcloud.* to nextclouduser@localhost identified by 'some-password';

# Flush privileges and exit
flush privileges;
exit;
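Before you leave the MariaDB prompt, it’s worth a quick sanity check that the database and user were created. These queries just read the catalog, using the names from above:

```sql
-- Run at the MariaDB prompt
show databases;                      -- 'nextcloud' should be listed
select user, host from mysql.user;  -- 'nextclouduser' @ 'localhost' should be listed
```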

Configure Nginx

At this point, we have the Nextcloud files copied over and a database ready and waiting. It’s now time to configure Nginx so Nextcloud actually works.

Let’s start by creating a Nextcloud config file, so Nginx knows what to do with requests that are destined for our Nextcloud URL.

sudo nano /etc/nginx/conf.d/nextcloud.conf

Now paste the following into the file, editing the parts in red as needed. If you don’t know how to use Nano, this link might help.

server {
    listen 80;

# Add security related headers
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Robots-Tag none;
    add_header X-Download-Options noopen;
    add_header X-Permitted-Cross-Domain-Policies none;

# Path to your Nextcloud folder - can be optionally changed
    root /usr/share/nginx/nextcloud/;

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

# Card and Cal DAV redirects
    location = /.well-known/carddav {
        return 301 $scheme://$host/remote.php/dav;
    }
    location = /.well-known/caldav {
        return 301 $scheme://$host/remote.php/dav;
    }

# For Let's Encrypt challenges
    location ~ /.well-known/acme-challenge {
        allow all;
    }

# Disable gzip
    gzip off;

# Specify paths to error pages
    error_page 403 /core/templates/403.php;
    error_page 404 /core/templates/404.php;

# Redirect bare domain to the index.php file
    location / {
        rewrite ^ /index.php$uri;
    }

# Block some stuff because, you know, bad guys
    location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
        deny all;
    }
    location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) {
        deny all;
    }

# Here be FastCGI dragons
    location ~ ^/(?:index|remote|public|cron|core/ajax/update|status|ocs/v[12]|updater/.+|ocs-provider/.+|core/templates/40[34])\.php(?:$|/) {
        include fastcgi_params;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;

# Avoid sending the security headers twice
        fastcgi_param modHeadersAvailable true;
        fastcgi_param front_controller_active true;
        fastcgi_pass unix:/run/php/php7.2-fpm.sock;
        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;
    }

    location ~ ^/(?:updater|ocs-provider)(?:$|/) {
        try_files $uri/ =404;
        index index.php;
    }

# Let's cache all the things
    location ~* \.(?:css|js)$ {
        try_files $uri /index.php$uri$is_args$args;
        add_header Cache-Control "public, max-age=7200";

# More security related headers (these are supposed to be duplicates of the ones above)
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";
        add_header X-Robots-Tag none;
        add_header X-Download-Options noopen;
        add_header X-Permitted-Cross-Domain-Policies none;
# Don't log access to assets (optional)
        access_log off;
    }

    location ~* \.(?:svg|gif|png|html|ttf|woff|ico|jpg|jpeg)$ {
        try_files $uri /index.php$uri$is_args$args;
# Don't log access to assets (optional)
        access_log off;
    }
}
Test the Nginx config to ensure the new config will work as expected. We should see a test is successful message.

sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

# Config is fine, restart Nginx to apply the config
sudo systemctl reload nginx

The final step of the installation process is to install all of the PHP modules we will need for our Nextcloud server:

sudo apt-get install php-imagick php7.2-common php7.2-gd php7.2-json php7.2-curl  php7.2-zip php7.2-xml php7.2-mbstring php7.2-bz2 php7.2-intl

At this point, everything is installed and we should be able to navigate to our Nextcloud URL:

DO NOT configure anything at this point, as the connection is currently insecure.

Configure Let’s Encrypt

Now we have Nextcloud running, it’s time to secure it with a TLS certificate. We will do this using Let’s Encrypt.

# Install Let's Encrypt certbot and the required Nginx plugin
sudo apt-get install certbot python3-certbot-nginx

# Generate the certificate
sudo certbot --nginx --agree-tos --redirect --hsts --staple-ocsp --email -d

Once the certificate has been generated, you will get a warning that says we were unable to set up enhancement ensure-http-header for your server, however, we successfully installed your certificate.

Don’t worry, this is normal and expected behaviour when installing a Let’s Encrypt certificate with Nginx. Let’s fix that by heading back to our Nginx configuration:

sudo nano /etc/nginx/conf.d/nextcloud.conf

We need to tell Nginx to use the Let’s Encrypt certificates, redirect HTTP requests to HTTPS, and add the HSTS header configuration. So let’s change the Nginx config from this:

server {
    listen 80;

# Add security related headers
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Robots-Tag none;
    add_header X-Download-Options noopen;
    add_header X-Permitted-Cross-Domain-Policies none;


To this:

server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;

# Change your-domain.com to your own domain in the lines below
    server_name your-domain.com;
    ssl_certificate /etc/letsencrypt/live/your-domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your-domain.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

# Add security related headers
    add_header Strict-Transport-Security "max-age=31536000" always;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Robots-Tag none;
    add_header X-Download-Options noopen;
    add_header X-Permitted-Cross-Domain-Policies none;


Let’s test the Nginx configuration again and restart Nginx to apply the changes:

sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

sudo systemctl reload nginx

Back in the browser window, hit the refresh button and we should be redirected to HTTPS with a valid certificate. Life is good!

Setup A Data Folder

Our Nextcloud server is now more secure, so we can configure the admin account and database. I would recommend changing the Nextcloud data folder, so that our user’s personal data is stored separately from the Nextcloud admin data.

In the example below I called the data folder ncdata, but you can call it whatever you want.

sudo mkdir /usr/share/nginx/ncdata

# Make Nginx user the owner of the folder
sudo chown www-data:www-data /usr/share/nginx/ncdata -R
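If you want to be sure the permissions took, a quick optional check is to ask the web server user whether it can write to the folder (this assumes the ncdata path created above):

```shell
# Sanity check: confirm the Nginx user (www-data) can write to the data folder
sudo -u www-data test -w /usr/share/nginx/ncdata && echo "www-data can write to ncdata"
```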

Whichever path you created and specified above, make sure you put that in the data folder field on the Nextcloud setup page. We also need to enter the database name, user and password that we created earlier.

Once we have entered the admin credentials, data folder path and the database details, we hit the finish button, and hey presto! We’re presented with the Nextcloud interface, complete with files. Life is getting better!

Congratulations, you have just setup and configured your very own Nextcloud server!

Next Steps

You thought we were all done? Nope, there’s more for us to do yet, but we’re nearly there. I promise.

Take this opportunity to have a click around and familiarise yourself with Nextcloud. Once you have done that, click on your avatar (top right corner) and select settings.

From the settings screen, go to Administration > Overview from the right-hand pane. In the overview pane there are likely to be a few warnings in orange text. Let’s fix them, shall we?

Warning 1: PHP not setup properly

PHP does not seem to be setup properly to query system environment variables. The test with getenv(“PATH”) only returns an empty response. Please check the installation documentation for PHP configuration notes and the PHP configuration of your server, especially when using php-fpm.

The solution to this is to enable some environment variables within PHP’s www.conf file.

sudo nano /etc/php/7.2/fpm/pool.d/www.conf

Now look for the section of the file shown in the code below and remove the preceding semicolon from all the lines.

;env[PATH] = /usr/local/bin:/usr/bin:/bin
;env[TMP] = /tmp
;env[TMPDIR] = /tmp
;env[TEMP] = /tmp
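If you’d rather not open the file in an editor, the same change can be made with a sed one-liner sketch (the PHP 7.2 FPM path is assumed from this guide; the -i.bak flag keeps a backup copy):

```shell
# Uncomment every env[...] line in www.conf, keeping a .bak backup
sudo sed -i.bak 's/^;env\[/env[/' /etc/php/7.2/fpm/pool.d/www.conf
```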

Save the file and restart PHP:

sudo systemctl reload php7.2-fpm.service

Warning 2: Small PHP memory limit

The PHP memory limit is below the recommended value of 512MB.

To fix this we need to edit the ‘php.ini’ file to increase the maximum amount of memory that the PHP process can consume. While we’re there, we might as well increase the maximum upload limit too.

sudo nano /etc/php/7.2/fpm/php.ini

Now look for the following values within the php.ini file and change them as needed. You can increase the PHP memory limit above 512MB if you wish.

On my production server, I have the PHP memory limit set to 1GB, as my server has 4GB RAM.

; Default is 128M
memory_limit = 512M

; Default is 2M
upload_max_filesize = 100M

# Restart PHP service again
sudo systemctl reload php7.2-fpm.service
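As a non-interactive alternative, the same two edits can be sketched with sed (again assuming the PHP 7.2 FPM path used in this guide):

```shell
# Bump the PHP memory limit and upload size in place
sudo sed -i \
  -e 's/^memory_limit = .*/memory_limit = 512M/' \
  -e 's/^upload_max_filesize = .*/upload_max_filesize = 100M/' \
  /etc/php/7.2/fpm/php.ini
```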

Restart PHP again to apply the new settings and head back to the admin overview screen. Hit refresh and hey presto! Two of the warnings are already gone. Life is getting better all the time!

Warning 3: MySQL 4-byte characters

MySQL is used as database but does not support 4-byte characters. To be able to handle 4-byte characters (like emojis) without issues in filenames or comments for example it is recommended to enable the 4-byte support in MySQL. For further details read the documentation page about this.

To resolve this final warning, we need to enable 4-byte character support. We can do this using Nextcloud’s command line utility, occ. Run the commands below to enable 4-byte support.

cd /usr/share/nginx/nextcloud
sudo -u www-data php occ config:system:set mysql.utf8mb4 --type boolean --value="true"
sudo -u www-data php occ maintenance:repair
sudo -u www-data php occ maintenance:mode --off

Refresh the admin overview screen again and we see that all the warnings are cleared. There may be other warnings about caching and referrer policies, but these are nothing to worry about really.

You can configure memcache on your Nextcloud server if you wish, but it’s really not required if your Nextcloud server is designed for personal/family use.

If you notice poor server performance, then you can read about how to enable caching, among other things, in the Nextcloud documentation.
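For reference, if you do decide to enable local caching later, it’s a one-line addition to Nextcloud’s config.php (a sketch only – it assumes the php-apcu package is installed, and it isn’t required for this guide):

```
'memcache.local' => '\OC\Memcache\APCu',
```

The config.php file lives in the config folder of your Nextcloud install, e.g. /usr/share/nginx/nextcloud/config/config.php.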


Nextcloud Security Scan

Security is hugely important (duh!), especially since your Nextcloud server is likely to be available on the Internet. Luckily, Nextcloud have provided a scan utility that will check our Nextcloud server and provide a security rating.

The scoring is graded as follows:

  • E = This server is vulnerable to at least one vulnerability rated “high”. It is likely quite easy to break in and steal data or even take over the server.
  • D = This server is vulnerable to at least one vulnerability rated “medium”. With a bit of effort, like creating a specially crafted URL and luring a user there, an attacker can likely steal data or even take over the server.
  • C = This server is vulnerable to at least one vulnerability rated “low”. This might or might not provide a way in for an attacker and will likely need some additional vulnerabilities to be exploited.
  • A = This server has no known vulnerabilities but there are additional hardening capabilities available in newer versions making it harder for an attacker to exploit unknown vulnerabilities to break in.
  • A+ = This server is up to date, well configured and has industry leading hardening features applied, making it harder for an attacker to exploit unknown vulnerabilities to break in.

As you can see, our server gets an A+, so we know we’re pretty secure. Life is really good! 🙂

That’s It

If you have made it this far you should now have a secure and private Nextcloud server. Congratulations, you no longer need to rely on services like Dropbox, Google or Microsoft.

From here, I would suggest taking a look at the many apps that Nextcloud has to offer. You can do some awesome stuff like:

  • Add contacts, calendar and mail
  • Add a password manager
  • Sync your bookmarks
  • Sync your phone’s data (Android & iOS)
  • Add an RSS reader
  • And many, many more

I hope this Nextcloud server setup guide was helpful. Who knows, maybe you will be able to start taking control of your privacy and start de-Googling, just like I did.

Improving My Ubuntu Workflow

Ubuntu is great. I’ve been using it since 2009 and I currently run Ubuntu MATE – I even run it on my home server. However, I’m a fan of using the keyboard where possible, so I was looking for ways to improve my Ubuntu workflow so that I don’t have to reach for my mouse too often.

After a little trial and error, I’ve settled on a couple of really useful applications that have increased my productivity immensely. So much so that I thought I’d write about it on here.

The Default Ubuntu Workflow

The default Ubuntu workflow is very good, especially in Ubuntu MATE, but it is very mouse-centric. I do a lot of typing on my machines (mostly emails), so having to reach for my mouse when I want to do anything outside of my current window can disrupt my workflow.

The terminal is also one of the tools I use often, so I needed a way of managing my applications and files, as well as a terminal window, without lifting my fingers off the keyboard.

By installing two applications – Ulauncher and Guake – I managed to achieve exactly what I wanted.


Ulauncher

Ulauncher is an application launcher that works with a number of Linux distributions. It allows users to launch applications using a simple shortcut command.

You may think ‘So what? You can do that with the MATE launcher’ and you would be right. However, Ulauncher supports extensions, so you can expand its functionality.

For example, I have extensions that will search DuckDuckGo, manage tracks in Spotify and generate passwords. You can see a full list of extensions here.

I hit a predefined shortcut key (Ctrl+Space by default) and Ulauncher pops up, allowing me to search for files, skip tracks, generate passwords and a whole lot more – all without lifting my hands from the keyboard.

# Install Ulauncher from its official PPA
sudo add-apt-repository ppa:agornostal/ulauncher
sudo apt-get update
sudo apt-get install ulauncher


Guake

Simply put, Guake is a pop-down terminal. Just like Ulauncher, I hit a specific keyboard shortcut and the terminal window drops down from the top of the screen.

I don’t even need to keep the terminal window open. I hit the shortcut key – in my case tilde (~). Guake pops up and I enter my command. I hit tilde again and the terminal window hides, allowing me to get on with what I was doing while the terminal command chugs away in the background.

If I hit the same shortcut a few minutes later, I can see the output of what my command has been doing.

# Guake is already in the Ubuntu repositories
sudo apt-get install guake


Between Ulauncher and Guake, I’ve managed to really improve my workflow in Ubuntu and at the same time, increase my productivity.

Do you have a novel way of managing your Ubuntu workflow? If so, feel free to tell me about it in the comments below.

My Home Server – 2 Months On

I recently wrote about how I nearly lost all my data, then later I wrote about how I recovered from that by building a new server. The new server has now been in place for a couple of months, so I thought I would give you guys an update.

Not Plain Sailing

I knew that building the new server would be difficult; things were likely to change and it was all going to be a case of trial and error.

If you look at the building my home server post, you will see that I decided to go with Syncthing for file sync, Plex for media, Ubuntu MATE as the OS and Cloudberry for the cloud backups.

After a week or so of use, I noticed that I was having stability issues where my server would completely lock up and stop responding. After some investigation, it was clear that the issue was Cloudberry. I removed Cloudberry and replaced it with Duplicati – no crashes since.

Pro tip: don’t waste your money on a Cloudberry license. Install Duplicati instead and donate the license fee to them.

Other changes

Once the system was stable (or so I thought) I happily went about my business assuming all was well in the world. That was until I booted up my laptop one day to find that Syncthing had deleted the entire contents of my Documents folder.

I assumed it was something I had done by accident. At the time I was moving a lot of data around, as I was working through the restoration of various backups. Maybe I had deleted the wrong folder and it had synced? Luckily I had file versioning turned on, so restoring the data was just a couple of clicks.

Weirdly, when I restored the data, it did so to the root of my home folder, not /Documents. Again, a minor frustration that I could easily work around.

Fast-forward another few days and the same thing happened, but this time it was my Photos folder. Then, a few days later, it happened to my Documents folder again.

I had no idea what was happening, and there were no details in the Syncthing logs to explain it. Enough was enough – I couldn’t trust Syncthing, so I decided to move to something else.


Nextcloud

I had used Nextcloud in the past, but had a number of issues with it, so I went into this with a healthy amount of trepidation.

First time around I installed Nextcloud from the snap package. This was trivial to setup, but the sandboxing was causing issues when it came to backing up. So I ended up installing Nextcloud manually (guide coming soon).

I’m happy to report that since I last used Nextcloud, the syncing functionality has improved a lot. I haven’t had any issues with file syncing at all. However, that’s all I’m using it for, so I can’t comment on how the other Nextcloud apps perform.

I installed the Nextcloud app on my iPhone and all my photos are backing up perfectly. I just need to remember to open the Nextcloud app once in a while, as it doesn’t seem to check for new files in the background. That’s not a big deal though.

The Result

I’ve since bought a cheap Dell monitor from Amazon, just so I didn’t have to keep the old TV connected. I’ve also been successfully backing up my data to both a local USB and Backblaze B2 every night.

Duplicati is configured to send me backup reports via email (guide coming soon on that too), so I know if anything has failed. I’ve also done test restores from both USB and Backblaze; both of which were successful.

You may have noticed that I haven’t mentioned Plex during this entire post. That’s because it has been faultless; you just set it and forget it. Yes, it’s proprietary software, but it works really well.

I’ve now been running Plex, Nextcloud, Duplicati and Backblaze B2 together for around 6 weeks without issue.

My files are synced everywhere I need them. The photos I take on my phone are automatically synced to the server. I can stream movies and music with ease on all my devices and everything is backed up to multiple locations.

The Cost

Building my new server has been a lot of work, but it really has been worth it. I now have a server that should last me for years to come and as a bonus, it has cost me a fraction of what my Synology did.

The final costs in both time and money look something like this:

  • Build time (inc. issues & research) – approx. 24 hours
  • Hardware (case, CPU, PSU, RAM, SSD) – £290
  • Dell monitor – £20
  • Monthly B2 charges (approx 20GB) – £0.08
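The B2 figure is easy to sanity-check. The sketch below assumes B2’s storage price at the time (roughly $0.005 per GB-month) and an exchange rate of about $1.30 to the pound – both are assumptions, not quoted figures:

```shell
# Rough monthly cost of storing ~20GB in Backblaze B2
awk 'BEGIN {
  usd = 20 * 0.005        # 20GB at ~$0.005/GB-month
  gbp = usd / 1.30        # at ~$1.30/GBP
  printf "~$%.2f/month (~£%.2f)\n", usd, gbp
}'
```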

If you’re thinking about running your own server at home, I’d strongly recommend it. However, be prepared for lots of research and troubleshooting in the initial build stages.

Are you running your own server at home? I’d love to hear about your setup in the comments below.

Privacy vs “I have nothing to hide”

I wrote an article a while ago about why I’m ditching Android. That article got numerous comments asking why I was so concerned about privacy and asking what I have to hide.

I want to take some time in this article to explain why each and every one of us has something to hide and should probably take privacy seriously.

Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.
— Edward Snowden

Privacy vs Security

I wouldn’t want the world to know the details of the text conversations I have with my fiancée. We’re not discussing anything illegal or doing anything wrong, but I’d prefer to keep those conversations out of the public eye. Being private is different from being secure; privacy is a right, security is a choice.

A lack of privacy tends to lead to a lack of candidness. If there’s one person in the world I should be able to be candid with, it’s my fiancée, but I wouldn’t be able to be candid with her if I thought our conversations were not private.

No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.
— Article 12 of the Universal Declaration of Human Rights

Data Privacy

Private text messages aside, who really cares about data privacy, right? If your photos, contacts, calendar, email, browsing history, search history, musical tastes, files, thousands of status updates, likes, shares and physical movements are all in the cloud, who really cares?

Please read that last paragraph again and let it sink in – that is probably more data than your nearest and dearest have about you. Yet generally speaking, people don’t seem to be concerned that such volumes of data are out there and being used without our consent.

PayPal’s terms and conditions are longer than Hamlet. The vast majority of people will not have the time, or inclination, to read and decipher thousands of words in legalese to work out where their data is going. Ipso facto, this data is being shared without our consent, regardless of whether we have accepted the terms and conditions or not.

If someone came up to you in the street, said they’re from an online service provider and requested you store all of the above data about you with them, I imagine for many the answer would be a resounding NO!

So why should Google, Facebook, Apple or any other tech giant be any different? You could argue that they’re more secure, as they’re bigger. Which is probably true, but we established earlier that privacy and security are two different things.

Plus, being bigger often makes them a bigger target. Just look at the celebrity iCloud leaks – this is neither secure nor private. It’s also indicative of the risks people run by storing their most intimate data with 3rd parties.

Location tracking

The image below shows my movements on Google Maps from 2014. I had no idea I was agreeing to be tracked in such detail. It shows exactly where I went, and when, during a day off I spent running errands.


This may seem fairly benign to most, but what if Google were compromised and this data was leaked? An attacker would be able to map out exactly where I go, when I’m likely to be out of the house and the places I frequent.

Want to burgle my house? Well, you would know exactly where I live and when I’m likely to be out in order to do so. Winning!

The Future

Threat actors aside, tech giants can (and do) use this data to build up an extremely detailed profile of us all. They can use this to serve tailored adverts, or worse still, sell this data on to 3rd parties.

That’s bad enough already, but what about 10 years from now? What if this trend continues and our privacy erodes further? What if Google struck a deal with a spurious 3rd party and sent all your data to them? Everyone remembers Cambridge Analytica, right?

Or what about your posts on social media? Remember the student who was fired from KFC for pretending to lick a tub of mashed potato before posting a picture on Facebook?

Funny, right? But this picture will haunt this person for the rest of her life. She will always be that girl who was fired from KFC for licking the mashed potato. That’s out there. Forever.

What is the answer?

The answer is simple – stop using these services and look for privacy respecting alternatives where possible. They may not be free services, but at least you know the business model of the company isn’t to sell your data if you’re paying for the product you’re using.

The Privacy Tools website has some great advice and examples of many privacy respecting tools that you can use. You can also look at De-Googling your life.

What’s the point? You can’t be truly private online!

True, but why should that stop us? Just because it’s almost impossible to be completely private on the Internet doesn’t mean we should stop trying. Even if we only manage to protect a fraction of our data, it’s a fraction less data that the tech giants have on us.

Some people reading this may not care about their privacy, like some of the people who commented on my Android article. That’s fine, we’re all entitled to our own opinions. But if you do feel that way, I would ask that the next time you’re talking to a privacy advocate, please don’t assume they are exercising their right to privacy because they have something to hide. They’re likely doing it simply because it’s their right to do so.

I could be completely wrong about all this, but I wanted to get my point of view out there in longer form. I really hope the tech giants start respecting their users’ privacy, but until they do, I will carry on trying to protect mine where possible.

Building My Home Server

I recently wrote about how I nearly lost all my data. Honestly, it was a blessing in disguise as I now have a far more powerful and capable server to store and backup my data.

After a tonne of research and a few trips to the post office to return hardware, I think I now have a setup that I’m happy with. In this article I’m going to take you through my new setup and what it cost me.

The requirements

Let’s start with a simple list of what I need the new server to do. I had numerous single points of failure in the old system, so the new setup needed to fix that.

Here’s what I needed:

  • Good performance
  • Local backups
  • Off-site backups
  • Media streaming
  • File syncing
  • Surge protection

The new hardware

Initially I decided to go with a simple ODROID device. My old Synology only had 512MB RAM and a dual-core ARM CPU, so even the ODROID was a significant upgrade.

However, after some initial testing I decided the ODROID wasn’t for me, as the one I bought was designed to be run headless and I wanted a GUI to make admin simple.

Attempt #2

Second time around I decided to not be cheap and stump up the cash for a decent system. Doing so would mean I have more options open to me; the server will likely be in production longer and it will give me more flexibility overall.

So I went shopping again and bought an ITX rig with the following specs:

  • AMD A8-9600 Quad-core 3.1 GHz CPU
  • 4GB DDR4 RAM
  • 120GB SSD for my root partition
  • 512GB SSD for my home partition
  • 1TB HDD for media (Synology donor)
  • 1TB USB HDD for local backups (Synology donor)

The software

I chose Ubuntu MATE with the minimal install option as the base OS. Some people may think having a GUI on a server is a waste of resources, but I like to have a GUI that I can log in to.

Plus, the server has more than enough resources to cope with the “demands” of a GUI. If lack of RAM becomes an issue, I’ll install another DIMM.

To carry out the various tasks that I need the server to accomplish, I went with:

  • Syncthing for file syncing
  • Plex for media streaming
  • Cloudberry for local and off-site backups

The good

For the most part the new server works really well. Syncthing is an absolute gem of an application and I’ve had no issues with it.

Plex is also great. I already had a Roku box in my living room, so installing a Plex server seemed like a no-brainer to me. Again, no real issues there either.

Cloudberry is basically a graphical front-end for Duplicity, but it is well integrated with Backblaze, so I use it for both my local and off-site backups.

Performance is also really good. I just checked the system resources whilst a backup was running and streaming a movie from Plex. I’m using around 20% of the CPU and 1.5GB of RAM – plenty of burst resources if I need them!

The not so good

The only thing I’m really struggling with at the moment, is photo backups from my smartphone. Had I been using Android I could simply setup Syncthing to backup my photos, but I recently ditched Android.

I’m currently using iCloud to backup my photos, but it’s not ideal as they’re not synced with my other devices and I don’t really want my photos sitting in an Apple data center.

To get around this I’m thinking about buying a Plex Pass, which includes mobile photo syncing. But I’m still getting to grips with Plex, so I haven’t done so yet.

The cost

I’ll start by putting things in to perspective – my Synology plus the 4x1TB hard drives were approximately £500 ($650) when I bought it 5 years ago.

The cost of the new rig is broken down as follows:

  • ITX hardware – £300 ($392)
  • Cloudberry – £23 ($30)
  • Backblaze B2 – Approximately £5 ($7) per year
  • Total cost: £328 ($430)
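As a quick check, the itemised costs above do add up to the quoted total (taking the first year of B2 at roughly £5):

```shell
# ITX hardware + Cloudberry licence + first year of B2
awk 'BEGIN { printf "£%d\n", 300 + 23 + 5 }'
```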

The cost was obviously reduced as I was able to re-purpose my 1TB drives from the Synology for use with this project. However, even if that was not the case, I’d still have a much more powerful and versatile setup for around the same cost.

Is this the end?

So I now have a pretty powerful server that’s streaming my media, syncing my data and backing it up to multiple locations.

I do need to make a decision on what I’m going to do about my photo backups. That’s likely to either be a Plex Pass, or I’ll add a Nextcloud instance to the server – I’m not 100% sure what I’m going to do yet, but I have multiple options.

Overall this has been a great learning experience for me, and I think I now have a pretty robust routine, where my data is safe under most circumstances.

Oh, I also have a surge protector now too! 🙂

Are you rolling your own server? If so, I’d love to hear what your setup is in the comments below.

I Nearly Lost All Of My Data!

Picture the scene – I’m in the office doing some work on my personal laptop, and all of a sudden my Synology Drive system tray icon says it can’t connect to my server. At first I wasn’t concerned as this kind of thing happens from time to time – probably just an ISP issue at home, I thought.

Fast forward a few hours; I go home to find my Synology NAS powered off. Our cleaner had been in that day, so I assumed she had probably just unplugged it by mistake. So I try to power it back on – nothing. The device was completely dead.


Shit!

My Synology has 4 x 1TB disks in a RAID 5, which also backs up my most important data to a 1TB external USB drive every evening – so I wasn’t concerned about my data. What I was worried about was having to potentially stump up the cash for a new Synology, as they’re not cheap.
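For anyone unfamiliar with RAID 5: one disk’s worth of space goes to parity, so the array can survive a single disk failure. With 4 x 1TB disks, the sums work out like this:

```shell
# RAID 5 usable capacity = (number of disks - 1) * disk size
awk 'BEGIN { disks = 4; size = 1; printf "%dTB usable, survives 1 disk failure\n", (disks - 1) * size }'
```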

Ok, so I unplug the USB drive, take it upstairs and hook it up to my desktop – I needed to start pulling the data off this thing so I have a backup of my backups.

I plug it in, switch it on, and the disk doesn’t spin up. Just like the Synology, I was getting absolutely nothing from the USB drive either.

Double shit!

At this point I’m really worried. You see, I cancelled my off-site Amazon Glacier backups around 6 months ago. What are the chances of both a 4-disk RAID AND a USB drive failing at the same time? Not likely, I thought. Boy, was I wrong.

My Synology stored all of my data, all of my partner’s data, all of our videos, pictures, music etc. plus a tonne of other stuff, like website backups. The problem is, only my data and my partner’s data were backed up to the USB. Just the crucial stuff.

By this point I’m REALLY worried, but I have one saving grace – maybe this is a problem with the enclosures and the disks are fine. I hit the Internet looking for answers.

The Answer?

After lots of research, I finally come across this article on Synology’s knowledgebase. It’s gotta be worth a try, I thought. However, my desktop didn’t have enough SATA ports to mount all 4 drives.

I hit Amazon Prime, order a SATA card for next day delivery, then spend the next 24 hours panicking.

Fast forward 24 hours; I get home from work and my Amazon Prime box is waiting for me. I rush upstairs, set up the drives and run through the Synology restore guide.

I hit the power button on the desktop. Please work, please work, please effing work! I’m saying to myself over and over. I hear all 4 drives start to spin up – we’re halfway there, folks!

Twenty agonising minutes later I have the LVM RAID configured on my machine and I can browse the entire array. Holy shit, IT WORKED!

I don’t know what happened for sure, but I think it may have been a power surge that fried the boards on both the Synology and the USB, as they were plugged in to the same socket.

The Aftermath

I’m not out of the woods yet – I still have a tonne of data to pull off these drives. My biggest problem at this point was that I didn’t have anywhere to store all the data.

It’s now 6 days since my NAS & USB backups crapped out on me, and I’m STILL restoring data from the drives. However I’m around 95% of the way there now, and I hope to have it all finished this evening.

I have my data back…phew! But now my network and data are a complete shambles. I have bits of data spread over a number of USB drives that I cobbled together, as I have no NAS.

It’s clear that I not only need to replace my old solution, but I also need to come up with a more robust one too. I have been doing a tonne of research on what to do next, and I think I now have a plan. So keep an eye out on this blog for part 2 of this story, where I will hopefully be able to tell you that I’ve fully recovered and that I have a more robust solution in place.

Do you have any data loss horror stories? Please make me feel better and tell me yours in the comments below.