Do you use NAS?

@doobs, I’d be struggling to fit very much on your NAS as described!!! :stuck_out_tongue:

Terabyte drives, maybe?

Possibly :wink:

I’m not using a NAS right now, but I’m seriously thinking about it, as my disk usage has been exploding since I got a new 24 megapixel camera. I’m eating about 164GB a year in photos, and way more in video. Right now I have a home-made server with a 4TB drive and it’s about filled up. My workstation is similarly getting filled with those pictures, but I still have a while to go there, especially since I have the option of using Darktable’s “local copy” functionality.

I got an 8TB drive that I stuck in a friend’s server, which serves as an offsite backup as well. All this is synchronized everywhere with git-annex, which is a little hard to use but has served me very well over the years.
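For the curious, the day-to-day git-annex flow looks roughly like this (a sketch; the remote name and paths are made up):

```bash
# One-time setup in the photo tree (paths and names are hypothetical)
cd ~/photos
git init
git annex init "workstation"

# Add new shoots to the annex instead of plain git
git annex add 2019/

# Point at the offsite server and sync both metadata and file contents
git remote add offsite ssh://friend.example.com/~/photos
git annex sync --content offsite
```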

Still, I’ve been looking at NAS options, especially since I’ve replaced a part of that server. I just got a Vero 4k to serve as a home-cinema/set-top box, but it has basically no storage whatsoever. So I either need to add yet another external drive that dangles off of it, or try to fix the problem in the longer term and centralize storage in one unit, with multiple (possibly redundant) drives.

I’ve looked at many options. I know that Synology makes good products, but I find them a tad expensive, and I would prefer something that runs Linux and free software, and preferably open hardware as well. This leaves me with the following options:

Then there’s also the option of building your own. The problem with that is you end up with a “tower” setup with hard drives hidden deep inside a machine: very few cases have “trays” that allow swapping drives in and out easily. That’s why I am thinking of getting a “real” NAS: each drive gets its own tray, with a little light that tells you when it’s time to change it. There might be trays (like the Orico) which could enable this in a custom-made setup, but at that point it’s basically a NAS as well, since you don’t own the SATA controller.

Oh, and so far I’ve bought normal desktop drives, except for the 8TB one, where I specifically bought a “Seagate IronWolf 8TB NAS” (ST8000VN0022); it was not much more expensive, and it seems logical to buy a drive designed for that purpose. The drives generally fill up before they start failing, but I do install a SMART monitoring tool and act on the warnings the hard drives give me. Ideally, I’d have the main storage on RAID-1 so I could recover more easily from failures, but with redundant copies I can always restore onto a new drive when trouble comes, and uptime is not critical since I’m an amateur.
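For reference, the kind of SMART monitoring I mean, e.g. with smartmontools (device names here are examples):

```bash
# Quick health verdict, then the full attribute dump
sudo smartctl -H /dev/sda
sudo smartctl -a /dev/sda

# Or let smartd watch continuously; in /etc/smartd.conf, a line like
#   /dev/sda -a -o on -S on -s (S/../.././02|L/../../6/03) -m root
# runs short self-tests nightly, long ones weekly, and mails on trouble.
```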

Long story short: I would love to hear more from people here about the hardware solutions they’re using.

The other two (well, 3 and 2) I do. But that one, I am not sure I understand. If you have a reliable medium (e.g. RAID-1) and multiple copies, why do you need different media types? I know you said you relaxed the requirement here:

But what’s the idea behind that principle anyways?

Different media have different lifespans and can survive different things. If I were super worried about floods, one of my media types would be optical discs, like DVD or Blu-ray. Those can get wet and still function properly. Others have a longer lifespan, like backup tapes.

I also think you should have cold backups, ones that are not always connected to a machine and powered on. Even if you have the most badass RAID setup, that doesn’t stop someone from hacking that system and deleting everything, or your house from flooding/burning down.


I wouldn’t bet my life on a muddy DVD, but point taken. :slight_smile: I would assume offsite backups take care of that, but I guess that only works if they are in a different city/country, which raises the costs for me significantly (I just physically carried that 8TB drive to the other location; it would have taken months to transfer all that data over this slow uplink).

That’s a good point too! I’ve been looking at how to make append-only git-annex repositories - joeyh implemented it, but my experience with it is hit and miss so far. Borg backups are even worse, unfortunately…

The problem with “cold” backups is that they are inherently out of date if you don’t warm them up once in a while. That might mean a manual process, which might mean you will never do it. That’s how I did my offsites before, and I could go months without doing them. I looked at the trade-offs again and figured I’d rather risk an attacker wiping everything than losing months of work in a (more likely?) local crash.

Ah well, that’s why you have offsites though. :slight_smile: Fires and floods don’t (usually!) follow your backup trail to destroy all copies of your files… :wink:

My offsite backup custodian is my son. He lives about 3 miles from us, and eats meals here regularly. I’ll occasionally tell him to bring the drive with him to dinner, and we’ll plug it in and let rsync run while we eat. The initial rsync took a day and a half; I think a few months worth takes about one dinner’s duration… :smile:
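For the curious, that kind of dinner-time sync is essentially a one-liner (the paths here are made up):

```bash
# Mirror the photo archive onto the visiting drive: -a preserves
# permissions and times, --delete propagates removals, and
# --progress gives you something to glance at between courses.
rsync -a --delete --progress /srv/photos/ /media/offsite-drive/photos/
```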

Oh, and he likes having access to the family photos, so he has motivation to initiate the above…

Indeed, manual management of a backup takes some attention and effort; you just have to find the scheme that works. The risk I figure I’m mitigating here is the risk of a structure fire. Here in Colorado that could be due to forest fires as well as things we do around the house, and his location away from the woods (and us) mitigates both.

The risk I’m vexed by is the Yellowstone caldera, which is supposed to consume the USA’s Intermountain West at any moment now… :scream:

I have several drives that have the git-annex repos fully encrypted. I don’t hook them up to a system for more than a few minutes to sync changes, then I rotate that drive off site. One of the offsite drives is at my gf’s work (since I work from home now), and the other is at a friend’s about 60 miles away. I don’t rotate the latter as often as the former, but at least some of my files will be saved if there is an absolute disaster.
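In case it helps anyone, an encrypted git-annex remote on a removable drive looks roughly like this (a sketch; the remote name, mount point, and key ID are placeholders):

```bash
# Create an encrypted "directory" special remote on the mounted drive;
# hybrid encryption lets you add more GPG keys to it later.
git annex initremote offsite1 type=directory \
    directory=/media/offsite1 encryption=hybrid keyid=ABCD1234

# Before rotating the drive off site, push file contents to it.
git annex copy --to offsite1
```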

First of all, no NAS for me. But in these last few days I’ve tested a setup to keep my photo library safe. By ‘safe’ I mean it’s better than a simple backup to an external USB drive using rsync (which is what I was doing before; I bet it’s also the standard practice for 90% of the photographers out there, only they’ll be using Time Machine on their MacBooks).

By the way, I was reading the brilliant post by @pittendrigh; Colin – I like your approach, it could’ve saved me literally months of hard work during the Great Migrations of 2014-5 (Aperture → Lightroom) and 2018 (Lightroom → Darktable). But anyway, I’ve been using LR-style apps to process and organize my photo library for years now, so that’s my way; I just trust that Darktable, being open source, will be my trusty companion for the future.

Back to my backup strategy. I normally work on a laptop (Dell XPS 15) connected to external monitors, and the whole library sits on an external USB hard drive; it’s around 850 GB of data, approx ~45k photos. I also have a spare PC at home which I use as a media player (Linux with Kodi installed). I’ve fitted an additional internal 2TB hard drive in the PC and then installed Syncthing on both the laptop and this PC. I sync .config/darktable/ and .cache/darktable/ from the internal drive of my laptop, and the PHOTOS/ directory from the external USB drive. These directories are set as send-only in Syncthing’s prefs on my laptop and receive-only on the PC box.

I’m pretty happy so far, I can work the usual way on my laptop and then I can turn on the pc and it will silently backup everything new I’ve done. Versioning is also activated on both ends of Syncthing, I keep 5 copies of each modified file (and yes, this will soon make me hit the limit of my hard drive).
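For reference, the send-only/receive-only split and the “keep 5 versions” setting end up looking something like this in Syncthing’s config.xml on the receiving box (normally set through the web GUI; the path and label here are placeholders):

```xml
<!-- Receiving side: the folder only accepts changes, never sends them,
     and simple versioning keeps 5 copies of each modified file. -->
<folder id="photos" label="PHOTOS" path="/data/backup/PHOTOS" type="receiveonly">
    <versioning type="simple">
        <param key="keep" val="5"/>
    </versioning>
</folder>
```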

Next step is to buy two 4TB drives (around 100 euros each), put them in a RAID enclosure, and use that enclosure for a routine rsync backup. Even if I’m not too diligent about this step, I’ll still have Syncthing running in the background with minimal input from me.

How does a “RAID enclosure” differ from a “NAS” for you? :slight_smile: I know how they differ at the technical level, but the more I look at RAID/HDD enclosure/racks, the more I’m thinking they really look like NAS devices that connect through USB instead of Ethernet…

They’re mostly the same: a bunch of disks. A NAS will have its own CPU, RAM, etc., and often offers multiple services: CIFS, NFS, FTP, etc. A USB disk is a USB disk :stuck_out_tongue:

Well that’s why I asked: the word used was not “USB disk”, it was “RAID enclosure”, and that’s in a very different league than “just” a “SATA/USB” adapter (which is what an enclosure usually is). A RAID enclosure has way more “brains” than just a single USB disk. I would particularly be concerned about the filesystem and RAID layout, for example: what happens when things fail and you need to take the disk out?

(There’s also the argument that a USB disk has its own CPU and RAM and could technically offer multiple services as well. It’s just that we don’t usually think of it that way.)
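One reason I’d lean toward plain Linux software RAID over an enclosure’s built-in controller, by the way: the on-disk layout is documented, so any Linux box can reassemble the array if the enclosure itself dies. A minimal sketch (device names are examples):

```bash
# Build a RAID-1 mirror from two disks exposed as plain block devices
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
sudo mkfs.ext4 /dev/md0

# If the host (or enclosure) dies, another Linux machine can pick it up:
sudo mdadm --assemble --scan
cat /proc/mdstat   # check array health and resync progress
```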

Maybe I used the wrong term, I’m not sure. I thought that a NAS is something that lives on its own, with its own little CPU and OS. The USB RAID enclosure that I bought is simply a glorified USB enclosure that allows me to connect two hard drives and use them either independently (so the computer to which it’s attached sees 2 different drives) or as a RAID array (the thing I’m planning to do: two 4TB drives giving me a total of 4TB of space, but if one fails I have the other).

I think that this particular USB RAID enclosure is not very clever (I’m reading Alastair Reynolds’ Poseidon’s Children trilogy, where machines become very clever indeed, so I’m a bit worried about having another little brain waiting to prey on me). It may have a CPU, I don’t know, but it certainly needs to be connected to a computer to work.

The pertinent difference is “network”. For what it’s worth, I think the network transport of NAS adds a layer of complexity that may not be worth the trouble, depending on how you access these things.

There’s some sort of microprocessor in any of these devices, but it’s probably not useful to you for anything other than what the manufacturer programmed it to do. Edit: Oh, unless it’s a regular computer, then you can do whatever… :smile:

Meant to comment on this when I read it, got sidetracked… Be careful doing this. If you use two identical (make/model) drives purchased from the same place, you have a good chance of the second one failing before you can deal with the first failure. This is a thing, I’ve seen it happen, and the worst case is complete loss of your data. Two different make/model drives of the same capacity will provide separation, probably enough to let you repair the array before the second failure (MTTR: Mean Time To Repair)…


You can actually buy hot swap bays for 3.5" drives that fit in a 5.25" drive bay giving you easy access with normal cases.

Thanks for the tip Glenn! I didn’t think about this. I was going to buy two Seagates; they are a bit faster than the WD (5900 rpm), but I’ll drop one Seagate and get a WD (will they work fine together if they have slightly different speeds?).

Yes, they’ll work fine at different speeds.


Can anyone recommend a thorough guide to configuring and mounting a NAS correctly? Or does it vary too much depending on the unit? I have no idea; I’ve never used one, much less owned one.

I ask because every once in a while a Rapid Photo Downloader user files a bug report saying there is a problem with Rapid Photo Downloader, but in reality it’s caused by their poor NAS setup. Problems I’ve seen include:

  1. The NAS not being able to write files quickly enough, causing problems when rapidly copying files
  2. The NAS not reporting its size correctly (bytes free / total)
  3. The user mounting it with Gnome GIO instead of a proper mount command — which probably works okay for Gnome applications that use GIO, but functions very poorly (if at all) with standard Linux system calls to copy files.

It would be nice to point users to documentation that could get them up and running quickly. Thanks in advance.

Not at all! :slight_smile: You were correct, I just wanted to know what the trade-offs were in your case… pun intended :wink: Sorry for being nerdy with the terms…

My concern with those machines is that the USB socket becomes a bottleneck. Basic USB 3.0 bandwidth (5 Gbit/s) is basically the same as a single SATA port’s (6 Gbit/s). This implies that if you have more than one drive, the SATA side can saturate the USB bus fairly easily, depending on the speed of the storage of course. And especially when you run the RAID controller on the computer, you will be swapping bytes back and forth over that USB bus quite a bit, which might slow down RAID resyncs and such.
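To put rough numbers on it: USB 3.0’s 5 Gbit/s is about 500 MB/s of payload after encoding overhead, while a single modern 3.5" drive can sustain roughly 150-250 MB/s sequentially. So two drives behind one USB port can already saturate the link, and with host-side RAID-1 every write crosses the bus once per mirror, doubling the traffic during writes and resyncs.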

That’s why I brought up the parallel between NAS machines and RAID enclosures: they are closer than we might usually think. As @ggbutcher said, the difference is the “N” part, the network. But I disagree with @ggbutcher on the complexity: I think it’s worth it, as it allows you to access the storage from multiple devices as well.

You can only fit a single 3.5" drive in there though. But sure, that’s definitely an option. In fact, my ideal case would be a rack of multiple 3.5" drives that would fit in multiple 5.25" bays. For example, you could fit 4 x 3.5" drives in 3 x 5.25" bays, if my calculations are correct. If those are connected directly through the SATA controller and support hotswap, that’s a nice home-made NAS right there…

And of course, that’s how you build servers, except the cases are in the 1U form factor instead of the typical “tower” you see for desktops. That’s something to keep in mind too: there’s a huge market of second-hand servers with pretty reliable hotswap bays and SATA controllers out there. If you have the space and can afford to isolate those things away (because of the awful noise they often make), they can be great home servers as well.

I’ve so far managed to keep those work-related horrors out of the house, but I’ve thought about them since I started moving machines to the basement. :wink:

That’s unfortunately a very broad question: NAS devices vary a lot between manufacturers, when they’re not completely home-built. Your results will vary according to the network protocol used (NFS, SSH, Samba, etc.) and the operating systems running on both ends…

I would love to see a good forum post here explaining a “state of the art” setup, however. The trick is which hardware and which network protocol to choose, and that will vary enormously according to your budget, client OS, ethical considerations and philosophy. :wink:
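As a starting point, though, the “proper mount” half of the question usually looks like this for NFS (server name, export path, and mount point are placeholders; CIFS/Samba is similar with -t cifs and a credentials file):

```bash
# One-off mount (names are placeholders)
sudo mkdir -p /mnt/nas-photos
sudo mount -t nfs nas.local:/volume1/photos /mnt/nas-photos

# Or make it permanent with one line in /etc/fstab:
#   nas.local:/volume1/photos  /mnt/nas-photos  nfs  defaults,_netdev  0  0
```

A real kernel mount like this gives applications ordinary POSIX file access, which avoids the GIO-only problem mentioned above.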

But regarding your specific questions about usability issues, I would suggest to move your post to the RPD forum instead of the broader Hardware section…

Phew, sorry for the long post, I hope it helps!


Thank you all for sharing your knowledge and expertise.

I’m at a point where I’m considering a NAS, or anything that will prevent data loss. I still use a single USB drive as a backup and keep the SD cards. I know it’s not a good practice :wink: