I'm looking at grabbing a couple of 1TB disks and putting them under
raid 1 for storage. Of course there will be actual backups as well,
probably to a separate 2TB disk for a daily/weekly 'snapshot' with
checksums via mtree or such; anything uber-important will be on a
removable disk as well. I'm mostly concerned with not winding up with
backups of corrupt data. The box will be something with ECC ram,
Lenovo TS140 is looking good at the moment.
I'd probably just throw fbsd + zfs at it but fbsd scares the hell out
of me especially for _my_ data, and especially since I *do* intend to
occasionally access it remotely via VPN. Last time I tried using fbsd
for anything I wound up with total hosage via portmaster or something,
plus the mmap/ptrace thing and the openssh screwup lately... I'd just
much rather use OpenBSD. I've had exactly zero problems ever with
softraid's crypto and nothing compares to pf.
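For reference, setting up the mirror under softraid would look something like this sketch — device names sd0/sd1 (and the resulting sd2) are assumptions, and both disks need an 'a' partition of fstype RAID first:

```shell
# Assumed layout: sd0 and sd1 each carry an 'a' partition of type RAID
# (set with disklabel -E beforehand). Run as root.
bioctl -c 1 -l sd0a,sd1a softraid0    # -c 1 = RAID 1, mirror the two chunks

# The volume attaches as a new sd device (say sd2); check status/rebuild
# progress the same way:
bioctl sd2
```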
Q1: TLER, does it matter for softraid? I assume yes and have no
problem paying a few extra bucks for more suitable drives, but
assumptions always cause problems. I can't seem to find an answer on
this via man or google.
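One hedged data point on that: TLER is the vendor name for SCT Error Recovery Control, and if the drive supports it you can read (and often set) the timeouts yourself with smartmontools from packages instead of paying the NAS-drive premium — device name below is hypothetical:

```shell
# Query the drive's SCT Error Recovery Control settings (the generic
# name for TLER); needs smartmontools, device name is a placeholder.
smartctl -l scterc /dev/rsd0c

# If supported, cap read/write error recovery at 7 seconds (the values
# are tenths of a second). Some drives forget this across power cycles,
# so it belongs in a boot script.
smartctl -l scterc,70,70 /dev/rsd0c
```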
Q2: Is there a benefit to putting 3 drives under raid 1, beyond some
read speed and, I presume, less risk of another disk failing during a
rebuild?
Q3: Scrubbing. It doesn't seem to be there, at least not explicitly in
the manual. Will the nightly/weekly copy be sufficient, or should I just
use a script to occasionally compare checksums of the more important
bits since I'll have them anyway?
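A minimal sketch of that compare-the-checksums script, in portable POSIX sh — it uses cksum because that's in base everywhere, though sha256 (OpenBSD base) or mtree with sha256digest would be the stronger choice; the paths are whatever you feed it:

```shell
#!/bin/sh
# audit DIR MANIFEST: checksum every file under DIR and compare against
# the manifest recorded on the previous run. POSIX cksum is used for
# portability of the sketch; swap in sha256 for real integrity checks.
audit() {
    dir=$1 manifest=$2
    ( cd "$dir" && find . -type f -exec cksum {} + | sort ) > "$manifest.new"
    if [ ! -f "$manifest" ]; then
        mv "$manifest.new" "$manifest"        # first run: record a baseline
        echo "baseline recorded"
    elif cmp -s "$manifest" "$manifest.new"; then
        rm "$manifest.new"                    # nothing changed since last run
        echo "OK: no changes"
    else
        diff "$manifest" "$manifest.new" || true   # show what moved under us
        echo "WARNING: checksums differ" >&2
        mv "$manifest.new" "$manifest"        # accept new state as baseline
    fi
}
```

Run it from cron; anything in the diff that you didn't change yourself is exactly the silent corruption you're worried about.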
Q4: Should I just piss on it and use dump or rsync + mtree? I'm not at
all concerned with speed, ISP's the bottleneck there. I'm only
thinking RAID to give the system a chance to notice there's a
discrepancy when whatever it is first gets written or at least when
it's read, and having a copy newer than the last backup if possible
when a disk fails, especially if I'm not around at the time. I'm pretty
sure a hard drive's entire purpose in life is to fail spectacularly,
dragging as much data as it can with it to the bit bucket.
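For what it's worth, the rsync + mtree variant could be about this small — paths and schedule are assumptions; rsync is from packages, mtree is in base:

```shell
# Nightly from cron: mirror the data, then write a checksummed spec of
# the copy. /data and /backup are placeholder paths.
rsync -a --delete /data/ /backup/data/
mtree -c -K sha256digest -p /backup/data > /backup/data.mtree

# Later (weekly, or before trusting a restore): verify the copy against
# the spec it was written with; mtree reports any file that changed.
mtree -p /backup/data -f /backup/data.mtree
```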