The Future Is Disposable
One of the most fascinating aspects of this transition is a return to the days when our devices don’t matter. As not only our data but also our applications and settings migrate and synchronize across the cloud, we are no longer tied to the anchors sitting on our desks or carried in our bags. While we aren’t fully there yet, we’re close to being able to move from device to device and maintain the functionality and familiarity we need.
I had a similar feeling of invincibility until I discovered this morning (through a failed EagleFiler verification) that one of my files was damaged. I logged into CrashPlan to retrieve an older version from the cloud, but it was unable to search for or restore any of my files:
Unable to restore due to a backup archive I/O error.
I’ll update this post when I hear back from CrashPlan technical support. I had recently stopped using Time Machine because, through various drives and enclosures, it was failing with the error:
Error: (-50) Creating directory Backups.backupdb
I had also briefly tried backing up to a Time Capsule before finding that, even after the initial slow backup had completed over Ethernet, it was causing various applications to lock up for 20 minutes at a time while mds was apparently indexing the backup (even though it was excluded from Spotlight).
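For what it’s worth, mdutil is one way to check whether mds is actually indexing the mounted backup volume, and to force indexing off for it; the volume name below is only a guess at the mount point, so check /Volumes for the real one:

# report whether Spotlight indexing is enabled on the mounted backup volume
mdutil -s "/Volumes/Time Machine Backups"

# force indexing off for that volume
sudo mdutil -i off "/Volumes/Time Machine Backups"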
So the backup situation is not as solid as I thought, although I can probably find a reasonably recent version of that file on an off-site SuperDuper clone.
I’ve just started experimenting with Arq to back up my most important files to Amazon S3. I don’t want to pay what it would cost for a full backup this way, though.
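As a rough illustration of the math (the 500 GB figure is hypothetical, and the rate is just my recollection of Amazon’s standard storage pricing, roughly $0.14 per GB-month):

# back-of-the-envelope S3 storage cost, ignoring request and transfer fees
echo "500 * 0.14" | bc    # about $70 per month for a 500 GB backup set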
I’m also considering using CrashPlan with a directly connected hard drive as a Time Machine alternative for storing versioned files.
Update (8:20 PM): I have not yet heard back from CrashPlan support, but I tried the restore again and was able to browse and search all my files. I was able to find the file in question and restore a version of it from a few weeks ago that had the proper checksum.
Update (2011-06-27): CrashPlan technical support recommended clearing my CrashPlan cache data by clicking the logo and entering the command “backup.replace 42”. So apparently they don’t think there was any problem in the data center.
Update (2011-06-30): Amazon has announced that they are no longer charging for inbound data transfer, which should make S3 more affordable for backups.
4 Comments
I use the F/OSS duplicity command for my daily off-site backups. I run it via launchd and installed it from MacPorts: sudo port install duplicity.
It does encrypted incremental backups using ssh/scp, sftp, S3, WebDAV, etc.
It doesn’t come with a UI (AFAIK) so it’s not as friendly as some of the commercial solutions, but I prefer the transparency and added control/flexibility.
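For reference, a minimal duplicity invocation along those lines might look like the following; the host, paths, passphrase, and schedule are all placeholders, not a recommendation:

# duplicity encrypts with GPG by default and reads the passphrase
# from the PASSPHRASE environment variable if it is set
export PASSPHRASE="example passphrase"

# encrypted incremental backup of the home folder to an SFTP server,
# starting a fresh full backup once a month
duplicity --full-if-older-than 1M /Users/allan sftp://allan@backuphost/backups/mac

# restore a single file from the most recent backup set
duplicity restore --file-to-restore Documents/notes.txt sftp://allan@backuphost/backups/mac ~/Desktop/notes.txt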
@Allan Sorry your comment got stuck in moderation limbo. I don’t think I’d heard of duplicity before; thanks.
"Sorry your comment got stuck in moderation limbo."
The first rule of blog commenting is to limit yourself to one link per post to avoid moderation limbo...
"Amazon has announced that they are no longer charging for inbound data transfer, which should make S3 more affordable for backups."
Wow. I'm really glad I bothered to post the above snarky comment, otherwise I'd have missed this news.
Time to re-evaluate the economics of Arq.
(And time to figure out if there is any easy way on OS X to just create my normal encrypted backup clones via CCC and upload those to S3 as a sane off-site workflow...)
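One possible sketch, assuming s3cmd is installed and configured (via s3cmd --configure) and that CCC is cloning into an encrypted sparse bundle disk image rather than onto a plain volume (the paths and bucket name are made up):

# sync the sparse bundle's band files to S3; --delete-removed prunes
# objects whose local counterparts no longer exist
s3cmd sync --delete-removed /Volumes/Backups/Clone.sparsebundle/ s3://my-offsite-bucket/Clone.sparsebundle/

Since a sparse bundle stores its data in small band files, later runs should only have to upload the bands that changed since the previous sync.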