Tuesday, March 4, 2014

Arq 4.0

Stefan Reitshamer:

I’m really excited about this release! It’s got features that many people have been asking for, and it opens Arq up to a whole new range of options for storing backup data.

Glacier backups now use the S3 Glacier Lifecycle feature. Among other benefits, this allows Arq to prune old Glacier commits (which were previously immortal) and subject them to the budget. Unfortunately, Glacier vaults from previous versions of Arq cannot be transitioned; you have to delete them and create a new backup target (not in that order!).
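For readers unfamiliar with the mechanism, a lifecycle rule tells S3 to transition objects to Glacier automatically. Here is a rough sketch of what such a rule looks like, using the modern boto3 client purely for illustration; the bucket name, prefix, and zero-day timing are hypothetical, not Arq’s actual configuration:

    # Sketch of an S3 lifecycle rule transitioning objects to Glacier.
    # Bucket name, prefix, and timing are made-up examples.
    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-arq-backups",  # hypothetical bucket
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "transition-backup-data-to-glacier",
                    "Filter": {"Prefix": "objects/"},  # hypothetical prefix
                    "Status": "Enabled",
                    "Transitions": [
                        # Days=0 means "transition as soon as possible"
                        {"Days": 0, "StorageClass": "GLACIER"}
                    ],
                }
            ]
        },
    )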

You can now back up to other S3-compatible destinations such as DreamObjects, which is about half the price of Amazon S3 and has fewer restrictions than the (even cheaper) Amazon Glacier. I plan to continue using Glacier and S3 because the performance has been great and (in theory, see below) the reliability is unmatched. But it’s nice to have alternative services to switch to or use in parallel.

Arq now supports backups via SFTP, which is something I’ve wanted a backup app to do for as long as I can remember. I have an account with DreamHost, and they offer 50 GB of SFTP space for personal backups. This is a convenient, free space I can use for my most important backups. It avoids the delays and expense of restoring from Glacier. DreamHost Personal Backup is great as a secondary backup target, but it is not itself backed up so you should still use AWS or another service for your primary.

You can also use SFTP to make a local backup or archive on a NAS or other Mac that you have an account on.

Aside from the new storage options, the other big new feature is that you can now have multiple backup targets. This lets you have multiple backups going to different cloud services. You can also spread your files across multiple targets, e.g. if you want your Documents folder to have a different backup schedule than your Aperture or iTunes library. Each target can also have a separate budget, which lets you keep a longer history for certain folders. You can also pause a backup target (by setting its schedule to manual) in order to give priority to other targets (since Arq seems to only back up to one target at a time). Alas, the targets cannot be renamed or reordered, and you cannot copy file exclusion patterns from one target to another.

I’ve been seriously using Arq since version 2, and version 3 was one of my favorite apps. Version 4 so far seems to be better still. The app itself has been reliable (rarely crashing) and has not hogged the CPU (like other backup apps I’ve tried). However, I have had some problems with the reliability of Arq’s backups. It’s not clear whether this is due to a bug in Arq itself or problems with the cloud storage provider (AWS).

Twice in the last six months, I’ve found that backup snapshots (“commits”) older than a certain date had disappeared. Arq stores the commits in a linked list. If a commit object is lost, Arq, naturally, will no longer be able to find the trees and blobs in that commit. But it will also lose the link to the parent commit (previous backup snapshot) and, thus, all of the previous snapshots. In theory, much of the data is still on the server, but it’s no longer in an accessible form, and Arq will garbage collect it when it enforces the budget.
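To make the failure mode concrete, here is a simplified sketch (my own illustration, not Arq’s actual data format) of a commit chain and why the walk through history stops at the first missing object:

    # Simplified model of a backup history stored as a linked list of
    # commits. Losing one commit object orphans every older snapshot.
    class Commit:
        """One backup snapshot; each commit links to its parent."""
        def __init__(self, sha, parent_sha, tree_sha):
            self.sha = sha                # identifier of this snapshot
            self.parent_sha = parent_sha  # previous snapshot, or None
            self.tree_sha = tree_sha      # root of this snapshot's file tree

    def list_snapshots(store, head_sha):
        """Walk from the newest commit back through its parents."""
        snapshots = []
        sha = head_sha
        while sha is not None:
            commit = store.get(sha)  # store: dict mapping sha -> Commit
            if commit is None:
                # The commit object is lost; every older snapshot is now
                # unreachable even if its trees and blobs still exist.
                break
            snapshots.append(commit)
            sha = commit.parent_sha
        return snapshots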

The developer, of course, takes this sort of thing very seriously. The first time I noticed missing backup snapshots, he told me that several other customers had reported the same problem around the same time. It seemed as though the problem was that Amazon S3 was reporting objects as missing (when doing the equivalent of an ls) even though it could successfully fetch their data when asked (the equivalent of stat or cat). So when Arq periodically verified its backups, it would delete objects related to the “missing” ones unnecessarily. An update to Arq was soon released to fix this.
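The defensive fix, as I understand it, is to never trust the listing alone. Here is a sketch of that idea (using boto3 only for illustration; the bucket name and helper functions are hypothetical, not Arq’s actual code):

    # Sketch of the hazard above: a verifier that trusts a LIST (the "ls")
    # can wrongly conclude objects are gone even when a direct HEAD (the
    # "stat") would succeed. Double-check before deleting anything.
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    BUCKET = "example-arq-backups"  # hypothetical bucket name

    def is_really_missing(key):
        """Confirm with a direct lookup before believing a listing."""
        try:
            s3.head_object(Bucket=BUCKET, Key=key)
            return False  # the object is fetchable after all
        except ClientError:
            return True

    def keys_safe_to_prune(expected_keys):
        # NOTE: list_objects_v2 returns at most 1000 keys per call;
        # pagination is omitted to keep the sketch short.
        listing = s3.list_objects_v2(Bucket=BUCKET)
        listed = {obj["Key"] for obj in listing.get("Contents", [])}
        absent = set(expected_keys) - listed
        # Only treat a key as lost if the direct lookup also fails.
        return {k for k in absent if is_really_missing(k)}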

At the time, I was using S3 Reduced Redundancy Storage for my backups. RRS is cheaper than regular S3 but offers only 99.99% durability, compared with S3’s 99.999999999%. Since I have other backups besides Arq, I did not think I needed to pay for those extra 9’s. I thought it was acceptable to lose 1 in 10,000 objects, even though I have many more files than that. What I failed to appreciate was that the lost object might not be a file. It could instead be a commit object. In that case, losing that one object effectively means losing hundreds of thousands or even millions of other objects. These days, I think there is little reason to use RRS with Arq. You can store your backup data in Glacier, which is much cheaper than RRS yet has the same durability as S3. (The backup metadata is stored in S3.)
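Some back-of-the-envelope math shows how quickly those odds add up (the object counts are made-up examples):

    # With RRS's 99.99% annual durability, the chance of losing at least
    # one object grows quickly with the number of objects stored.
    p_loss = 1 - 0.9999  # expect to lose about 1 in 10,000 objects

    for n_objects in (10_000, 100_000, 1_000_000):
        p_at_least_one = 1 - (1 - p_loss) ** n_objects
        print(f"{n_objects:>9,} objects: P(lose at least 1) = {p_at_least_one:.5f}")

    # Roughly 63% at 10,000 objects, and effectively certain at 1,000,000.
    # And if the unlucky object is a commit, losing it severs the whole chain.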

It’s not clear whether RRS was at fault, but I switched away from it just to be safe. Then, a few months later, I noticed that more old backups had disappeared. This time, other Arq users had apparently not encountered the same problem. It’s hard to know, though, because it is not obvious in the user interface that backups have been lost. You only notice it when you click a disclosure triangle to see the list of snapshots and see that the list is shorter than expected.

I never actually lost any current backups, but I was intending to use Arq as a historical archive as well, because sometimes I need access to old versions of files. In that sense, the cloud backup is much more than a backup; I do not have master local copies of all the versions.

It’s obviously very troubling to have a backup app or cloud storage provider lose my backups. But I continue to use and recommend Arq for several reasons. First, I have confidence in the product’s basic design and in Stefan, its developer. Second, Arq 4’s support for multiple backup targets offers a variety of ways to mitigate the problems caused by lost objects. Third, I have tried just about every backup product I could find over the years, and I have yet to find one that’s better. The closer I look, the more flaws and design limitations become visible. For example, Backblaze is highly regarded, yet it silently deletes backups of external drives that haven’t been connected in a while.

Backups are important enough that I make local ones (using SuperDuper and DropDMG) even though that’s more work than just relying on the cloud. I want to have copies of my data in my physical possession. There are also obvious benefits to making cloud backups, e.g. using Arq, so I do that as well. What I have more recently come to realize is that cloud backups are important enough that I shouldn’t rely on just one provider. Before Arq I used CrashPlan, and it, too, occasionally lost my data. The lesson here is that there is no perfect cloud provider. I should plan for failure and use multiple good providers. I am now using CrashPlan alongside Arq.

The second lesson I’m learning is that I value access to old versions of files but that there are few, if any, backup products that can provide this over the long term. The answer, I believe, is to structure the data so that the backup, rather than the backup history, contains the old versions. In other words, put the versions in band, where possible. For example, a single backup snapshot of a Git repository includes the complete, checksummed history for those files. I don’t need last year’s backup if I committed the file to Git last year and I have yesterday’s backup. Of course, my source code has been in version control from the beginning. But I am now using version control to track other types of files such as notes, recipes, my 1Password database, and my calendar and address book. This lets a newly created cloud backup contain versions from years ago.
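As a concrete sketch of putting versions in band: a small script can commit a folder of notes to Git on a schedule, so that any later backup of the repository (including its .git directory) carries the entire history. The path is a made-up example, and this is my illustration rather than a specific product’s feature:

    # Commit working files to Git periodically so that one backup
    # snapshot of the repository contains the full, checksummed history.
    import subprocess
    from datetime import date

    REPO = "/Users/example/Notes"  # hypothetical path to a Git repository

    def snapshot(repo=REPO):
        """Stage everything and record today's state in the history."""
        subprocess.run(["git", "add", "-A"], cwd=repo, check=True)
        # git exits nonzero when there is nothing to commit, so don't use
        # check=True here; a False return just means no changes today.
        result = subprocess.run(
            ["git", "commit", "-m", f"snapshot {date.today()}"], cwd=repo
        )
        return result.returncode == 0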

The same logic holds for verifying the backup. It’s nice if the backup software can do this, but if your data has in-band checksums you can verify the restored files independently. You can also verify your working files so that you can identify damage and know when you need to restore a clean copy from backup. You can verify files in Git using git-fsck. For files not in Git, I use EagleFiler and IntegrityChecker.
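For files outside Git, the same idea can be approximated with a simple checksum manifest. Here is a sketch (my own illustration of the principle, not how EagleFiler or IntegrityChecker actually work):

    # Record SHA-256 checksums alongside the files, then re-verify
    # working or restored copies later, independent of any backup app.
    import hashlib
    import json
    import pathlib

    def build_manifest(root):
        """Map each file's path (relative to root) to its SHA-256 digest."""
        root = pathlib.Path(root)
        return {
            str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in root.rglob("*")
            if p.is_file()
        }

    def damaged_files(root, manifest_path):
        """List files whose digest no longer matches the recorded one."""
        recorded = json.loads(pathlib.Path(manifest_path).read_text())
        current = build_manifest(root)
        return [path for path, digest in recorded.items()
                if current.get(path) != digest]

    # For files under Git, `git fsck` performs the equivalent check:
    #     subprocess.run(["git", "fsck"], cwd=repo, check=True)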

3 Comments


Backblaze now deletes backups of unplugged external drives a bit less silently: they start sending warning emails after the drive has been unplugged for 14 days.


Manton Reece: “Still bummed that Backblaze deleted my backups after I didn’t notice that my credit card had expired. Technically my fault, but now there’s no chance I’ll ever use or recommend the service.”


[...] not surprise me if the files are still there; Arq just isn’t seeing them. In any event, my strategy is to have multiple cloud backups—Arq and CrashPlan (which has been working very well [...]
