Monday, January 10, 2022

Has Time Machine Slowed for Small Files?

Howard Oakley:

The only common factor is that, when trying to back up folders containing seriously large numbers of very small files, some of which may be hard links, the rate of copying falls to ridiculously low numbers.


Looking back before 10.15.3, Time Machine never seemed to have problems with copying Xcode, or with the .DocumentRevisions-V100 folder. Exclude those, and anything like them, from backups now, and it performs well, even to a NAS via SMB.
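For reference, exclusions like the ones Oakley describes can be managed from the command line with `tmutil`. The paths below are only examples of the kind of small-file-heavy folders involved:

```shell
# Fixed-path exclusions (-p) apply to the path itself and require root.
sudo tmutil addexclusion -p ~/Library/Developer        # Xcode caches, simulators
sudo tmutil addexclusion -p ~/Projects/node_modules    # example small-file tree

# Check whether a given path is currently excluded.
tmutil isexcluded ~/Library/Developer
```

These are CLI fragments for macOS; the same exclusions can also be set in System Preferences under Time Machine options.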


Monterey introduced a new hidden feature in Time Machine: before making its first backup to a new backup set, backupd runs a speed test.


It’s unclear what Time Machine does with those results, or why it should perform the second test using many small files, unless Apple knows there’s a problem, perhaps.
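To see why small files are the worst case, it helps to compare the same number of bytes written as many files versus one file; per-file create/close and metadata work, not raw bandwidth, dominates the small-file case. A minimal sketch, with no connection to backupd's actual test:

```shell
# Write 1,000 KiB as 1,000 small files, then as a single file of equal size.
DIR=$(mktemp -d)

i=0
while [ "$i" -lt 1000 ]; do
  head -c 1024 /dev/zero > "$DIR/small_$i"   # 1 KiB each
  i=$((i + 1))
done

head -c 1024000 /dev/zero > "$DIR/large"     # same total bytes, one file

ls "$DIR" | wc -l                            # 1,001 items to copy vs. 1
rm -rf "$DIR"
```

Wrap each half in `time` to see the gap on your own machine; a backup tool pays that per-item cost on every small file it copies.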

I’ve been seeing this problem, too, except that it’s also triggered by small files that I do want to back up. I had to restructure my folders to prevent it from grinding away 24/7.


Update (2022-01-25): Howard Oakley:

There’s another part to this, at least in M1 Macs, in that backupd threads must also be given sufficient % CPU to be able to take advantage of any release of that throttle. Having demonstrated how user threads can make best use of the Efficiency (E) cores in the M1 Pro, my next step was to inspect what happens during a backup using powermetrics and the CPU History window. Here I was surprised to see that, while backupd accounted for around 90% active residency on each of the two E cores during a backup, those cores were largely running at 972-1332 MHz, around half their maximum frequency.

By default, then, Time Machine backups are run exclusively on the E cores, at economy mode to minimise power consumption, with an I/O throttle preventing them from accessing storage at normal speed. These limit it to backing up no more than 300-400 items/s, which in turn means that folders containing very large numbers of items will take a long time to back up.

Sadly, Apple doesn’t provide any options for the user to accelerate a backup, nor does backupd change its settings when it knows that there are a great many items to be copied.
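The I/O throttle Oakley describes can be lifted system-wide via an undocumented sysctl, though this affects all background I/O, not just backupd, and is at your own risk:

```shell
sudo sysctl debug.lowpri_throttle_enabled=0   # lift the low-priority I/O throttle
# ... run the backup ...
sudo sysctl debug.lowpri_throttle_enabled=1   # restore the default afterwards
```

The setting does not persist across reboots, so forgetting to restore it is self-correcting.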


A purported backup system that requires users to choose which files not to back up just so the software can 'complete' is an abject failure of software design and implementation.


Time Machine has always been kind of atrocious. I'd argue that it's also made somewhat useless by the fact that, every now and then, I have to completely reformat my backup disk and set up Time Machine on it again after running into the infamous "preparing to backup" bug.

> It’s unclear what Time Machine does with those results, or why it should perform the second test using many small files, unless Apple knows there’s a problem, perhaps.

Why can't the answer be "backupd performs two benchmarks to compare throughput with large files and small files; if throughput with small files is significantly slower, it uses compression/archival to combine multiple files into one during transmission"?

(Or, half a step before that: Apple is collecting telemetry on how much slower small files are on customers' machines, and based on that will decide whether to add such a mechanism.)


I'm curious whether you still see the need to reformat now that Time Machine backups are written in the new APFS format. Backups written to the old HFS+-formatted media were certainly susceptible to that file system's habit of corrupting itself.

@liam: I was hoping this would be gone with the move to APFS, but I've already had the dreaded "your backups cannot be reliably restored" error in Big Sur and Monterey. So, apparently not.
