Enabling Defragmentation on APFS Hard Drives
You can try this yourself, although the documentation of defragmentation is minimal. The diskutil apfs command allows you to enable and disable defragmentation at Container or Volume level, with a command such as
diskutil apfs defragment volumeDevice enable
enabling it on the specified volume. But I wouldn’t expect it to have any significant impact on the poor performance that you’re experiencing on APFS-formatted hard drives. I hate to say it, but if you possibly can, it would make good sense to keep those in HFS+ format until you replace them with SSDs.
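As a rough sketch of the workflow (the device identifiers below are placeholders; check `diskutil list` for your own, and the exact sub-verbs may vary by macOS release):

```sh
# Find the APFS Container (e.g. disk4) and Volume (e.g. disk4s1) identifiers
diskutil list

# Enable background defragmentation on a single volume...
sudo diskutil apfs defragment disk4s1 enable

# ...or on the whole container
sudo diskutil apfs defragment disk4 enable

# Report the current setting (assumed sub-verb on recent macOS), or turn it off again
sudo diskutil apfs defragment disk4s1 status
sudo diskutil apfs defragment disk4s1 disable
```

Note that defragmentation then happens in the background over time rather than as an immediate one-shot pass; Felix’s report below suggests it took a couple of days to show an effect.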
Update (2023-08-31): Felix Schwarz:
Three days ago, I noticed an external HDD’s write performance had deteriorated to about 2-5 MB/s, making me wonder if it was about to fail, or if it was an APFS performance issue.
I subsequently learned about APFS defragmentation and enabled it for the HDD’s APFS container via
diskutil apfs defragment [disk] enable
Coming back to the disk two days later, write performance is back to normal.
6 Comments
See, @Howard and I are on the same page! I do not see a true advantage to switching spinning platters over to APFS. Lot of downsides to making that switch.
To reiterate, SSDs? No brainer.
Got to go with the flow there. The file system seems specifically designed with SSD characteristics in mind. Coupled to better volume management and likely better data safety, no problem with APFS on SSD.
So Apple doesn't actually use the defragmentation feature? Did they ship it knowing that it wasn't really helping but also not hurting, and then drop the effort as a useless design? Are they working on improving it?
Stay tuned for some or none of these answers by 10.19 Death Valley.
> Got to go with the flow there. The file system seems specifically designed with SSD characteristics in mind. Coupled to better volume management and likely better data safety, no problem with APFS on SSD.
Definitely.
(Though it seems even on SSDs, APFS isn’t quite as fast as HFS+. Maybe they’ll tweak that over time?)
But they have to come up with a backup story at some point, even if it is “you know what, just use btrfs on an HDD”. (That’s what I do with Synology and Time Machine. Seems fine?)
Is there any way to explicitly run a full defrag?
I'm trying to lower the high watermark on some APFS-formatted sparsebundles, hoping I could then `hdiutil compact` them
> Is there any way to explicitly run a full defrag?
>
> I'm trying to lower the high watermark on some APFS-formatted sparsebundles, hoping I could then `hdiutil compact` them
It appears that there is: https://apple.stackexchange.com/a/372168/46665
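As a rough sketch of how the defrag setting above might be combined with `hdiutil compact` for a sparsebundle (the image path and device identifiers here are hypothetical, and whether this lowers the high watermark enough to help is untested):

```sh
# Attach the sparse bundle and find the APFS volume inside it
hdiutil attach ~/Images/MyBackup.sparsebundle   # note the device it attaches as (e.g. disk5)
diskutil list                                   # find the APFS volume in the synthesized container (e.g. disk6s1)

# Enable background defragmentation on that volume
sudo diskutil apfs defragment disk6s1 enable

# Later, after defragmentation has had time to run, detach and compact
hdiutil detach disk5                            # the image must be detached before compacting
hdiutil compact ~/Images/MyBackup.sparsebundle
```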
I did an experiment on a macOS VM whose disk image had been plagued by constant ballooning in size. First, I ensured that the QEMU VM was running with "discard=unmap", "detect_zeroes=unmap", and "rotation_rate=1". For example, a LibVirt domain XML disk definition along these lines (the stanza below is illustrative; adjust device names and paths):
```xml
<!-- Illustrative disk stanza showing the three settings; source path, target dev, and bus are placeholders -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' discard='unmap' detect_zeroes='unmap'/>
  <source file='/var/lib/libvirt/images/macos-12-1.img'/>
  <target dev='sda' bus='sata' rotation_rate='1'/>
</disk>
```
This forced the qcow2 disk image to unmap zero writes (keeping the backing image sparse) and tricked the VM into thinking the disk was an SSD ("rotation_rate=1" is the special value that marks the drive as non-rotational, i.e. an SSD).
Then, in the macOS VM, I enabled SSD TRIM via: `sudo trimforce enable`
Thus, any TRIM/discard operations in the VM get mapped to "unmap" on the QCOW2 image storage layer.
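For completeness, a sketch of that step (trimforce asks for confirmation and then reboots the machine; the check below assumes the virtual disk appears on the SATA bus):

```sh
# Enable TRIM for third-party/virtual SSDs; this prompts and then reboots
sudo trimforce enable

# After the reboot, confirm the disk now reports TRIM support
system_profiler SPSerialATADataType | grep "TRIM Support"
```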
Next, defrag was enabled for the "/System/Volumes/Data" volume ("/dev/disk1s1" in my case):
sudo diskutil apfs defragment /dev/disk1s1 enable
While Apple fails to fully and properly document the internals of "apfsd", we can glean some information from the system log. To follow the live logs we can use "log stream" and filter on the "apfsd" process in a separate terminal:
log stream --predicate 'process == "apfsd"'
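To look back at what apfsd has already logged, rather than only streaming live, `log show` works too; for example:

```sh
# Review apfsd activity from the last few hours
log show --last 3h --predicate 'process == "apfsd"'
```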
Finally, to trigger the automatic defrag routines in apfsd, we can force a read of every file on the "Data" volume (`find -x` keeps it from descending into other volumes):
sudo find -x /System/Volumes/Data -type f -exec /bin/sh -c "head '{}' >/dev/null 2>&1" \; >/dev/null 2>&1
Watching the "apfsd" logs, we then see many mentions of the APFS volume identifier: "com.apple.apfs.defrag.disk1s1"
Unfortunately, Apple fails to fully document what apfsd is doing internally. The apfsd man page is quite "sparse" (pun intended).
So we can only assume that "apfsd" defragments files as they are accessed, based on the mentions of defrag routines and of "disk1s1", the volume we enabled earlier. While this was ongoing, the QCOW2 disk image still kept its original size; shrinking it back down appears to require another tool. One common re-sparsifying trick is to fill the disk with a file full of zeroes (triggering the "detect_zeroes=unmap" behavior) and then immediately remove it:
sudo dd if=/dev/zero bs=4096 of=${HOME}/ZERO_DISK ; sudo rm ${HOME}/ZERO_DISK
For QEMU images, there are also these tools: `virt-sparsify`, or `qemu-img convert`. With `qemu-img convert`, the image can also be re-compressed to slim it down further.
For example:
qemu-img convert -p -f qcow2 -O qcow2 -c -o compression_type=zstd /var/lib/libvirt/images/macos-12-1_ORIGINAL.img /var/lib/libvirt/images/macos-12-1_REPACKED.img
I assume for an APFS sparsebundle file, there may be some diskutil command that could re-sparsify it?