iDefrag and iPartition Discontinued
Apple, for whatever reason, elected to release its new filesystem — and convert existing machines over to using it — without first publishing the filesystem specification so that utility vendors like us could update our software. Four months after the release of macOS High Sierra, it still hasn’t published the necessary information, and while it’s hard to speculate without seeing the details, it’s a good bet that supporting APFS in our utilities would be more than six months’ work. In the meantime, in spite of the messages we’ve put on our website, customers continue to purchase the products, realise they don’t work for them, and then ask for refunds (or, worse, file chargebacks through their banks). That actually costs us money and leaves a string of less than satisfied customers. We don’t want that, and you, our customers, don’t want that either.
There have also been changes in recent versions of macOS to tighten up security, which is definitely a good thing for end users but makes it very awkward for utility software to function in a reasonable manner.
This is a shame because HFS+ fragmentation will likely be with us on spinning disks for a long time. I’m also not convinced that APFS fragmentation is a non-issue on SSDs. APFS apparently includes automatic defragmentation, but there’s little information about when and how that works. Does it do anything for large files when the disk is nearly full (as APFS volumes often will be, due to snapshots)? My brand new Lightroom database, a 1.5 GB file stored on an SSD with (according to Finder) 363 GB available, is already fragmented into 10,833 pieces. Does that overhead really not matter?
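As a rough back-of-envelope figure (assuming Finder’s decimal gigabytes), that works out to:

\[
\frac{1.5 \times 10^{9}\ \text{bytes}}{10{,}833\ \text{fragments}} \approx 138\ \text{KB per fragment}
\]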
See also: Aura (Hacker News).
Previously: SuperDuper and APFS.
7 Comments
No, filesystem fragmentation does not matter on SSDs. What, you think the SSD was laying out your "contiguous" files on contiguous pages of the SSD, or even "raw" blocks 1, 2, 3, 4, etc.? Hardly.
@Joshua Storing them that way would be less efficient, so of course it doesn’t do that. The point is that SSDs are not magic: not all block layouts for a file are equally efficient, and my hypothesis is that fragmentation decreases the odds of an optimal layout. My guess is that fragmentation makes fetching from the SSD less efficient because the system has to read more blocks just to find out where the next chunks are before it can start fetching them. And those chunks are less likely to be (logically) contiguous on the SSD, so since the drive doesn’t read blocks individually, it may have to make extra read requests and waste time reading blocks that actually belong to other files.
There’s also got to be some filesystem overhead in tracking additional chunks.
"My hypothesis is that (SSD) fragmentation decreases the odds of an optimal layout."
Surely, someone over the years must have published some research on whether or not SSD fragmentation has a real-world effect, and if so, to what degree, no?
@Chucky APFS was designed for SSDs and has (some level of) defragmentation built-in, so I think that shows that Apple thinks it has a real-world effect. [Update: Looks like APFS only has defragmentation for hard drives. This may be because of concerns about excessive wear. Here are some links that suggest fragmentation does matter.]
Windows also sometimes defragments SSDs:
"Storage Optimizer will defrag an SSD once a month if volume snapshots are enabled. This is by design and necessary due to slow volsnap copy on write performance on fragmented SSD volumes. It’s also somewhat of a misconception that fragmentation is not a problem on SSDs. If an SSD gets too fragmented you can hit maximum file fragmentation (when the metadata can’t represent any more file fragments) which will result in errors when you try to write/extend a file. Furthermore, more file fragments means more metadata to process while reading/writing a file, which can lead to slower performance."
http://www.hanselman.com/blog/TheRealAndCompleteStoryDoesWindowsDefragmentYourSSD.aspx
@Peter I had some code written that does this to detect index fragmentation in EagleFiler. Basically, it calls fcntl() with F_LOG2PHYS_EXT and walks down the file counting how many contiguous chunks there are.
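For reference, here’s a minimal sketch of that kind of walk (not the actual EagleFiler code), based on the struct log2phys interface documented in macOS’s fcntl(2): with F_LOG2PHYS_EXT, the caller passes in a file offset and a byte count, and gets back the length of the physically contiguous run that starts there.

```c
/*
 * Minimal sketch (not the actual EagleFiler code) of counting a file's
 * contiguous on-disk chunks on macOS with fcntl(F_LOG2PHYS_EXT).
 * Each successful call maps one physically contiguous run, so the number
 * of calls needed to cover the file is the number of chunks.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    struct stat st;
    if (fstat(fd, &st) != 0) {
        perror("fstat");
        close(fd);
        return 1;
    }

    off_t offset = 0;
    unsigned long chunks = 0;
    while (offset < st.st_size) {
        struct log2phys l2p = {0};
        l2p.l2p_devoffset = offset;                /* in: byte offset into the file */
        l2p.l2p_contigbytes = st.st_size - offset; /* in: at most this many bytes to map */
        if (fcntl(fd, F_LOG2PHYS_EXT, &l2p) == -1) {
            perror("fcntl(F_LOG2PHYS_EXT)");       /* e.g. a hole in a sparse file */
            break;
        }
        if (l2p.l2p_contigbytes <= 0)
            break;                                 /* defensive: avoid looping forever */
        chunks++;                                  /* one physically contiguous run */
        offset += l2p.l2p_contigbytes;             /* jump to the start of the next run */
    }
    printf("%s: %lu contiguous chunk(s)\n", argv[1], chunks);
    close(fd);
    return 0;
}
```

Successive runs could also be checked for physical adjacency by comparing the returned l2p_devoffset values, but simply counting extents is enough to produce a number like the 10,833 above.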