Friday, September 15, 2017

SuperDuper and APFS

Dave Nanian:

The bad news is I’m not confident enough to say we’re going to release our APFS support day-and-date.

I know this kind of hedging is disappointing. But it’s important to note that Apple still hasn’t released any documentation on the “proper” way to create a bootable APFS volume. An example of what they have in mind was released for the very first time when the High Sierra developer release came out a few months ago, but that’s it. We basically have to make an educated guess about what they want.

We’ve designed and implemented that, and it’s significantly different from HFS+’s boot setup, with various special partitions dedicated to specific purposes (even a separate VM volume!), an entirely new volume management system, etc.


For example, what happens if you do an “Erase, then copy” from an HFS+ volume to an APFS volume? In our current version, we match the format of the source when we erase. But, HFS+ can’t be in an APFS container. So, we’d have to convert the container to a regular GUID partition. And since there might be other APFS volumes in that container, you’d end up destroying them.
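The container problem Nanian describes can be made concrete with diskutil. This is a hedged sketch, not SuperDuper’s actual code path: disk3 and the volume name are placeholders, and the destructive commands are left commented out.

```shell
# Sketch of the scenario above: HFS+ cannot live inside an APFS
# container, so "erase as HFS+" means replacing the whole container.
# "disk3" is a placeholder identifier -- check your own disk layout.

# 1. List the containers and see which volumes share yours.
#    Every sibling volume in the container would be destroyed:
#
#     diskutil apfs list
#
# 2. Deleting the container is the only way to get HFS+ back:
#
#     diskutil apfs deleteContainer disk3
#
# 3. Afterwards the disk can be repartitioned as plain GUID + HFS+:
#
#     diskutil eraseDisk JHFS+ Backup GPT disk3
```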


In particular, Apple has further tightened its System Integrity Protection process, and is completely denying access to some files on the startup volume, even when copying to a non-startup volume.


APFS doesn’t seem to be faster than HFS+ (which is not to say it won’t ever be, or that it won’t be more stable...a low bar, I know).

Mike Bombich:

Apple offers a couple helpful APFS-related knowledgebase articles here:

Apple Kbase HT208018: Prepare for APFS in macOS High Sierra
Apple Kbase HT208020: Upgrade macOS on a Mac at your institution

In regard to how CCC will work with your APFS-formatted volumes, this CCC knowledgebase article aims to answer all of the questions you might have on the subject:

Everything you need to know about Carbon Copy Cloner and APFS

Previously: macOS 10.13 High Sierra Shipping Soon, Pondering the Conversion From HFS+ to APFS.

Update (2017-09-20): Alastair Houghton:

Now, in the case of macOS 10.13, there is a bigger problem. Apple is changing filesystems. In order for a low-level disk utility like iDefrag or iPartition to function, we need to know exactly how the filesystem organises data on disk; indeed, inside our products we have pretty comprehensive implementations of HFS+, FAT and NTFS. Apple’s new filesystem, APFS, is a completely new design, and you’d have thought that Apple would give us disk utility vendors a fighting chance of getting up to speed before the release of 10.13 by releasing design documentation well in advance, but no, that hasn’t been the case this time around. The only documentation we have about the APFS volume format is this table. Yes, that document includes other information about what APFS can do, but it doesn’t include any detail of the on-disk data format other than a table comparing it to HFS+.

While it’s impossible to be certain, it’s highly likely that adding APFS support to our products, if/when Apple ever releases technical details of its volume format, will involve months of work, and since APFS is going to be the default format for many devices (specifically, anything that uses only Flash storage), as well as being an option for other situations, we simply can’t promise macOS 10.13 support right now.

Update (2017-09-25): Dave Nanian:

We’ve finished up a bunch of internal testing over the past few weeks, and there’s a beta of SuperDuper! for High Sierra and APFS linked at the bottom of this post. But it’s so exciting, in a totally nerdy way, that it would be a mistake to not follow the whole story, with its twists and turns. So let’s dive in.

Update (2017-10-02): Dave Nanian:

Interesting tidbit for the curious: if you turn on encryption, snapshots cannot be created while APFS is converting the drive.

That means that, during the encryption process, neither Time Machine nor SuperDuper can back up. So, be aware and back up first.

Update (2017-10-17): Dave Nanian:

The introduction of APFS allowed us to revisit that decision. Because its more flexible volume creation is low-impact, the risks inherent in adding and managing the Recovery volume itself are minimal. Recovery now has its own special, documented “Role” within the APFS container, and its contents follow the pattern established for Preboot. Even encryption is done differently: it’s properly managed in Preboot, which can be created and updated by a documented system tool, provided by Apple, further ensuring proper operation and compatibility as Apple makes changes and modifies requirements.

After carefully evaluating the new support and determining there were minimal risks, we decided that we could safely copy and manage Recovery for APFS containers, whether copied from APFS or HFS+ sources. And so we do.

Update (2017-11-01): Dave Nanian:

We’ve discovered during our broader Beta rollout that, due to weird bugs in Disk Utility, formatting an HFS+ drive as APFS is unreliable too. Sometimes the drive just “vanishes” and doesn’t re-mount. Sometimes it fails for no reason. Sometimes it makes the one volume unreadable until it’s erased again.


The problem is that log show -last 1m, including a kernel predicate so it only returns low-level kernel logging, can be slow. Not only that, but due to bugs in the logging subsystem, it can incorrectly return much more than one minute of logging. We’ve seen it return almost a gigabyte of log data!
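For reference, the kind of invocation he’s describing looks something like this. The exact predicate SuperDuper uses isn’t given in the post; `processID == 0` is just one common way to restrict the unified log to kernel messages.

```shell
# Show the last minute of kernel-only entries from the unified log.
# (macOS 10.12+; "processID == 0" limits output to the kernel.)
log show --last 1m --predicate 'processID == 0' --style syslog
```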

Update (2017-11-09): Dave Nanian:

SuperDuper! 3.0 has, literally, many hundreds of changes under the hood to support APFS, High Sierra, and all versions of macOS from 10.9 to the present.

SuperDuper! 3.0 is the first bootable backup application to support snapshot copying on APFS, which provides an incredible extra level of safety, security and accuracy when backing up. It’s super cool, entirely supported (after all, it’s what Time Machine uses...and it was first overall), and totally transparent to the user.

Kernel Extensions in High Sierra

Felix Schwarz:

Apple has softened its tone regarding #Kext blocking in #HighSierra:

  • No more stop signs
  • “User-Approved” instead of “Secure”. Progress!

Felix Schwarz:

Fun fact: if the Security & Privacy prefs pane is already open while installing a new #kext, no “Allow” text or button is shown.

Felix Schwarz:

Fun fact 2: contrary to what the TN suggests, #kexts installed together, but in different locations, are approved together. Sometimes. 🙃

Felix Schwarz:

Fun fact 3: This is what happens when you try to “Allow” a #Kext using Screen Sharing: nothing. Remote admins will “love” this.

He’s filed a bug that goes into detail about some of the user experience issues and how it would be better if Apple provided an API for apps to request approval or had a review process for Apple-signed extensions to install without approval:

The “System Extension Blocked” alert gives the average user the impression that an app tried to do something fishy or dangerous and was stopped by the operating system. Or - even worse - that this is a trick alert brought up by the app that tries to trick users into opening System Preferences and removing safeguards there.


In its current state, Secure Kernel Extension Loading in macOS 10.13 does not provide a good experience for either users or developers. In fact, if this feature ships as it is now, shipping a kext becomes a risk to the reputation of legitimate developers due to the optics of this feature’s implementation.

Previously: Little Snitch 4 Public Beta.

Update (2018-08-14): Thomas Reed:

So many of the problems with kext restrictions in High Sierra fall on the developer. Allow button doesn’t respond, or doesn’t appear? Kext left behind in StagedExtensions? It’s seen as the dev’s fault. 😒 We’re doing Apple’s tech support.

Update (2018-08-30): Felix Schwarz:

#Mojave’s #kext approval prompt added a much needed “Open Security Preferences” button. Thanks to the engineer who did this! ❤️

It’s a real improvement over High Sierra[…]

Update (2019-03-22): Felix Schwarz:

User Approved Kext Loading after ~ 2 years:

- still has no API to provide a good user experience

- still ignores clicks on “Approve” – and still gives the user no feedback as to why it ignores them.

- still fills my support inbox & kills my sales 😭

Update (2019-08-15): Patrick Wardle:

Apple’s “User-Approved Kext” loading is a pain for third-party developers, but aims to thwart exactly this type of (real) attack.

New App Store Review Guidelines: Gifts, Face ID, ARKit

Paul Hudson:

No app may market itself as “including content or services that it does not actually offer” – specifically iOS-based virus and malware scanners, which have always been nonsense.


Apps may now allow users to send money to others as a gift on two conditions. First, the gift must be a completely optional choice by the giver, and second, 100% of the funds must go to the receiver of the gift.

Previously: Apple Wants 30% of Tips From Chinese Chat Apps.

Update (2017-09-19): See also: App Store Review Guidelines History.

Update (2018-01-16): Juli Clover:

Apple and Tencent, the company that owns the popular WeChat messaging app, have reached a deal that will let WeChat users resume sending in-app tips to content creators, reports The Wall Street Journal.

The Incredible Growth of Python

David Robinson (Hacker News):

In this post, we’ll explore the extraordinary growth of the Python programming language in the last five years, as seen by Stack Overflow traffic within high-income countries. The term “fastest-growing” can be hard to define precisely, but we make the case that Python has a solid claim to being the fastest-growing major programming language.


June 2017 was the first month that Python was the most visited tag on Stack Overflow within high-income nations. This included being the most visited tag within the US and the UK, and in the top 2 in almost all other high income nations (next to either Java or JavaScript). This is especially impressive because in 2012, it was less visited than any of the other 5 languages, and has grown by 2.5-fold in that time.


With a 27% year-over-year growth rate, Python stands alone as a tag that is both large and growing rapidly; the next-largest tag that shows similar growth is R.


Outside of high-income countries Python is still the fastest growing major programming language; it simply started at a lower level and the growth began two years later (in 2014 rather than 2012). In fact, the year-over-year growth rate of Python in non-high-income countries is slightly higher than it is in high-income countries.
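Robinson’s figures can be cross-checked with a quick back-of-the-envelope calculation. The 2.5x and 27% numbers come from the post; the implied average annual rate is derived from them.

```python
# Python's Stack Overflow traffic grew ~2.5x from 2012 to 2017.
observed_growth = 2.5
years = 5

# Implied average annual growth rate over the whole period:
avg_rate = observed_growth ** (1 / years) - 1
print(f"{avg_rate:.0%}")  # 20%

# The most recent year-over-year rate quoted in the post is 27%,
# above the five-year average -- recent growth has been faster
# than the period as a whole.
```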

David Robinson (Hacker News):

These analyses suggest two conclusions. First, the fastest-growing use of Python is for data science, machine learning and academic research. This is particularly visible in the growth of the pandas package, which is the fastest-growing Python-related tag on the site. As for which industries are using Python, we found that it is more visited in a few industries, such as electronics, manufacturing, software, government, and especially universities. However, Python’s growth is spread pretty evenly across industries. In combination this tells a story of data science and machine learning becoming more common in many types of companies, and Python becoming a common choice for that purpose.

Update (2017-10-12): Jeff Knupp:

The buffer protocol was (and still is) an extremely low-level API for direct manipulation of memory buffers by other libraries. These are buffers created and used by the interpreter to store certain types of data (initially, primarily “array-like” structures where the type and size of data was known ahead of time) in contiguous memory.

The primary motivation for providing such an API is to eliminate the need to copy data when only reading, clarify ownership semantics of the buffer, and to store the data in contiguous memory (even in the case of multi-dimensional data structures), where read access is extremely fast. Those “other libraries” that would make use of the API would almost certainly be written in C and highly performance sensitive. The new protocol meant that if I create a NumPy array of ints, other libraries can directly access the underlying memory buffer rather than requiring indirection or, worse, copying of that data before it can be used.

And now to bring this extended trip down memory lane full-circle, a question: what type of programmer would greatly benefit from fast, zero-copy memory access to large amounts of data?

Why, a Data Scientist of course.
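Knupp’s zero-copy point is easy to see from pure Python: array.array exposes the buffer protocol, and memoryview reads and writes the same underlying buffer without copying it. (A minimal standard-library illustration, not NumPy itself.)

```python
import array

# array.array stores ints in contiguous memory and exposes
# the buffer protocol, much like a NumPy array does.
ints = array.array('i', range(10))

# memoryview wraps the same buffer -- no copy is made.
view = memoryview(ints)
view[0] = 42                # writes go straight through to the array
print(ints[0])              # 42

# Slicing a memoryview is also zero-copy.
print(view[:5].tolist())    # [42, 1, 2, 3, 4]
```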

Equifax Breach

Bruce Schneier:

Last Thursday, Equifax reported a data breach that affects 143 million US customers, about 44% of the population. It’s an extremely serious breach; hackers got access to full names, Social Security numbers, birth dates, addresses, driver’s license numbers -- exactly the sort of information criminals can use to impersonate victims to banks, credit card companies, insurance companies, and other businesses vulnerable to fraud.

Many sites posted guides to protecting yourself now that it’s happened. But if you want to prevent this kind of thing from happening again, your only solution is government regulation (as unlikely as that may be at the moment).

The market can’t fix this. Markets work because buyers choose between sellers, and sellers compete for buyers. In case you didn’t notice, you’re not Equifax’s customer. You’re its product.

Rich Mogull:

Ignoring all that, the real issue is that one of the companies “trusted” with determining our financial future based on deep records of personal information was breached… and due to the current nature of our financial system, we can’t effectively protect ourselves. Our best options offer only limited protection and come at a hefty cost, due in large part to lobbying by the credit rating agencies themselves.


In each of these cases, I was offered some amount of free credit monitoring, just as Equifax has done in this latest breach. However, the free credit monitoring lasts only for a year, yet the bad guys can use my SSN for the rest of my life.


The first step is to make things harder for a criminal to create new accounts in your name. There are two tools to do this, fraud alerts and credit freezes, but only one actually works. You can find information, phone numbers, and links on the U.S. Federal Trade Commission’s Identity Theft Web site:

A fraud alert places a flag on your account for 90 days. During that time a business needs to verify your identity before it can create a new account in your name. There used to be companies that could automatically renew your 90-day alerts for you, but the credit agencies sued them out of existence, which was a travesty. So, if you want an indefinite fraud alert, you need to repeat the process yourself every time it expires.

Update (2017-09-19): Jeffrey Goldberg:

There are many important things to ask about this incident, but what I am focusing on today is this: why has non-secret information become sensitive? None of those numbers were designed to be used as secrets (including social security numbers and credit card numbers), yet we live in a world in which we have to keep these secret. What is going on here?

Matthew Green:

While many people have criticized Equifax for its failure, I’ve noticed a number of tweets from information security professionals making the opposite case. Specifically, these folks point out that patching is hard. The gist of these points is that you can’t expect a major corporation to rapidly deploy something as complex as a major framework patch across their production systems. The stronger version of this point is that the people who expect fast patch turnaround have obviously never patched a production server.

I don’t dispute this point. It’s absolutely valid. My very simple point in this post is that it doesn’t matter. Excusing Equifax for their slow patching is both irrelevant and wrong. Worse: whatever the context, statements like this will almost certainly be used by Equifax to excuse their actions. This actively makes the world a worse place.

Bloomberg (via Hacker News):

Equifax Inc. learned about a major breach of its computer systems in March -- almost five months before the date it has publicly disclosed, according to three people familiar with the situation.

Update (2017-10-03): Sarah Buhr (via Hacker News):

In a continued effort to pass on any responsibility for the largest data breach in American history, Equifax’s recently departed CEO is blaming it all on a single person who failed to deploy a patch.


There’s a mantra at my company that you can’t assign blame for a problem to a particular person. If one person is capable of breaking your system, you have a bad system. The focus isn’t on finding the one person or the one mistake that caused it, but fixing the process so one person or one mistake can’t wreak that much havoc. I think it’s a very good philosophy.

Update (2017-10-27): Lorenzo Franceschi-Bicchierai:

Months before its catastrophic data breach, a security researcher warned Equifax that it was vulnerable to the kind of attack that later compromised the personal data of more than 145 million Americans, Motherboard has learned. Six months after the researcher first notified the company about the vulnerability, Equifax patched it—but only after the massive breach that made headlines had already taken place, according to Equifax’s own timeline.

Update (2017-11-08): Bruce Schneier:

Last week, I testified before the House Energy and Commerce committee on the Equifax hack. You can watch the video here. And you can read my written testimony below.

Update (2018-12-12): Adrian Sanabria:

The underlying conclusion throughout the Equifax breach report is that:

1. Staff was AWARE of deficiencies

2. Proper processes, tools and policies existed

3. Lack of leadership and accountability allowed processes to fail, tools to fall into disrepair, and policies to be disregarded.

Update (2018-12-19): Bruce Schneier:

The US House of Representatives Committee on Oversight and Government Reform has just released a comprehensive report on the 2017 Equifax hack.