Monday, April 22, 2019

The True and False Security Benefits of Mac App Notarization

Jeff Johnson (tweet):

Notarization is a kind of two-factor authentication. In order to notarize an app, you first need to sign it with your Developer ID cert, but then you have to submit it to Apple using the Apple ID and password of your developer account. If your signing cert is compromised, that by itself would no longer be sufficient to distribute the app.


A myth has been spread that Developer ID certs can only be revoked in entirety, meaning that all versions of all apps signed with a Developer ID cert would be invalidated when the cert is revoked. Apple has contributed a bit to this myth[…]


The ability of Mac apps to update themselves shows that the notarization malware scan is security theater. Apple’s notarization service scans for malware, but malware authors don’t need to submit malware to Apple! They can submit a perfectly innocent app for notarization, get the app notarized, and then flip a switch on their own server to download a malware software update when the victim opens the “innocent” notarized app. The downloaded malware update doesn’t need to be notarized, because the software updater will delete the quarantine attribute, thus bypassing Gatekeeper.
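The loophole hinges on the `com.apple.quarantine` extended attribute: Gatekeeper only assesses files that carry it, and an in-app updater that writes the new binary itself never sets it. A minimal sketch (the macOS `xattr` invocations are shown as comments; the app name, paths, and sample attribute value are hypothetical):

```shell
# A freshly downloaded app carries com.apple.quarantine, so Gatekeeper
# checks it on first launch:
#
#   xattr -p com.apple.quarantine ~/Downloads/Innocent.app
#   0083;5cbd6ac1;Safari;A1B2C3D4-...
#
# Files the app later writes itself (e.g. a self-applied update) get no
# such attribute unless the updater opts in, so Gatekeeper never sees
# them. The attribute can also simply be deleted:
#
#   xattr -d com.apple.quarantine ~/Downloads/Innocent.app

# The attribute's value is roughly "flags;download-time;agent;uuid".
# A trivial helper to pull out the hex flags field:
quarantine_flags() {
    printf '%s\n' "$1" | cut -d';' -f1
}

quarantine_flags "0083;5cbd6ac1;Safari;A1B2C3D4"   # prints 0083
```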

I guess the questions are:

I suspect that the answers are “no” and “yes.” Apple presumably believes otherwise. (They are surely aware of this loophole, and I don’t see why they would bother developing notarization if they didn’t believe in it.)

The malware scan is unlikely to catch serious malware authors, but it does punish legitimate developers, because they have to submit their apps and then sit and wait for Apple’s response, which Apple claims should take less than an hour (already too long) but in practice has sometimes taken much longer, according to developers I’ve heard from. Just yesterday, Apple’s Developer System Status showed two outages of 90 minutes each for the Developer ID Notary Service. The whole point of distributing software outside the Mac App Store is to avoid problems like these: submitting to Apple for approval and waiting for their response. But now Apple is imposing those very same problems on software outside the App Store. If notarization is to be required at all, I think it should skip the security theater of malware checks and simply notarize the app on submission, a process that would be almost instantaneous.

I’m not sure that the malware scan is the reason that notarization can sometimes take a long time, because I’ve had the same problem with “Processing for App Store” when submitting via App Store Connect.

Besides the notary service being down, mandatory notarization is risky for developers because code signing requirements can change (and have changed) without warning, and the malware scan might falsely flag a legitimate app as malware. I’m not sure what you’re supposed to do in that case, but it would likely take a while to resolve. Developers know that when the App Store scanner falsely flags their app for violating a rule, contacting Apple through official channels rarely leads to a resolution. Instead, they have to act like an actual malware author and try to obfuscate their code to fool Apple’s tools.


Update (2019-04-23): Jeff Johnson:

“signing applications with your Developer ID certificate provides users with the confidence that your application is not known malware”

Isn’t that the exact same story we’re being told again with notarization? Fool me once, shame on you, fool me twice…

Update (2019-04-28): Todd Ditchendorf:

I fully appreciate the criticisms on this, but I can think of one good reason why Notarizing is not just Security Theater: It gives the responsible developer some “confirmation” that his app does not unintentionally contain malware. Like a mandatory virustotal dot com check.

Update (2019-04-29): Stephane:

Strangely, I tend to remember that Apple was not able to detect XcodeGhost by itself and prevent infected iOS apps from entering the App Store. So why should we believe they would be more effective with the notarization process?

Update (2019-05-09): Kyle Howells:

I think app notarisation is the biggest threat to the Mac remaining the open app platform we know today.

It can act as a Mac App Store-style sandboxing and private-API gate at the flick of a switch.

And we all just have to hope Apple will never flick that switch.

Update (2019-05-10): Jason Snell:

Yes, it’s possible that Apple could use this approach to ban most third-party apps outside of the App Store, but I don’t think that’s the intent. Instead, I think this is yet another example of how Apple wants to gain some of the benefits of App Store-style security without forcing every piece of Mac software through the Mac App Store.


"Can notarized malware convince users to update the app and download the actual malware?"

I find this question a bit strange. Malware isn't going to be polite enough to ask users if they want to update, it's just going to update whether the user likes it or not. ;-) Apps can download updates and relaunch themselves without any permission whatsoever. Even Sparkle supports automatic updates (as a convenience).

Most criticisms of Apple’s security efforts, including this one, focus on one scenario: malware somehow bypasses Notarization, Gatekeeper, XProtect, and MRT, and then it can bypass Notarization, Gatekeeper, XProtect, and MRT. That is circular logic. No anti-malware product has ever been perfect, yet that is what people demand from Apple. The goal is never perfection, but making malware not worth anyone’s time or effort. The more skilled the malware developer, the less likely they will want to bother, as there are other, more lucrative opportunities available.

Furthermore, malware scans can detect suspicious behaviour in executables and flag them for special attention. I don’t know if Apple does this or not, but I don’t know why so many people start with a base assumption that Apple’s security efforts are trivially easy to bypass. I know that many popular bloggers directly make that claim, but it just isn’t true. Where is the evidence for this? Where are the malware infections? I’m not talking about adware and scamware; that’s not the same.

To me, Notarization is more of an Apple-funded competitor to VirusTotal. I’ve had a number of false positives in VirusTotal and a couple of stalkers who use those false positives as evidence that I am a malware author. I can tell you from experience that VirusTotal will categorically refuse to remove any false positive report. They sell a service where developers can upload apps and modify them in private until the dozens of AV engines no longer detect them. Apple is giving us that same service, for free.

@Jeff I guess you’re right. I don’t see those automatic updates because I block connections with Little Snitch.

@John What’s been presented seems like a trivially easy bypass. You haven’t presented any reason to believe that it wouldn’t work. If it does, it’s not “demanding perfection” to question what the point is. I don’t think deploying this would require much skill because (as I think you’ve noted) the malware/adware apps are not bespoke; they’re built using toolkits. So all it takes is for one developer to update the toolkit.

I believe Apple has stated that notarization does not do runtime behavior analysis, and that is one reason it’s supposed to be faster than App Review. In any event, there wouldn’t be any suspicious behavior from the notarized trampoline app, anyway.

I don’t see much evidence of current infections, either. But you could also use that to argue: if making malware is so difficult, or Apple’s efforts are already so effective, why do we need mandatory notarization?

"because they have to submit their apps and then sit and wait for Apple’s response, which Apple claims should take less than an hour (already too long)"

Wait a second. Does this mean that you can't integrate the notarization step in an automated build workflow, contrary to the codesigning step? Seriously?

@John "I don’t know why so many people start with a base assumption that Apple’s security efforts are trivially easy to bypass."

Could it be related to the fact that even before Mojave was released, a security flaw in the stupidly annoying FDA mechanism was already found? Or that a security flaw in secure kernel extension was also found a few days after High Sierra release?

So, no it's not trivial to every one. But to the usual suspects when it comes to finding security flaws in macOS/iOS, it certainly does seem trivial.

@someone You can integrate it, but it’s not synchronous like codesign, and the builds will take an unpredictable amount of time. Basically, you run one tool to get a request ID, and then you have to poll and keep checking whether the request has been processed, and then either staple or fail the build.
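The asynchronous workflow described above can be sketched as a build-script step, assuming the Xcode 10-era `altool`/`stapler` tools; the bundle ID, paths, and `AC_USER`/`AC_PASS` credential variables are placeholders:

```shell
#!/bin/sh
# altool prints "RequestUUID = <uuid>" on a successful upload.
parse_uuid() { sed -n 's/.*RequestUUID = \([0-9a-f-]*\).*/\1/p'; }

# --notarization-info prints an indented "Status: <state>" line.
parse_status() { sed -n 's/^[[:space:]]*Status: //p'; }

notarize_and_staple() {  # usage: notarize_and_staple MyApp.zip MyApp.app
    # 1. Submit the archive and capture the request ID.
    uuid=$(xcrun altool --notarize-app --primary-bundle-id "com.example.myapp" \
            --username "$AC_USER" --password "$AC_PASS" \
            --file "$1" 2>&1 | parse_uuid)
    [ -n "$uuid" ] || { echo "upload failed" >&2; return 1; }

    # 2. Poll until Apple reports a terminal state.
    while :; do
        status=$(xcrun altool --notarization-info "$uuid" \
                  --username "$AC_USER" --password "$AC_PASS" 2>&1 | parse_status)
        case "$status" in
            success) break ;;                                  # approved
            invalid) echo "notarization rejected" >&2; return 1 ;;
            *)       sleep 30 ;;                               # "in progress"
        esac
    done

    # 3. Staple the ticket so offline Macs can verify it.
    xcrun stapler staple "$2"
}
```

From a build script you would call `notarize_and_staple build/MyApp.zip build/MyApp.app` and fail the build on a non-zero return; as noted above, the wall-clock time of the polling loop is unpredictable.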

@Michael Then I can only interpret this as "Apple wants to kill the Mac platform".

I think it’s crazy that it can’t be instant, so that it could be part of the automated build process.

Of course it can be bypassed initially. That is true of most AV software. A malware developer could sign the installer and then the installer could download more malware. But Apple can also update its protections and stop subsequent installations. Yes, the malware developer can update the code and release a new version. But Apple can do that too. That has been the standard AV software design since time immemorial and people suddenly proclaim it pointless when Apple implements it. Would it be better if Apple released a more traditional, client-side AV module with updating logic (that malware could block) and constant hard drive scanning? Apple’s approach sure seems more efficient to me.

I didn’t mean to imply there was any runtime behaviour analysis. I don’t know what Apple’s doing on the back end. But there is no reason why Apple can’t do static analysis and flag apps as allowed, but suspicious, and keep an eye on them. Apple is doing just that by applying the new behaviour only to new developers. Perhaps there is other metadata associated with the submission process that is part of the equation.

I am seeing more adware signed with developer accounts created under fake names. I assume this is what Notarization is designed to target. If Apple did nothing, then eventually there would be a significant malware incident and people would be complaining that Apple didn’t do anything to stop it. Well, Apple is doing something.

@John But isn’t “initially” the only time that matters for notarization? The point is to catch the app before it can do anything. Otherwise, it’s basically “Developer ID + Apple gets a copy of the binary.” Gatekeeper can already stop stuff after the fact. The question is, what benefit does notarization provide over Gatekeeper + XProtect? And is it worth the tradeoffs?

@Michael I suspect the “Apple gets a copy of the binary” may be a big part of this design. This gives Apple a copy of every app that anyone wants to distribute. Think of all that Apple can do with that information.

It is also annoying that Apple lets you notarise “legacy” applications, but their limit on “legacy” excludes currently shipping versions. I managed to notarise all my old applications going back to version 3.x, but I could not notarise the two most recent versions (8.2.3 and 8.2.4) because they are not “hardened”. If Apple thinks I am going to re-sign or otherwise in any way muck with the shipping version, they are out of their minds.

Also note that you don’t actually have to attach (staple) the notarisation ticket to your app in order to distribute it; stapling only covers the case where the target Mac does not have Internet access. So you could submit the app to Apple, ignore the result, post it anyway, and hope for the best. At least until Apple requires notarisation, that also helps when you are shipping a bug fix: in theory you could make the update available before the notarisation completes.

@John yes, nothing is perfect, and yes we expect a lot from Apple, but the security it provides also has to be worth the cost in time and effort. You lock your front door because it makes it harder for a burglar to get in, but you don’t nail it shut each night. Of course Apple’s approach is more efficient, because it doesn’t actually scan the things that need to be scanned.

Disinfectant by John Norstad was the best anti-virus application ever produced (IMHO). It worked by patching the system exactly where the virus made its first move, detecting the virus and shutting it down. It only worked with known viruses, but because the Mac is relatively secure, and because John was very good and very fast, it could detect any virus long before it propagated to your Mac. And it had essentially no footprint: it used almost no resources.

Apple is in a position to do that sort of thing too. But they have chosen differently, and part of that is their motivation: lock the Mac down like an iPhone and control third-party OS X developers like they can with iOS.

@Michael, I mean initially in terms of the malware’s distribution, not in terms of an individual machine. Here, Apple’s approach is similar to that of other AV vendors. It just sucks to be one of the first few people to encounter the malware. But that is where MRT comes in. Once Apple finds and stops the malware, MRT can go back in and clean up the mess on the machines of those unlucky folks.

@Peter, sorry, but I don’t accept anyone just throwing out blanket statements like “it doesn’t actually scan the things that need to be scanned”. Do you have some specific things in mind? If so, what are they?

@John Again, what you’re describing doesn’t seem to relate to notarization. Apple could already stop it afterwards with Gatekeeper + XProtect.

It doesn’t actually scan apps that aren’t quarantined. That’s what the original post was about.

@Peter I have fond memories of Disinfectant, and I think that may be the last time I encountered a Mac virus.

Gatekeeper blocks malware (and pretty much everything, really) before it is installed. XProtect blocks malware that somehow bypasses Gatekeeper before the malware can be executed. MRT removes malware that was able to get past both of those.

Notarization is simply an acknowledgement that the credit card industry, upon which the $99 Developer Membership and Developer ID certificates are based, is fraudulent. The signed malware that I’ve been seeing recently has all been using personal developer accounts, not business ones.

The problem with that circular logic is that any app that downloads malicious software would reasonably be considered malicious itself. Therefore, it would be subject to the same kinds of malware scanning and could be rejected by notarization.

@John "Gatekeeper blocks malware (and pretty much everything, really) before it is installed"

Nope. It’s just a road block when you open a document or execute a binary. The item is already on disk, a.k.a. installed.

"XProtect blocks malware that somehow bypasses Gatekeeper before the malware can be executed."

By bypassing, do you mean the user using the contextual menu in the Finder and choosing Open…? Or the user executing xattr -d in the Terminal to remove the annoying quarantine flag?

That’s correct. The item is on disk, but will not execute or install (depending on the item) without confirmation.

XProtect is different. You have to boot into recovery mode and turn off SIP, and maybe some other stuff, if you want to bypass XProtect. I’ve done it once, but I can assure you that XProtect is a hard stop that users can’t reasonably bypass even if they want to.

Just to chime in on the build process with notarization.

Both my apps (Acorn and Retrobatch) have integrated notarization into the build process. It was annoying to set up, but it’s been pretty smooth since then (I first added it a couple of months ago).

I wrote a tool that gets called from the build script, which uploads and waits for the results of the notarization (by polling every 30 seconds or so). I just ran a build to see what the current wait time is, and it was 3 minutes and 13 seconds.

Is the kind of malware this could conceivably prevent even a threat average Mac users should worry about?

We live in a world where legitimate apps upload a ton of your personal data to their servers, and then accidentally leak them to the world. That's the real threat, and this does nothing to combat that.

The actual value this provides seems completely out of proportion with its cost.

There's an additional aspect of notarization which I think might actually be more important than the "malware pattern check".

It's the Hardened Runtime: notarization requires that the executable use it. (There are exceptions for "legacy" submissions; e.g. I submitted BBEdit 11.6.8 for notarization and there was no problem.)

By default, the hardened runtime mandates certain security-conscious restrictions: no executable memory, no manipulation of the dyld environment (relevant to load paths); no loading of code (plug-ins) signed by other developers; no access to microphone, camera, contacts, calendars, location services, photos, or Apple Events.

All of these things can be modified via entitlements (in some cases paired with property list values). The entitlements also trigger user interaction in cases where user data is potentially at risk (“AppName wants to access your Contacts”). And of course there are issues that have been discussed (user fatigue, the AEpocalypse, and so on). But there are also some tangible security benefits: in particular, it means that a notarized app can’t do these things silently (or at all).
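As a concrete sketch of how those defaults get relaxed: the exceptions live in an entitlements plist that is passed to codesign along with the hardened-runtime flag. The key names below are real hardened-runtime entitlement identifiers; which (if any) a given app needs is entirely app-specific, and the file/identity names are placeholders.

```shell
cat > MyApp.entitlements <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Allow executable memory (e.g. for a JIT). -->
    <key>com.apple.security.cs.allow-jit</key>
    <true/>
    <!-- Allow loading plug-ins signed by other developers. -->
    <key>com.apple.security.cs.disable-library-validation</key>
    <true/>
    <!-- Allow sending Apple Events (shows a consent prompt). -->
    <key>com.apple.security.automation.apple-events</key>
    <true/>
</dict>
</plist>
EOF

# Signing with the hardened runtime plus these exceptions (macOS):
#   codesign --force --options runtime \
#     --entitlements MyApp.entitlements \
#     --sign "Developer ID Application: ..." MyApp.app
```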

All this to say: I think it's a mistake to fixate on the "malware detection" aspect of notarization, because the implications of the hardened runtime restrictions might actually be more significant in the long run.

Rich, I don't think anyone denies that there are security benefits to the hardened runtime. However, I also don't think it's a mistake to fixate on the malware detection aspect of notarization, because

1) Enabling the hardened runtime doesn't require that you upload your app to Apple and get approval in order to distribute. This is by far the most controversial aspect of notarization.
2) As you acknowledge, the hardened runtime is not technically a requirement of notarization.
3) In my opinion, Apple's main goal for notarization is to display "We checked this app for malware" to users in the Gatekeeper dialog. Apple couldn't explain the hardened runtime to users even if they wanted to, since it's far too technical, and anyway developers can check all the boxes in Xcode to disable most of the protections.

@Rich I think you’re right that the hardened runtime is really important. But AFAIK it’s separate from notarization. Apple could have simply mandated that Developer ID apps be hardened, and that would have provided the same benefits (and generated far fewer complaints). The benefits due to notarization itself are the ones mentioned in the original post, plus Apple getting a copy of the binary (which lets them do more analysis than client-side malware detection).

Interesting twist on my previous comment:

>It is also annoying that Apple lets you notarise “legacy” applications, but their limit on “legacy” excludes currently shipping versions. I managed to notarise all my old applications going back to version 3.x, but I could not notarise the two most recent versions (8.2.3 and 8.2.4) because they are not “hardened”. If Apple thinks I am going to re-sign or otherwise in any way muck with the shipping version, they are out of their minds.

Since my failed attempt to notarise version 8.2.4 and 8.2.3 they have spontaneously become notarised. So clearly Apple has kept the data, adjusted the requirements, rerun the notarisation system and added them to the database of notarised signatures. The original process log still says they failed notarization, and I never got an email about the change, but clearly Apple can take an app from unnotarised to notarised as well as the reverse.

At least it is one less thing on my todo list now!
