Thursday, September 6, 2012

Code by Kevin Leaves the Mac App Store

Kevin Walzer:

These apps have since been rejected three times by the App Store reviewers, and at least part of the problem was the permissions I was requesting for these apps in the sandbox environment. Specifically, the sandboxing environment won’t let me call a system command-line tool to launch Safari (to view my product’s web page) or Mail (so the user can contact me with questions).

This may be a small thing, but to me it’s the proverbial straw that breaks the camel’s back. I’ve used this functionality in my apps for years, I regard it as basic—the ability to contact me via an item in the help menu is simple customer service—and I am unwilling to remove it. Restricting such basic system access is simply ridiculous.

My guess is that he could get this particular functionality working in the sandbox by rewriting his apps to use a different API. However, one of the problems with the sandbox is that adopting it is an open-ended process. You can get one thing working and find that something else doesn’t work—either due to the sandbox’s design or a bug—or that it behaves differently on a different version of Mac OS X. Little of this behavior is documented.

This is also going to mean a change in my development processes. For the past couple of years I have had a cramped and limited view of what my apps could do; I wanted to make sure they did not run [afoul] of App Store guidelines. No more. I will go back to developing the way I prefer: taking full advantage of the Mac’s powerful resources.

Update (2012-11-03): Kevin Walzer:

None of my deep frustrations with the Mac App Store have changed—its slow review time, the technical limitations that sandboxing imposes, and more. But a rational assessment of where my sales [come] from says that I can’t ignore the Mac App Store—it truly does enable sales that I couldn’t achieve elsewhere.

12 Comments

"My guess is that he could get this particular functionality working in the sandbox by rewriting his apps to use a different API."

Indeed. Though IIRC Kevin likes to use Tk rather than Cocoa, so LaunchServices' LSOpenCFURLRef might be more up his street.
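
For what it's worth, a rough sketch of that route in plain C might look like this (the URL is just a placeholder, and error handling is pared down to the minimum):

    /* Open a URL via Launch Services instead of shelling out to /usr/bin/open.
     * Build with: cc openurl.c -framework CoreServices -framework CoreFoundation -o openurl
     */
    #include <CoreServices/CoreServices.h>

    int main(void) {
        CFURLRef url = CFURLCreateWithString(kCFAllocatorDefault,
                                             CFSTR("http://example.com/"), /* placeholder */
                                             NULL);
        if (url == NULL)
            return 1;

        /* Hands the URL to the default handler (Safari for http, Mail for mailto);
         * no shell is involved. */
        OSStatus err = LSOpenCFURLRef(url, NULL);
        CFRelease(url);
        return (err == noErr) ? 0 : 1;
    }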

"However, one of the problems with the sandbox is that adopting it is an open-ended process."

Sure. Apple are trying to close the stable door years after the horse has bolted. Frankly, OS X and Windows have _never_ deserved the label 'consumer OS', not from a safety perspective. Cleaning up that mess now is inevitably messy, unpleasant and slow (especially when the OS vendors are still trying to figure it all out themselves).

On this particular occasion though, I am going to say Apple is 100% in the right and Kevin completely in the wrong. If you're going to flounce out of the app store, at least do it over a legitimate grudge, as opposed to merely being called out on your own sloppy work.

Shelling out to 'open' just to fire off a URL is pure cowboy coding, and Apple are absolutely right not to respect such hackery. As soon as an application demands access to the entire shell environment, it's creating a huge potential attack area, not to mention copious opportunity for ordinary snafu. (Think of all the times some 'simple' shell script has gone hilariously wrong due to some genius developer failing to quote every variable so it expands appropriately. All it takes is a single file path with a completely legitimate space in it...) Calling a dedicated, readily available URL opener function makes your program's intent absolutely clear (essential if it's to be sandboxed effectively) and minimises opportunities for things to go awry.
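
To make the 'legitimate space' point concrete, a contrived C illustration (the file name is made up):

    /* The quoting snafu in miniature: splicing a name into a shell command
     * line lets a space (or a ';', or a '$') change the command's meaning,
     * whereas a discrete exec argument is never parsed by a shell at all.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        const char *path = "My Notes.txt";   /* hypothetical file name */

        /* Fragile: the shell word-splits the path into "My" and "Notes.txt". */
        char cmd[1024];
        snprintf(cmd, sizeof cmd, "open %s", path);
        system(cmd);

        /* Safer: the path reaches 'open' as a single argument, unparsed. */
        pid_t pid = fork();
        if (pid == 0) {
            execlp("open", "open", path, (char *)NULL);
            _exit(127);
        }
        if (pid > 0)
            waitpid(pid, NULL, 0);
        return 0;
    }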

Nobody's telling developers what they can or can't do on their own machines, but as soon as you develop code for _other_ people to run on theirs, you owe them a basic duty of care. Following best practices and coding defensively should be the absolute baseline, not an optional extra.

@has: If I wanted to avoid the shell environment, I'd be coding on Windows. I am perfectly capable of dropping down into C to call an appropriate API, but this just strikes me as ridiculous. Why does Apple include CLI tools like "/usr/bin/open" if they are not to be used? The Unix foundation is, or used to be, one of the selling points of OS X.

"Shelling out to 'open' just to fire off a URL is pure cowboy coding, and Apple are absolutely right not to respect such hackery."

Because it's well known that Apple would never have allowed a website to execute a command in the Terminal at one point in time… I just keep forgetting Safari has always been an Apple product.

To be less sarcastic:

The issue is that Apple wants developers to move from a 100% open platform to a 100% closed one (from a safety point of view) in one step, while allowing themselves more time to make the transition.

That Apple are slovenly hacks too and often fail at eating their own dog food is not news. Sure they should be going first to clear the way for everyone else, but welcome to business: a tank of ravenous sharks deciding who'll be dinner first.

However, Apple's own double standards do not change the fact that security is not something you can do incrementally. There's no such thing as '80% secure' - a system is either secure or it's not. If you really want to rag on Apple, take them to task for waiting too long to start on the problem; like, 10-20 years too long. (Ditto MS and anyone else who's ever claimed to sell a 'consumer OS'.)

Blocking a publicly distributed application from accessing the shell isn't about infringing the developer's rights, it's about protecting the user's. A developer's right to do whatever they damn well please ends at their own machines.

The fundamental problem is that the software industry, unlike every other type of product manufacturer, isn't held fully accountable whenever their product screws up. Were software vendors held legally liable every time their product ate a user's data, they'd be _first_ in line begging for someone else to take that responsibility off them. It's a whole industry of cowboys, dodging difficult problems for short-term expediency while racking up all sorts of long-term penalties that will have to be paid for eventually. But as long as it's someone else who gets dumped with that pain they can continue not caring.

Realistically though, this cannot go on - the whole stack of cards will eventually collapse under its own incompetent weight. The slow realisation by 'consumer OS' manufacturers that the user's own desktop is a thoroughly insecure, untrusted environment and must be treated accordingly is one small step - if badly belated - in the right general direction.

"Blocking a publicly distributed application from accessing the shell isn't about infringing the developer's rights, it's about protecting the user's. A developer's right to do whatever they damn well please ends at their own machines."

Meh.

A publicly distributed app that accesses the shell should be labelled as such by the OS. At that point, an educated user can make a decision on whether or not to use the app.

In a more perfect world, Apple would mark apps that have no access beyond the sandbox as safe, spotlight them, and make them easy to install. But at the same time, they could allow apps that have broader access, put up lots of warnings about them, and force the user to click thru lots of warning boxes before installing them.

They could even ship the MAS app to only display 'safe' apps by default, with a pref to allow users to see apps that want broader access. That way, 'pro users' could get access to more ambitious apps thru a regulated mechanism that would allow them to see exactly what an app that wants broader access is allowed to do.

You really can do security incrementally if you provide a sandbox for the kids to play in while providing regulated details for 'pro users' who want more useful apps, but still want them to only do what they claim they can do. But all that requires Cupertino to care about 'pro users', and I think we've known that's a losing battle for a while now.

There are many ways to skin the security cat, but not all of those ways need to be as infantilizing as Cupertino's current path. (And a goodly part of the reason for Cupertino's current security approach is that it's about things other than security, of course.)

@has

Shelling out to 'open' just to fire off a URL is pure cowboy coding, and Apple are absolutely right not to respect such hackery. As soon as an application demands access to the entire shell environment, it's creating a huge potential attack area, not to mention copious opportunity for ordinary snafu.

Kevin’s app is not demanding access to the entire shell. As far as I know, it doesn’t use “sh”. He just wants to execute the “open” command. Apple could have made this command work in the sandbox—what it’s doing is approved for the sandbox via other APIs—but didn’t.

Secondly, I know you know this but it isn’t clear from your comment: the sandbox protects you even when an application shells out. If a shell script tries to do something that wouldn’t be allowed via direct APIs, the sandbox blocks it. That’s why 10.8 introduced NSUserUnixTask, for executing shell scripts that run outside the sandbox.

Chucky: "A publicly distributed app that accesses the shell should be labelled as such by the OS. At that point, an educated user can make a decision on whether or not to use the app."

Users are educated in the same way that developers are above average. (See also: the Dunning-Kruger effect.)

The problem is that the user has to be right 100% of the time while the malware author needs them to be wrong only once. Even the smartest user can have a momentary lapse. Furthermore, once a bunch of legitimate apps have conditioned a user to think that granting shell access is a-ok, it makes it that much easier for a non-legit app to gain that same trust.

Rule #1 of defensive programming: minimise the surface area where screw-ups (including malicious acts) can occur. Don't use a wide API where a narrow one will do; don't write a temp file where a pipe can be used, etc.
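
As a small sketch of the second half of that rule, reading a command's output through a pipe rather than a temp file (the command here is purely illustrative):

    /* popen() gives you the output directly: no world-writable directory,
     * no predictable file name for anything else to race against or tamper with.
     */
    #include <stdio.h>

    int main(void) {
        FILE *p = popen("/usr/bin/uname -r", "r");
        if (p == NULL)
            return 1;

        char line[256];
        while (fgets(line, sizeof line, p) != NULL)
            fputs(line, stdout);

        return (pclose(p) == -1) ? 1 : 0;
    }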

"In a more perfect world, Apple would mark apps that have no access beyond the sandbox as safe, spotlight them, and make them easy to install. But at the same time, they could allow apps that have broader access, put up lots of warnings about them, and force the user to click thru lots of warning boxes before installing them."

This approach doesn't work. More warning boxes just make users click through them more quickly. You can't honestly call a product a 'consumer OS' if it drops the consumer on the floor and then blames them for it. But that's what MS and Apple do, and a disgraceful percentage of geeks and nerds think that is just peachy (well, I suppose it gives them someone to look down on).

"But all that requires Cupertino to care about 'pro users', and I think we've known that's a losing battle for a while now."

Indeed. The only pro users Apple really needs to care about are the ones that put apps in their app store. But you know, Apple are a business, not your personal friend. If they can gain a billion dollars' worth of new users by blowing off a million dollars' worth of old users, of course they're going to do it. Don't like that? Find another platform. I run Linux, OSX, Win7 and iOS myself - hate 'em all, but at least it's in different ways so it kinda balances out a bit.

The vast majority of personal computing users should be on a heavily curated platform for their own sake and each others'. This is not really any different to the pre-PC days, when every system was managed by a professional administrator and every advanced user was technically adept. For most users, the tool itself is not interesting, only what it lets them do; being required to do technical risk assessments and security audits is as far from their list of interests as it gets.

Frankly there are a lot of incredibly hard and often ugly issues that need to be worked through, including absolute fundamentals like freedom, privacy, business/government/criminal intrusion, responsibility, liability, etc. Issues that should've been faced up to from day one, not swept under the rug for the sake of the next quarterly report, several decades and counting. And with all parties trying to game the system to their own ends at every single turn. But that doesn't mean the issue can be dodged any longer, and OS and app vendors who continue to do so are doing their users an almighty disservice, if not outright wrong.

Like I say, if software vendors had always carried the same legal liabilities as Ford or Merck, we wouldn't even be having these discussions today. It's a mess.

@Michael: I believe part of the problem is that Tcl likes to write temp files for this sort of activity. And temp files are a classic attack vector, so given the choice between an approach that uses them and an approach that doesn't, it's an absolute no-brainer which should be used. Not using a feature you don't absolutely require means you don't even have to consider what the security implications of using it might be, never mind checking for sure you've covered every last potential hole: the design is safe by default.

It's just good policy. Remember, the sandbox is an assist, not a magical-pixie-dust fix to all PC security. It remains a coarse and possibly imperfect regulatory mechanism, so the fewer and narrower the exceptions an application requires, the more confidence one can have that nothing's been missed.

"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." -- C A R Hoare

@has

I believe part of the problem is that Tcl likes to write temp files for this sort of activity.

I did a simple test, and that doesn’t seem to be the case here. It would be interesting to hear from Kevin what the specific problem was.

I agree about minimizing the surface area, but there are tradeoffs. For example, one issue I’ve encountered recently with SpamSieve (not sandboxed, but a future version possibly could be) is that occasionally getting admin privileges really helps. Some users have folder permissions that are all messed up, which breaks the app. If not sandboxed, SpamSieve can fix them pretty transparently. The alternatives are all pretty ugly.

"The fundamental problem is that the software industry, unlike every other type of product manufacturer, isn't held fully accountable whenever their product screws up. "

Yes and no. Sandboxing is not here to prevent an application from screwing up. It's there to prevent an application from doing what it's not supposed to do. If it's supposed to be able to update documents and an update screws the data, sandboxing won't help.

"Like I say, if software vendors had always carried the same legal liabilities as Ford or Merck, we wouldn't even be having these discussions today. It's a mess."

Are the following requirements supported by all Ford vehicles:

- require the driver to authorize any first-time passenger
- prevent passengers from littering
- prevent the car from being stolen and used by criminals to rob banks
- be bulletproof?

@Michael: There were two issues. One, I didn't bless /usr/bin/open with the appropriate entitlement. That was no big deal in itself. The other issue was that /private/tmp is disallowed by the sandbox, and Apple's reviewers said that directory would not be given an entitlement. Tcl does indeed write a temp file to that directory when opening a pipe or shelling out ("exec /usr/bin/open apple.com"). That's a bigger issue, from a technical standpoint. I can work around it in one instance via a wrapper for LaunchServices, but that's not a long-term solution. So, I'm a bit stuck from that standpoint.
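
For anyone curious, a wrapper of that sort might be a tiny Tcl extension written in C; the sketch below is only a guess at its shape (the [openurl] command name is mine, and the usual stubs/package boilerplate is omitted):

    /* Expose LSOpenCFURLRef to Tcl as [openurl], so a URL can be opened
     * without /usr/bin/open and without Tcl's /private/tmp temp file.
     * Build (roughly): cc -dynamiclib openurl.c -framework CoreServices \
     *     -framework CoreFoundation -ltcl -o libopenurl.dylib
     */
    #include <tcl.h>
    #include <CoreServices/CoreServices.h>

    static int
    OpenUrlCmd(ClientData cd, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])
    {
        if (objc != 2) {
            Tcl_WrongNumArgs(interp, 1, objv, "url");
            return TCL_ERROR;
        }

        CFStringRef s = CFStringCreateWithCString(kCFAllocatorDefault,
                                                  Tcl_GetString(objv[1]),
                                                  kCFStringEncodingUTF8);
        CFURLRef url = s ? CFURLCreateWithString(kCFAllocatorDefault, s, NULL) : NULL;
        if (s != NULL)
            CFRelease(s);
        if (url == NULL) {
            Tcl_SetResult(interp, "invalid URL", TCL_STATIC);
            return TCL_ERROR;
        }

        OSStatus err = LSOpenCFURLRef(url, NULL);   /* no shell, no temp file */
        CFRelease(url);
        if (err != noErr) {
            Tcl_SetObjResult(interp, Tcl_ObjPrintf("LSOpenCFURLRef failed: %d", (int)err));
            return TCL_ERROR;
        }
        return TCL_OK;
    }

    int
    Openurl_Init(Tcl_Interp *interp)
    {
        Tcl_CreateObjCommand(interp, "openurl", OpenUrlCmd, NULL, NULL);
        return TCL_OK;
    }

From Tcl, "openurl http://example.com/" would then replace the exec call, at least for this one case.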
