Tuesday, September 22, 2020

Scam Apps and Fleeceware

Dan Goodin:

> Posing as apps for entertainment, wallpaper images, or music downloads, some of the titles served intrusive ads even when an app wasn’t active. To prevent users from uninstalling them, the apps hid their icon, making it hard to identify where the ads were coming from. Other apps charged from $2 to $10 and generated revenue of more than $500,000, according to estimates from SensorTower, a smartphone-app intelligence service.

> The apps came to light after a girl found a profile on TikTok that was promoting what appeared to be an abusive app and reported it to Be Safe Online, a project in the Czech Republic that educates children about online safety. Acting on the tip, researchers from security firm Avast found 11 apps, for devices running both iOS and Android, that were engaged in similar scams.

> Many of the apps were promoted by one of three TikTok users, one of whom had more than 300,000 followers. A user on Instagram was also promoting the apps.

> […]

> Last month, researchers discovered more than 1,200 iPhone and iPad apps that were snooping on URL requests users made within an app. This violates the App Store’s terms of service.

Jagadeesh Chandraiah (in April, via Nick Heer, Slashdot):

> In this latest round of research, we found more than 30 apps we consider fleeceware in Apple’s official App Store.

> Many of these apps charge subscription rates like $30 per month or $9 per week after a 3- or 7-day trial period.

> […]

> Many of the fleeceware apps we see are advertised within the App Store as “free” apps, which puts the apps at odds with section 2.3.2 of the App Store Review Guidelines, which require developers to make sure their “app description, screenshots, and previews clearly indicate whether any featured items, levels, subscriptions, etc. require additional purchases.”

Since iOS already requires apps to be sandboxed, the real protective value of the App Store is that in theory it won’t contain these sorts of deceptive apps. But, for whatever reason, many of them seem to get through App Review and stay on the store for long periods of time.


Update (2020-09-28): Simeon:

> I’m baffled that Apple allows this. There are colouring books selling $600/yr subscriptions. They’ve tricked my parents who swore off paying for apps afterwards

> It’s doing obvious damage to customer trust in the App Store, and it’s bad for every developer’s business

Rosyna Keller:

> Despite what the article, headline, and lead graphic say, the source article clearly states the hidden app icons and full screen ads only apply to Android as Android allows apps to set those properties. iOS doesn’t.

15 Comments

> To prevent users from uninstalling them, the apps hid their icon

Score one for Apple not allowing apps to change their icon without explicit consent from the user, I guess.

(I still don’t like that apps can’t set their icon more dynamically. This is a lot less relevant with iOS 14 widgets, but… it’s just one of those little things that Apple gets to do with their Clock and Calendar icons that third parties can’t.)
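For what it’s worth, the limited form of this that does exist looks roughly like the sketch below, using UIKit’s setAlternateIconName API. “DarkIcon” is a made-up icon name that would have to be declared under CFBundleAlternateIcons in Info.plist, and iOS tells the user when the icon changes:

    import UIKit

    // Alternate icons must be declared ahead of time in Info.plist;
    // the system notifies the user whenever the icon is swapped.
    func switchToDarkIcon() {
        guard UIApplication.shared.supportsAlternateIcons else { return }
        UIApplication.shared.setAlternateIconName("DarkIcon") { error in
            if let error = error {
                print("Couldn't change icon: \(error)")
            }
        }
    }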

> Many of these apps charge subscription rates like $30 per month or $9 per week after a 3- or 7-day trial period.

You’d think a rate of $9/week would trigger some yellow flag at App Review.
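Back of the envelope, those rates annualize to real money (plain arithmetic, not tied to any particular app):

    // Annualizing the subscription rates quoted above (rough arithmetic;
    // real billing cycles round slightly differently).
    let weeklyRate = 9.0                      // USD per week
    let monthlyRate = 30.0                    // USD per month

    let weeklyAnnualized = weeklyRate * 52    // ≈ $468 per year
    let monthlyAnnualized = monthlyRate * 12  // = $360 per year

    print("$\(weeklyRate)/week ≈ $\(weeklyAnnualized)/year")
    print("$\(monthlyRate)/month = $\(monthlyAnnualized)/year")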

Issues like this are such a major disappointment - I’m all in favour of a walled garden as long as the garden is well tended.

Despite almost unlimited resources, Apple allow all manner of scams and fleecing to persist on the App Store, and the only conclusion to draw is that, because Apple make up to 30% from the scams, they are not incentivised to clean things up.

Which is such a shame and may undermine Apple’s defence against anti-competition allegations.

> But, for whatever reason, many of them seem to get through App Review and stay on the store for long periods of time.

But we all know the reason, don’t we?

Not driven by vision, not driven by quality, not driven by security...
Driven by spreadsheet.

>Since iOS already requires apps to be sandboxed

There is no actual sandboxing on iOS. iOS was not intended to run untrusted code, so Apple is blacklisting some stuff to try to prevent apps from breaking things, and calling that "sandboxing." This is not Java, or JS code running in a browser, where you have an actual sandbox that actually allows you to run untrusted code.

The reality is that Apple doesn't check the code of the apps in the App Store, so you're running untrusted code on a device that is not safe to run untrusted code. The App Store is actually making this worse by giving people a false sense of security.

The conclusion is always the same: Apple needs to be a lot more strict in what it allows into the App Store, and then allow sideloading for everything else. If people want to install scams on their iPhones, that's great, they should have the ability to do that (maybe they're security researchers, for example), but none of this stuff should be in the App Store.

(The logical conclusion from this is that the only apps actually suffering from sandboxing are legitimate apps.)

> There is no actual sandboxing on iOS. iOS was not intended to run untrusted code, so Apple is blacklisting some stuff to try to prevent apps from breaking things, and calling that “sandboxing.” This is not Java, or JS code running in a browser, where you have an actual sandbox that actually allows you to run untrusted code.

> The reality is that Apple doesn’t check the code of the apps in the App Store, so you’re running untrusted code on a device that is not safe to run untrusted code. The App Store is actually making this worse by giving people a false sense of security.

Of course iOS has sandboxing.

The App Store doesn’t just enforce rules at a social level (App Review) but also at a technical one (App Sandbox). Access to the file system, to IPC, to hardware, etc. is heavily restricted. You can’t, for example, leave out the prompt for Contacts access; if the user declines or you don’t ask, you simply can’t access that database at all because you’d never reach it in the file system.
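A rough sketch of what that looks like from the app’s side (standard Contacts framework usage; nothing here is specific to the scam apps):

    import Contacts

    let store = CNContactStore()
    store.requestAccess(for: .contacts) { granted, error in
        if granted {
            // Queries go through CNContactStore; the app never touches
            // the underlying database file directly.
            print("Contacts access granted")
        } else {
            // Declined (or never asked): there is no fallback, because the
            // database lives outside the app's sandbox container.
            print("Contacts access denied: \(String(describing: error))")
        }
    }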

There are entitlements to reduce restrictions, but Apple decides which ones it grants, and does so in a file that carries Apple’s digital signature, which the kernel verifies on launch.

(This isn’t a value judgment on whether sandboxing is good or not. I’m just confused why you’re so convinced iOS has no sandboxing.)

That's not sandboxing, that's blacklisting some APIs. You're obviously correct in the sense that Apple calls this "the sandbox", so there *is* something called "a sandbox" on iOS.

However, actual sandboxed platforms have a lot of features that iOS does not have. For example, Java doesn't just check a digital signature, it assumes that the compiler was malicious, and also verifies the actual bytecode before it is allowed to run (and of course, JavaScript isn't precompiled at all (in most cases), so that isn't a concern). There are a lot of things Java and JS do that Apple simply does not do. What Apple does is much closer to what Microsoft tried to do with ActiveX than to a proper sandbox: it's executing signed native code with some added security measures.

That went about as well for Microsoft as this is going for Apple.

A proper sandbox is designed to allow you to run untrusted code on your device. That's possible with JS and Java, because they have proper sandboxes, and were designed to be secure from the ground up. iOS was not.

That's why there are new zero-day security vulnerabilities where apps break out of the sandbox on iOS on a regular basis: it's not a proper sandbox. iOS wasn't designed to run sandboxed apps, and you can't just add a bunch of entitlements to specific system calls to get the same outcome.

Maybe at some point in the future, iOS will actually have a proper sandbox. But today, you can't run untrusted code on iOS. Sadly, the App Store is full of code you really shouldn't trust.

> That’s not sandboxing, that’s blacklisting some APIs.

Which is enforced through a sandbox. Any file system, network, IPC, etc. call goes through it.
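A minimal illustration of that enforcement (the path is just a hypothetical example of data outside the app’s container; the exact error varies by iOS version):

    import Foundation

    // Try to read another app's data directly, bypassing the public APIs.
    // In a sandboxed iOS app this is refused at the kernel level, no matter
    // what the app's own code does.
    let smsDatabase = URL(fileURLWithPath: "/var/mobile/Library/SMS/sms.db")
    do {
        let data = try Data(contentsOf: smsDatabase)
        print("Read \(data.count) bytes")          // not reached in a sandboxed app
    } catch {
        print("Blocked by the sandbox: \(error)")  // e.g. "Operation not permitted"
    }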

> For example, Java doesn’t just check a digital signature, it assumes that the compiler was malicious, and also verifies the actual bytecode before it is allowed to run (and of course, JavaScript isn’t precompiled at all (in most cases), so that isn’t a concern).

JS is typically JIT-compiled now, just like Java. You’re right that Cocoa Touch doesn’t have a notion like code access security, but it doesn’t need one, since the lower levels already enforce it. That kind of feature is now deprecated in .NET as well, because enforcing it at the kernel, like iOS does, is more effective. I don’t know about Java.

> Maybe at some point in the future, iOS will actually have a proper sandbox. But today, you can’t run untrusted code on iOS. Sadly, the App Store is full of code you really shouldn’t trust.

I’m not sure what scenario you’re envisioning here, but literally the goal is to not run untrusted code.

If anything, iOS’s sandbox is too tight.

>I’m not sure what scenario you’re envisioning here,
>but literally the goal is to not run untrusted code.

If you don't run untrusted code, you don't need a sandbox. The purpose of a sandbox is to allow you to run untrusted code, because it's restricted by the sandbox. Given that Apple's "sandbox" can't be trusted to run untrusted code, it's not an actual sandbox. And given that 99% of the code in the App Store should be considered untrusted code, Apple has created a huge problem for its users.

>JS is typically JIT-compiled now, just like Java

BTW, this is irrelevant to the discussion, but you were missing my point. Java needs to evaluate the bytecode because somebody other than the entity executing the code compiled it. With JavaScript, the entity running the code also compiles it, so there's no reason to assume a malicious compiler.

Just saying "Apple's sandbox doesn't need to evaluate the compiled code since lower levels are secure enough" is the exact problem. That's not how you create secure systems. Secure systems are multi-layered, where an attacker has to defeat more than just one layer of security.

> If you don’t run untrusted code, you don’t need a sandbox.

Fair enough.

> Given that Apple’s “sandbox” can’t be trusted to run untrusted code

I still don’t know why you think so.

> BTW, this is irrelevant to the discussion, but you were missing my point. Java needs to evaluate the bytecode because somebody other than the entity executing the code compiled it. With JavaScript, the entity running the code also compiles it, so there’s no reason to assume a malicious compiler.

Right, but Java’s sandbox has nothing to do with malicious compilers. It has exactly the same goal as Apple’s sandbox: to restrict access to system resources, such as the file system, devices, networking, etc.

> Just saying “Apple’s sandbox doesn’t need to evaluate the compiled code since lower levels are secure enough” is the exact problem.

Java’s sandbox doesn’t “evaluate the compiled code”. It restricts which APIs can be called, which is exactly how Apple’s sandbox works as well, only at a lower level.

> That’s not how you create secure systems. Secure systems are multi-layered, where an attacker has to defeat more than just one layer of security.

Yes, and on iOS, you have to defeat a lot of layers.

>>Given that Apple’s “sandbox” can’t be trusted to run untrusted code
>I still don’t know why you think so.

Because there are zero-day security vulnerabilities where apps break out of the sandbox on iOS on a regular basis.

> but Java’s sandbox has nothing to do with malicious compilers

Yes, it does. The bytecode verifier is part of Java's sandbox. That's why I'm saying Apple doesn't really have a sandbox: Apple's "sandbox" doesn't do the things that other things we call "sandboxes" do.

>Java’s sandbox doesn’t “evaluate the compiled code”

That's just not true. It does. That's what the bytecode verifier, which is part of Java's sandbox, does.

>Yes, and on iOS, you have to defeat a lot of layers.

Again, that's just not true. Most zero-days seem to involve exactly one security vulnerability. Usually, the security vulnerabilities are so trivial that it feels like a six-year-old could have come up with them, stuff like "XML in some plist has incorrect syntax, which gives an app arbitrary permissions."

If your "sandbox" allows for stuff like that, you need to stop calling it a sandbox.

I'm just reading this back and forth between Lukas and Soren (sorry, no umlaut on this keyboard), and it's likely these arguments are going on inside the donut as well. Of course, they do NOTHING to address the real point: yes, the iOS "shocker" apps "run in the sandbox" using "trusted code", but that's not the issue.

"Dark UX" or outright fraudulent claims that cannot be "caught" by automation are. Apple has a choice. Abandon all the other things they "police" and "curate" and police for FRAUD ONLY or just keep doing what they're doing, keep the FRAUD and collect 30% on those "US$8 shocks".

And Rosyna should be ashamed of that cope; it's not keeping FRAUD out of the App Store, and if she is on the security team, mitigating FRAUD is part of her job. I shouldn't even have to say this, but Apple seems to have more NIH ("Not Invented Here") than Microsoft ever had.

Rosyna is not a "she", fyi.
