Friday, August 18, 2017

Gab App Rejected by Google (and Apple)

Timothy B. Lee:

On Thursday, Gab said that Google had banned its Android app from the Google Play Store for violating Google’s ban on hate speech.

Google’s e-mail doesn’t explain how Gab violated Google’s rules, and the company’s policy on the topic isn’t very specific. It says only that “We don’t allow apps that advocate against groups of people based on their race or ethnic origin, religion, disability, gender, age, nationality, veteran status, sexual orientation, or gender identity.”

[…]

Google is following in Apple’s footsteps here. Apple has long had more restrictive app store policies, and it originally rejected the Gab app for allowing pornographic content to be posted on the service—despite the fact that hardcore pornography is readily available on Twitter. In a second rejection, Apple faulted the app for containing content that was “defamatory or mean-spirited”—Apple’s version of the hate speech rule.

My understanding is that Gab is a social network/platform, with all the content/speech/advocacy generated by users. So these rejections don’t make much sense to me. I don’t think an app developer should be held responsible for the content of messages that the app’s users send to one another.

And look at the larger context of apps that are approved. Twitter, Facebook, Reddit, etc. have policies to restrict certain kinds of content, but in practice all manner of violations and abuse still occur. Those apps are not in any danger of being rejected. How are they different from Gab, or Micro.blog, other than being too big to ban?

There are also Web browser, e-mail, RSS, and podcast apps that provide access to totally unfiltered content—including from Gab’s site. And there are secure messaging apps that only transmit opaque encrypted content.

Banning apps based on user content seems to only inconvenience users without really protecting them, anyway.

Update (2017-08-18): Google clarified:

In an email to Ars, Google explained its decision to remove Gab from the Play Store: “In order to be on the Play Store, social networking apps need to demonstrate a sufficient level of moderation, including for content that encourages violence and advocates hate against groups of people. This is a long-standing rule and clearly stated in our developer policies.”

Apple’s guidelines say:

Apps with user-generated content present particular challenges, ranging from intellectual property infringement to anonymous bullying. To prevent abuse, apps with user-generated content or social networking services must include:

  • A method for filtering objectionable material from being posted to the app
  • A mechanism to report offensive content and timely responses to concerns
  • The ability to block abusive users from the service
  • Published contact information so users can easily reach you

Apps with user-generated content or services that end up being used primarily for pornographic content, objectification of real people (e.g. “hot-or-not” voting), making physical threats, or bullying do not belong on the App Store and may be removed without notice. If your app includes user-generated content from a web-based service, it may display incidental mature “NSFW” content, provided that the content is hidden by default and only displayed when the user turns it on via your website.

App Review listed some additional requirements:

[We] require that you act on objectionable content reports within 24 hours by removing the content and ejecting the user who provided the offending content.

This is rather vague. Who determines what’s objectionable? How many reports do you need to get before ejecting the user? How does Apple determine whether you are getting reports or acting on them? Obviously, there is lots of objectionable content on other social networks that remains for more than 24 hours.

Phil Schiller says:

“The App Store does not review or reject apps for political views. The App Store will not distribute apps designed to promote racism, pornography, or hatred.”

I’m not sure what “designed to promote” means here, since those things seem to be prohibited by Gab’s guidelines, which are not as different from Twitter’s as I expected.

Gab’s mission is to put people and free speech first. We believe that the only valid form of censorship is an individual’s own choice to opt-out. Gab empowers users to filter and remove unwanted followers, words, phrases, and topics they do not want to see in their feeds. However, we do take steps to protect ourselves and our users from illegal activity, spam, and abuse.

[…]

Users are prohibited from calling for the acts of violence against others, promoting or engaging in self-harm, and/or acts of cruelty, threatening language or behaviour that clearly, directly and incontrovertibly infringes on the safety of another user or individual(s).

Gab said in January that they try to enforce the guidelines but that it’s physically impossible to act within 24 hours or to know about violations that users don’t flag.

However, a little later Andrew Torba seemed to change their stance:

We refuse to “eject users” who post “offending content.”

And that makes Gab clearly in violation of Apple’s (unwritten) rules.

Update (2017-08-28): Alex Tabarrok:

I also fear that Google and Apple haven’t thought very far down the game tree. One of the arguments for leaving the meta-platforms alone is that they are facially neutral with respect to content. But if Google and Apple are explicitly exercising their power over speech on moral and political grounds then they open themselves up to regulation. If code is law then don’t be surprised when the legislators demand to write the code.

14 Comments

It's because Gab exists specifically to facilitate hate speech. Pretty sure it got its name from this: https://en.wikipedia.org/wiki/German_American_Bund

It's political

These rejections are here because it's a Twitter alternative. During this past week, we have witnessed unprecedented efforts by tech companies to mass de-platform and censor. Gab.ai is an alternative focused on free speech. Thus Google is colluding with their friends on Twitter to kill competition, as well as help maintain Silicon Valley's control over what is acceptable online discourse.

"These rejections are here because it's a Twitter alternative ... Thus Google is colluding with their friends on Twitter to kill competition"

This, of course, is complete nonsense. Google and Apple don't care if there is a Twitter alternative out there. It's utter gibberish to say this about trying to "kill competition".

But you are correct that "It's political", in the broadest sense of the term. Google is doing the exact same political thing here as they have been for a while when they ban or de-monetize Jihadi or Nazi people on YouTube. That's political too. That's Google making decisions on "what is the acceptable online discourse".

Now, we can debate both the effectiveness of these measures, as Michael does. And we can also debate the potential slippery slope of tech companies getting beyond de-platforming explicit violent hate groups like Jihadis and Nazis to de-platforming other types of political expression. But as long as tech companies don't go down that slippery slope, and remain focused on de-platforming Jihadis and Nazis, I don't think sane folks are going to complain too much.

> Thus Google is colluding with their friends on Twitter to kill competition

If your conspiracy theory involves Google trying to protect Twitter, you've made some kind of fundamental error in thinking along the way.

> My understanding is that Gab is a social network/platform, with all the
> content/speech/advocacy generated by users

Gab's freaking LOGO is Pepe the Frog. You really can't get much clearer than that. The Anti-Defamation League has it in their database as a hate symbol. It's used by mom's basement internet Nazis as a racist meme. There's no ambiguity here. The only problem is that it has taken Google this long to remove the app from its store.

@Chucky @Lukas I definitely agree that Google and Apple don’t care about protecting Twitter.

I don’t really understand this whole frog meme thing, how it became a Nazi symbol, or whether the people spreading it are on the same page about what it means. Officially, Gab says there’s no relation between their logo and Pepe. I don’t think I had even heard of Pepe when Gab was released a year or so ago. But clearly a lot of people now believe there is, so if they didn’t want that association they should have changed their logo. How can we tell if they’re trying to be a Nazi social network or if the Nazis are just going there because they’re getting banned from Twitter? Does it matter?

And should Apple and Google be judging apps based on the people building/marketing/using them rather than on the actual code/service? What if they fire the management team or sell the code or open source it and someone else uses it to run a social network? Should essentially the same binary then be accepted?

> or whether the people spreading it are on the same page about what it means

They're either Nazis or ironic Nazis (which, to a random observer, is the same thing). I haven't seen anyone use it without knowing what it means in about two years now.

> Officially, Gab says there’s no relation between their logo and Pepe

That's cute. Of course, if you go to any company and point out that their logo is a Nazi symbol, and it is actually a genuine mistake on their part, they change the logo. So that puts that to rest.

>How can we tell if they’re trying to be a Nazi social network or if the Nazis are
>just going there because they’re getting banned from Twitter?

Gab was founded with the explicit mission to provide a safe space for the kinds of right-wing hate speech that Twitter won't allow.

>Does it matter?

I don't think so. At some point, if it walks like a duck, swims like a duck, and quacks like a duck, it doesn't matter whether its innermost intentions are to be a duck, it just is a duck.

>And should Apple and Google be judging apps based on the people building/
>marketing/using them rather than on the actual code/service?

Yes.

On Google's side, I think there is zero problem at all. Google should ban whatever company they don't like for whatever reason they want (perhaps excluding some edge cases, e.g. when banning an app is a potential antitrust violation). In fact, they should probably ban about 99% of the crap that's currently in their store, just because it's crap. Since you can sideload apps on Android, the fewer things there are in the store, the better. The user experience of using a store where you know that 100% of the things you see adhere to some basic level of quality is infinitely better than what we get today on these marketplaces.

For Apple, it's a bit of a different story, because keeping an app out of the store effectively means preventing that app from being allowed to exist at all. Now, in this particular case, I don't have a problem with that. There are some ideas that just don't have a single valid reason for existing. There is no reason in the world why a private company that employs people from all over the world, and sells its products to people all over the world, should have to allow Nazis to advertise their destructive ideas on these companies' services.

And, again, I don't think there's any ambiguity here about what Gab actually is.

I understand that this looks like a slippery slope, and that there are a lot of useful, valid apps in the app store that seem to fall into a similar category as Gab. I also think that Apple should have a very high threshold for banning apps, because removing an app from the store means effectively killing that app. And I agree that most of the apps that Apple bans should actually not be banned. But I don't think banning this particular app is moving us down that slippery slope at all.

I think this is a clear case where banning the app was the right thing to do.

When Apple bans an app where there *is* any ambiguity about its purpose or use, I'm all for complaining about it. This just isn't one of those situations.

"I don’t really understand this whole frog meme thing, how it became a Nazi symbol, or whether the people spreading it are on the same page about what it means. Officially, Gab says there’s no relation between their logo and Pepe. I don’t think I had even heard of Pepe when Gab was released a year or so ago."

FWIW, the whole Pepe as a Nazi/racist meme was in full bloom when Gab started, even if you might not have noticed it if you weren't following the campaign semi-closely. When you look at Gab's situation as a whole, I think it's essentially impossible to believe Gab didn't choose their logo deliberately for that reason.

-----

"And should Apple and Google be judging apps based on the people building/marketing/using them rather than on the actual code/service?"

In your update, you note about Schiller's statement: "I’m not sure what “designed to promote” means here."

"Designed to promote" is an explicit admission that the intent of the people building/marketing the apps matters, regardless of how Apple phrases their official guidelines.

Finally, do you remember Apple's ban of the app that did nothing but issue an alert when the US acknowledged a drone strike? The app's creator never got a coherent answer from Apple on why it was banned. But he cited in interviews the only thing in Apple's official guidelines he could find that could conceivably form a basis for the ban:

We will reject Apps for any content or behavior that we believe is over the line. What line, you ask? Well, as a Supreme Court Justice once said, “I’ll know it when I see it.” And we think that you will also know it when you cross it.

@Lukas I guess my main complaint with Apple here is that they don’t apply their rules equally. (Secondary complaint is that I’m not sure all the rules make sense.) If their problem is that Gab is a uniquely toxic app (e.g. the Daily Stormer/Cloudflare case), then just ban it for that reason and say so. Don’t keep tweaking the rules so that it’s always in violation and then ignore those rules for other apps.

I would personally prefer it if Apple took iOS in a direction like you say. Make the App Store much more restrictive so that it has a much smaller selection of much higher quality stuff, and then add sideloading.

@Chucky Yes, that’s a great example. I’m not sure I agree about the drone app being over the line. But I think it would be better to just call out the few apps as over the line rather than pervert the rules.

"Yes, that’s a great example. I’m not sure I agree about the drone app being over the line."

I certainly don't think the drone app is over the line. It's an app by an American civil libertarian that merely sends alerts of information publicly released by the US government. The only intent of the app that Apple could disagree with is its (rather oblique) attempt to criticize/embarrass US foreign policy using US government public information. It's a deliberate rejection of political speech that has nothing to do with promoting hatred, racism, violence, or any of the related ideas that Apple claims to reject political apps for.

I agree with Lukas that banning Gab is a pretty easy call, if you're ever going to ban anything at all for political speech. There's a reasonably solid consensus behind the non-desirability of Nazi/racist/Jihadi apps. But the drone app to me is an example of why any discussion of a "slippery slope" around Gab is nonsensical. Apple has already slid down that slippery slope.

"But I think it would be better to just call out the few apps as over the line rather than pervert the rules."

On one hand, sure. But from Cupertino's POV, the ambiguity of the rules has always been their greatest asset. And given the literal "I’ll know it when I see it" rule I quoted, they're not really perverting their rules. Their rules explicitly codify arbitrariness and subjectivity. I suppose they could actually cite the "I’ll know it when I see it" rule whenever they ban something like Gab, given that it's the genuine reason. But from a PR standpoint I think that would look far worse for them, even if it would be more honest.

@Chucky It’s definitely perverting the rules to use “I know it when I see it” but then cite a series of different rules as the reason. You’re probably right about the PR.

> But I think it would be better to just call out the few apps as over the line rather than pervert the rules.

I agree with that. I think it's a fallacy to believe that there can be a set of rules that perfectly govern what should be in the store and what should not be, if only because this is an adversarial situation. The people who want to put "bad" apps into Apple's store know what the rules are, so they can work around these rules, and create "bad" apps that don't violate the rules. Hence, the rules are never going to be sufficient for detecting and preventing "bad" apps.

It would be much more honest for Apple to just stick with the simple "this is our store, and we ban things we think are over the line" approach.

I think this also leads to better discussions about app bans. If Apple pretends that it's following a set of rules, the discussion inevitably becomes what these rules should be, and whether a specific app really violates these rules, which is not a useful discussion. So instead of discussing the value of something like the drone app, we're discussing which specific rule it might have violated. That's not useful. It is much more productive to talk about the actual app, and why it should be allowed in the store; doing this probably also puts more pressure on Apple to reevaluate its decisions.

And then, maybe, this would also put more emphasis on the idea that, since Apple can make arbitrary decisions about what goes into the App Store, maybe it's kind of harmful to not have any kind of official support for sideloading.

"I think this also leads to better discussions about app bans. If Apple pretends that it's following a set of rules, the discussion inevitably becomes what these rules should be, and whether a specific app really violates these rules, which is not a useful discussion. So instead of discussing the value of something like the drone app, we're discussing which specific rule it might have violated. That's not useful. It is much more productive to talk about the actual app, and why it should be allowed in the store; doing this probably also puts more pressure on Apple to reevaluate its decisions."

Which is precisely why Apple has obfuscated, and will continue to obfuscate, about its reasons for banning apps…

