But rather than giving apps a thumbs up or thumbs down based solely on technical quality – is it safe? will it steal your info? does it work well? – Apple goes a step further. Its guidelines don’t allow apps to be “mean-spirited” or show gratuitous violence (“realistic images of people or animals being killed or maimed”). No boobs. In other words, there’s a sense of morality to the App Store: fairly definite ideas of right and wrong. Which isn’t terribly surprising, if you’ve heard Steve Jobs talk about offering the world “freedom from porn”.
Part of that is simply good business: Apple doesn’t want people to perceive the App Store as a seedy place (which the Android Market kind of seems like sometimes!), or a place where kids can get their hands on stuff they shouldn’t. It’s family friendly, mostly. (Apps that could lead to bad stuff, like browsers, carry 17+ warnings and can be blocked via parental controls.) And it keeps regulators and congressmen off Apple’s back, at least in this respect.
But by making the approval of every app in effect a moral decision – making itself an arbiter of what’s good or bad – Apple occasionally places itself in an awkward position. Like, for instance, when Exodus International, one of those ministries that promotes “gay cures”, released an iPhone app. People complained that it was offensive (I don’t like it myself), and Apple removed it, saying it violated the “developer guidelines by being offensive to large groups of people”. Just last year, Apple repeatedly rejected the app Gay New York: 101 Can’t-Miss Places – basically a gay sightseeing app. The creator found Apple’s rejection of the “PG-13” app to be “homophobic and discriminatory to the point of hostile”, since, he claims, “far racier photographic material is routinely available on other apps”.
Meanwhile, a handful of senators are calling on Apple to pull apps that allow users to self-report DUI (and speeding and other law enforcement) checkpoints, like Trapster and FuzzAlert. Which, on the face of it, sounds almost like a no-brainer: the apps facilitate breaking the law. On the other hand, the data is entirely user reported. Somebody could tweet all of it, theoretically. Should Apple take down apps composed entirely of user-generated data? Both of those apps are still in the App Store.
Who’s to say where it ends? Everybody finds something offensive, and everything offends somebody. What if a large contingent of people protested a Planned Parenthood app? Or, conversely, an app that listed Planned Parenthood clinics for people to better organize protests? Would Apple pull either of those?
Apple threw out the South Park app years ago for the same reason: it was deemed offensive.
I’d like to argue that Apple shouldn’t make any moral judgments – or very, very few, in very obvious cases, like legit hardcore pornography or clearly illegal stuff – when it comes to apps. As horrible as I think Exodus International may be, I don’t like that Apple pulled its app from the App Store, nearly as much as I dislike the fact that there still isn’t a South Park app for the same reason. It seems like the only fair way to make everybody equally happy (or unhappy) is to not make those kinds of judgments. Or at least not make them after the fact, which feels disingenuous – why approve the “gay cure” app in the first place? (Conversely, it’s a fair point that Apple does respond to criticism from time to time – like in this case, or reversing its censorship of an illustration of Ulysses.)
Obviously, the App Store is Apple’s world. They rule it, and can pull or approve whatever they want. They’ll keep selling tons of apps regardless of what they remove or don’t remove, or what some people find offensive. There’s little consequence for Apple in the end. But as long as Apple makes itself the arbiter of App Store morality, or concerns itself with what one group finds offensive versus another, this stuff will never stop.