Dick Costolo, CEO of Twitter, made headlines this week after admitting in an internal memo that "we suck at dealing with abuse and trolls". While Twitter is vague about how the platform is currently moderated, it seems that much of the removal of questionable content happens because users report it, rather than because Twitter employees actively seek out abusive content and remove it.
Google has a similar fear of policing their content. Famously, they refused to take down a racist graphic of Michelle Obama in 2009, adding a message to the results page explaining that they wouldn't take down a link "simply because its content is unpopular or because we receive complaints concerning it". They went on to point to their algorithms (including a nerdy boast about the "thousands of factors" they make use of), which account for the "integrity" of their search results.
Twitter's own "rules" are explicit about their stance on this theme:
We respect the ownership of the content that users share and each user is responsible for the content he or she provides. Because of these principles, we do not actively monitor and will not censor user content, except in limited circumstances described below.
The "limited circumstances" they mention are mostly around fake accounts and legal concerns, although there is one mention of "direct, specific threats against others" (which, it seems, is up to users to report or "local authorities" to bring to Twitter directly). Their use of "censor" speaks volumes about their current take on what it would mean to actively moderate the stream.
There's a brief parallel here with the web development community. Last year, Eric Meyer, a prominent web developer and industry figure, lost his six-year-old daughter Rebecca to a brain tumour. In response, a group of developers and friends proposed adding "rebeccapurple" (a colour she loved) as a named alias for the hex colour #663399, a tribute to her life (and his work). The change was accepted, but not without controversy: in several technical discussions, people argued against it:
I don't see why a project like Mozilla would allow an emotional response to work its way into actual code, and I think it's dangerous precedent. We don't need in-jokes, easter eggs, or memorials in established web standards, and I think it's irresponsible to suggest that this is the right course of action. – Jay P
For me, this comes down to the same thing as Google's refusal to overrule the all-powerful algorithm or Twitter's clear desire to avoid the messy business of hiring humans to police other humans. These are technology companies (or technical standards): there's an almost outright fear that any involvement of actual people making emotional, "logic-free" decisions is an admission that the whole system is imperfect and unworkable.
Why are we so afraid of the things that make us human accidentally making their way into our systems? As if the world we live in isn't already shaped by humans and our emotions: in its words, its social conventions, its standardisation and its measurements?
Maybe Dick Costolo will reconsider the claim that removing questionable content is "censoring" users. He might reflect instead that he has the power to shape the experiences of an entire social group, users of the web, and that allowing human emotion, empathy and compassion to inform that shaping might in fact be to the group's benefit, rather than passing the buck to computer code.
And maybe we're too complicated for mere algorithms, no matter how "integral" they are.