Have you ever edited a Wikipedia page? Ten years ago, I contributed to a few articles, mostly the one about Martin Hannett, the music producer known for his work with Joy Division; some of my changes are still there. At the time, Wikipedia was a Wild West of editorials and page-bombs. One of my friends from college made fun of it for its Shirley Temple page, which apparently was vandalized all the time back then.
In 2015, Wikipedia is much more tightly controlled. Tons of pages have the padlock icon on them, and the rules for top editors are extensive. Still, some of that early-day idiocy is intermingled with the recent seriousness. Not long ago, I tried to modify the page for the dance-pop band Chromatics, only to have my edits shot down with the helpful comment “Fail.”
Whatever; that barely registers in the landscape of Wikipedia edit wars. I remember Eric Clapton’s page once having, in its first line, his full name followed by the parenthetical “(Not Clapp!)” Pages like the one on criticism of Islam have obviously been extensively edited, with defensive assertions that could probably never be overwritten, no matter the evidence.
Given its domain authority and Google’s increasing reliance on it, Wikipedia was always bound to attract a full range of writers and editors. Unfortunately, that crowd includes grammar pedants.
Today there was an article about someone who had made 47,000 edits to English-language Wikipedia pages just to go after the phrase “comprised of,” which is grammatically “wrong” because the “of” isn’t needed and “consists of” conveys the intended meaning. Not only is this individual – a software engineer named Bryan Henderson – a grammar stickler, but he also has an entire hour-long weekly workflow set aside for the task of removing all instances of “comprised of,” aided by a script he wrote so that he can comb through millions of pages.
This all seems so tiresome. Stephen Fry once narrated a remarkable video demolishing grammar pedants (also known as “grammar hounds” or “grammar nazis”). He bemoaned the joyless relationship that these individuals had with language, how they were essentially caught up in low-stakes, narcissistic tiffs rather than in any sort of linguistic expression or innovation. He pulled up a great quote from Oscar Wilde, from a note sent by Wilde to his editor:
“I’ll leave you to tidy up the woulds and shoulds, the wills and shalls, thats and whichs.”
Omg, Oscar Wilde made grammatical errors, right? Writers like William Shakespeare, were they living today, would also have to endure the idiocy of grammar pedants, who would seize upon lines like “There’s daggers in men’s smiles” as a crisis of interpretation. It seems that these pedants miss three things:
- Language changes all the time. Shakespeare turned “table” and “chair” into verbs, as in “tabling the motion” and “chairing the meeting.” Even words like “cockamamie” are the results of decades of accidental mutation, informed by mispronunciation. Language is not constant and doesn’t even evolve in a predictable way.
- Intent matters. It’s one thing to have no idea what someone is talking about because she chose the wrong words (or, more likely, the wrong syntax). It’s another to sanctimoniously seize upon the subtle difference between “comprised” and “consists” and try to kill off this line of linguistic change, when it’s clear what was meant and the “error” was on par with any of the many made in daily conversation.
- There is just no relationship between command of grammar and ability to express oneself well in writing. If someone threw in a “None of these are relevant” into an otherwise good paper, I wouldn’t even care. Plus, who is being hurt by “incorrect” usage of language?
Solutionism and the control of language
The other issue I had with this article – and with the apparently legendary status that this one editor has achieved in the Wikipedia community – was the effect of his script on the otherwise (and perhaps still) tedious task of going after all the “comprised of”s in the world. Doing all of this by hand, hunting through each article, would be onerous, which is good! It would discourage people from doing stupid things like, for example, going after instances of “comprised of.”
Scripts, though, make it almost trivial to put out all these harmless fires as they crop up. This is the dark side of software. While it makes some tiresome tasks much easier, to everyone’s benefit (like searching for a recipe), it also makes others easy, to almost everyone’s detriment, like enforcing humorless hobbyist rules about grammar. In these cases, it provides a solution to a problem that doesn’t exist, but no one seems to realize the mismatch, since software is always connoted as having the answer to something – a sort of solutionism.
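To make the point concrete, here is a hypothetical sketch – not Henderson’s actual tool, just an illustration of how little code the hobby requires – that flags every “comprised of” in a chunk of text:

```python
import re

def find_comprised_of(text):
    """Return (offset, surrounding snippet) for each 'comprised of' in text."""
    hits = []
    for m in re.finditer(r"\bcomprised of\b", text, flags=re.IGNORECASE):
        # Grab a little context around the match so a human can review it.
        start = max(0, m.start() - 20)
        end = min(len(text), m.end() + 20)
        hits.append((m.start(), text[start:end]))
    return hits

# Example: one flagged phrase in a sample sentence.
for offset, snippet in find_comprised_of("The band is comprised of four members."):
    print(offset, snippet)
```

Point a loop like that at a database dump and the “hunting” part of the pastime disappears entirely; only the clicking remains.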
Brief digression: One of the things I always liked about English, compared to, say, French, was its flexibility and lack of a central controlling authority. The language can evolve based on what people actually say, rather than what some academy, whose members’ qualifications may have no overlap with creative or even practical usage of language, prescribes.
Grammar pedants don’t appreciate the organic character of language, which doesn’t surprise me; words, which have the power to reframe conceptions about society (think of the difference between using “the Internet of Things” and, say, “Surveillance Society” to refer to the same phenomenon), are shoehorned by pedants into being predictable tools for control.
The fact that résumés are tossed out for typos or grammatical mistakes, and that Internet bandwidth is consumed by someone picking out instances of “comprised of,” is a tragedy and a byproduct of a society that hasn’t been widely educated in “useless” subjects like English or literature. Language is forced to play by the right/wrong rules and binaries of fields like computer science, and as such is diminished. Next time, just let that “comprised of,” “really unique,” or “10 times or less” go.
Microsoft has updated Bing so that it now pushes Klout results to the top of many of its results pages. Ostensibly, this is a move to provide better content and to keep pace with Google’s own efforts at integrating Google+ results into Google Search. It also squares with Microsoft’s generally aggressive commitment to social search, which can be glimpsed in its relationship with Facebook, and in Facebook’s Graph Search functionality in particular.
“Microsoft believes that content is so powerful that it almost doesn’t matter whether Klout’s ‘experts’ actually have any real expertise. If enough Klout users vote up an answer, it will still likely be a worthwhile addition to Bing results, Ripsher said.”
If one had any doubts about the internet’s objectivity or its “openness” (to use another overused adjective), then this peculiar development should allay them.
“The internet” is often characterized as an almost untouchable, coherent, self-contained system that can provide definitive knowledge and answers. The rise of, and insane hype around, services like Quora and Klout are the current symptoms of this characterization, although it actually began long ago, with Google and Wikipedia becoming the go-to resources for queries (at least for relatively well-off internet users, a small portion of humanity), and with social networks then becoming echo chambers and, in effect, new realities for their respective users. As I have mentioned before, onlookers who regard these services in these ways seem to overlook the fact that the internet is actually a manmade thing, not a law of physics or a deity.
On the contrary, the sheer volume of information available through all of these channels has in turn led to the internet becoming, for many commentators, akin to the burning bush on Mt. Sinai, able to dictate authoritative wisdom at will – although it arguably one-ups even God’s favorite flaming plant, since much of that wisdom is “crowdsourced,” too. Now, the so-called crowdsourced structure of many online services – Google’s collection and subsequent application of user data, Wikipedia’s group editing, Reddit’s upvote/downvote system – is a hopeful development not because of the veracity of its content but because it, at the very least, shows that there are human agents who drive the internet, rather than some unstoppable, robotic force of nature that we often vaguely call “the internet.”
So how is it that crowdsourcing intersects so snugly with the prevalent narrative of a self-driven internet? How is it that search engines (the clearest, most obvious metaphors for a wisdom-producing computer from, say, Star Trek – yet another debt that tech owes to imagination and the liberal arts) are now, in many cases, conduits for social networks and other crowdsourced news? I don’t think it’s odd at all, actually, since it confirms that the internet, as a source of knowledge or truth, is just as subjective and contingent on human inputs as anything else. I mean, let’s look at some of the major drivers of internet content:
- Google: uses proprietary algorithms and integration with proprietary social networks (most notably G+). Its results system can be gamed or “bombed” to promote certain results. All of this despite its promotion of “openness.”
- Twitter: a proprietary social network that suggests certain celebrities or popular users to follow, primarily because said persons are the best evangelists for Twitter itself (as a tool/service).
- Klout: dependent on mostly amateur “expertise” and opinion, as noted above by The Verge.
So Microsoft is hardly putting anyone or anything newly “under the influence” of amateurs. The entire internet is built around these types of subjectivity that inevitably result from human input and tinkering.
-The ScreenGrab Team