There’s been a recent surge in attention given to a relatively obscure British journalist’s thoughts on headline writing. “Betteridge’s Law” is the informal term for the argument that any (usually technology-related) headline that ends in a question mark can be answered “no.” Betteridge made his original argument in response to a TechCrunch article entitled “Did Last.fm Just Hand Over User Listening Data to the RIAA?”
The reason so many of these rhetorical questions can be answered “no” comes from their shared reliance on flimsy evidence and/or rumor. The TechCrunch piece in question ignited controversy and resulted in a slew of vehement denials from Last.fm, none of which TechCrunch was able to rebut with actual evidence. John Gruber also recently snagged a prime example in The Verge’s review of Fanhattan’s new set-top TV box, entitled “Fan TV revealed: is this the set-top box we’ve been waiting for?”
So we know what Betteridge’s Law cases look like in terms of their headlines, which feature overzealous rhetorical questions. But what sorts of stylistic traits unite the bodies of these articles? Moreover, why do journalists use this cheap trick (other than to garner page-views and lengthen their comments sections), and what types of arguments and rhetoric do they employ in following up their question? I am guilty of writing a Betteridge headline in my own “Mailbox for Android: Will Anyone Care?,” which isn’t my strongest piece, so I’ll try to synthesize my own motivations in writing that article with trends I’ve noticed in another recent article that used a Betteridge headline, entitled “With Big Bucks Chasing Big Data, Will Consumers Get a Cut?”
Most visibly, Betteridge’s Law cases employ numerous hedges, qualifiers, and ill-defined terms, some of which are often set off by italics or scare quotes. By their nature, they’re almost invariably concerned with the future, which explains the feigned confusion inherent in the question they pose. That is, they act unsure, but they have an argument to make (and maybe even a prediction). Nevertheless, they have to hedge on account of the future not having happened yet (the “predictions are hard, especially about the future” syndrome), or, similarly, resort to conditional statements.
I did this near the end of my Mailbox article, saying “This isn’t a critical problem yet, or at least for as long as Google makes quality apps and services that it doesn’t kill off abruptly, but it will make life hard for the likes of Mailbox and Dropbox.” My “yet” is a hedge, and my “it will” is the prediction I’m trying to use to establish more credibility. In The Verge article linked to by Gruber, the authors say “IPTV — live television delivered over the internet — is in its infancy,” strengthen that with “Meanwhile, competition for the living room is as fierce as it has ever been,” and then feebly try to make sense of it all by saying “At the same time, if it matches the experience shown in today’s demos, Fan TV could win plenty of converts.”
Delving into the aforementioned article about “big data,” we find similarly representative text:
- “You probably won’t get rich, but it’s possible”
- “But there’s a long road ahead before that’s settled”
- “Others aren’t so sure a new market for personal data will catch on everywhere”
- “not as much is known about these consumers”
- “That’s a big change from the way things have worked so far in the Internet economy, particularly in the First World.”
- “big data”
This headline is really a grand slam for Betteridge’s Law. Simply answering “no” means that you believe that corporations specializing in data-collection won’t be all that generous in compensating their subjects for data that they’ve possibly given up without even realizing that they’ve done so. After all, lucid arguments have been made about how Google in particular could be subtly abetting authoritarianism via its data collection, which if true would constitute a reality directly opposed to the fairer, more democratic world proposed by advocates of data-related payments. To the latter point, Jaron Lanier has argued for “micropayments” to preserve both middle-class society and democracy in the West.
The article examines mostly nascent data-collection and technology companies and ideas whose success or failure is so far hard to quantify and whose prospects remain unclear. Accordingly, the author must use filler about the weak possibility of becoming rich, the cliché of a “long road ahead,” and the admission that many consumer habits are a black box and that maybe not all consumers are the same. Even the broad “consumers” term is flimsy, to say nothing of the nebulous term – “big data” – that the article must presuppose as well-defined (I have argued that it is not so well-defined) to even have a workable article premise.
For additional seasoning, the article resorts to the outmoded term “First World” (a leftover from the Cold War) and the ill-defined “Internet economy.” I think I know what he means by the latter: the targeted-ad model of Google, Amazon, and Facebook. But the vacuity of the term “internet” leaves the door open: would Apple’s sale of devices that require the internet for most functions count as part of the “internet economy,” too, despite having a different structure in which users pay with money rather than data?
Like many Betteridge-compliant headlines, the accompanying article isn’t a contribution to any sophisticated discussion of the issues that it pretends to care about. Hence the teaselike question-headline; Betteridge’s Law cases pretend that they’re engaging in high discourse, perhaps in the same way that the valley girl accent – riddled with unusual intonations and cadences that throw off the rhythm of its speaker’s sentences and draw attention away from content – pretends it is partaking in real conversation. Perhaps we really should bring back the punctus percontativus so we can see these rhetorical questions for what they really are.
Microsoft has updated Bing so that it now pushes Klout results to the top of many of its results pages. Ostensibly, this is a move to provide better content and to keep pace with Google’s own efforts at integrating Google+ results into Google Search. It also squares with Microsoft’s generally aggressive commitment to social search, which can be glimpsed in its relationship with Facebook and Facebook’s Graph Search functionality in particular.
“Microsoft believes that content is so powerful that it almost doesn’t matter whether Klout’s ‘experts’ actually have any real expertise. If enough Klout users vote up an answer, it will still likely be a worthwhile addition to Bing results, Ripsher said.”
If one had any doubts about the internet’s objectivity or its “openness” (to use another overused adjective), then this peculiar development should allay them.
“The internet” is often characterized as an almost untouchable, coherent, self-contained system that can provide definitive knowledge and answers. The rise of and insane hype around services like Quora and Klout are the current symptoms of this characterization, although it actually began long ago with Google and Wikipedia becoming (for relatively well-off internet users, at least: a small portion of humanity) the go-to resources for queries, and with social networks then becoming echo chambers and, in effect, new realities for their respective users. As I have mentioned before, onlookers who regard these services in these ways seem to overlook the fact that the internet is actually a manmade thing and not a law of physics or deity.
On the contrary, the sheer volume of information available through all of these channels has in turn led to the internet becoming, for many commentators, akin to the burning bush on Mt. Sinai, able to dictate authoritative wisdom at will, although it arguably one-ups even God’s favorite flaming plant, since much of that wisdom is “crowdsourced,” too. Now, the so-called crowdsourced structure of many online services – Google’s collection and subsequent application of user data, Wikipedia’s group editing, Reddit’s upvote/downvote system – is a hopeful development not because of the veracity of its content but because it, at the very least, shows that there are human agents who drive the internet, rather than some unstoppable, robotic force of nature that we often vaguely call “the internet.”
So how is it that crowdsourcing intersects so snugly with the prevalent narrative of a self-driven internet? How is it that search engines (the clearest, most obvious metaphors for a wisdom-producing computer from, say, Star Trek, yet another debt that tech owes to imagination and the liberal arts) are now, in many cases, conduits for social networks and other crowdsourced news? I don’t think it’s odd at all, actually, since it confirms that the internet, as a source of knowledge or truth, is just as subjective and contingent on human inputs as anything else. I mean, let’s look at some of the major drivers of internet content:
- Google: uses proprietary algorithms and integration with proprietary social networks (most notably G+). Its results system can be gamed or “bombed” to promote certain results. All of this despite its promotion of “openness.”
- Twitter: a proprietary social network that suggests certain celebrities or popular users to follow, primarily because said persons are the best evangelists for Twitter itself (as a tool/service).
- Klout: dependent on mostly amateur “expertise” and opinion, as noted above by The Verge.
So Microsoft is hardly putting anyone or anything newly “under the influence” of amateurs. The entire internet is built around these types of subjectivity, which inevitably result from human input and tinkering.
-The ScreenGrab Team