“Information” is a pretentious word. So are its kin, “data” and “data points.” If bad writing is about things that are not concrete, then info-data is its muse.
It’s a fancy word for “stuff,” in the end. Imagine the following slogans recast to show how trite info-data is:
- “the stuff age”
Some uses – “mobile data” – are more concrete. But look: it’s telling that a basic synonym probes the shallowness of info-data. It’s about air, about ideas that are festooned with flowery words like “solutions” and “digital” that are themselves blank, yet somehow add more character (“solutions” is at least evaluative; info-data is nothingness, less material even than “stuff”).
Why resort to info-data? Because computers and the industries around them lack a clear reason for existing.
The Internet is an outgrowth of the telegraph that has done as much bad (spying, fake social media personae, argument for no reason, stress over minor things like email) as good (new tools for writing, reading, and chatting).
Computers themselves are often justified as “productivity” tools, but “productivity” is a ritual, not a result. New jobs and issues are created to feed the hunger for “productivity,” but it can’t be sated.
Like the Internet, financial services, and 100-hour workweeks, computers keep recreating the need for productivity, rather than satisfying its requirements. We’re solving a problem that isn’t there – maybe that’s why “solutions” is meaningless and a crutch.
Info-data is even more generic and, well, insincere. Something like info-data has always existed for humans, but it has enjoyed a moment now that it is associated with smartphones and PCs. Are “analog” media like books repositories of info-data? Why didn’t the invention of the codex form kick off The Information Age?
Whereas books have clear boundaries and purposes – a novel for leisure reading; a textbook for education – info-data media do not. The Web has no purpose, and computers, while now generating info-data, are little more than extensions of analog tools for gaming and writing.
The info-data lingo makes computers and the Internet seem profound, like clear breaks with what came before. But this language is vague, and it reveals something so ordinary that terms for the most ancient, mundane things – information, data – have to be put into service because there’s nothing else there.
There’s been a recent surge in attention given to a relatively obscure British journalist’s thoughts on headline writing. “Betteridge’s Law” is the informal term for the argument that any (usually technology-related) headline that ends in a question mark can be answered “no.” Betteridge made his original argument in response to a TechCrunch article entitled “Did Last.fm Just Hand Over User Listening Data to the RIAA?”
The reason that so many of these rhetorical questions can be answered “no” comes from their shared reliance on flimsy evidence and/or rumor. The TechCrunch piece in question ignited controversy and resulted in a slew of vehement denials from Last.fm, none of which TechCrunch was able to rebut with actual evidence. John Gruber also recently snagged a prime example in The Verge’s review of Fanhattan’s new set-top TV box, entitled “Fan TV revealed: is this the set-top box we’ve been waiting for?”
So we know what Betteridge’s Law cases look like in terms of their headlines, which feature overzealous rhetorical questions. But what sorts of stylistic traits unite the bodies of these articles? Moreover, why do journalists use this cheap trick (other than to garner page-views and lengthen their comments sections), and what types of arguments and rhetoric do they employ in following up their question? I am guilty of writing a Betteridge headline in my own “Mailbox for Android: Will Anyone Care?,” which isn’t my strongest piece, so I’ll try to synthesize my own motivations in writing that article with trends I’ve noticed in another recent article that used a Betteridge headline, entitled “With Big Bucks Chasing Big Data, Will Consumers Get a Cut?”
Most visibly, Betteridge’s Law cases employ numerous hedges, qualifiers, and ill-defined terms, some of which are often denoted by italics or scare quotes. By their nature, they’re almost invariably concerned with the future, which explains the feigned confusion inherent in the question they pose. That is, they act unsure, but they have an argument (and maybe even a prediction to make). Nevertheless, they have to hedge on account of the future not having happened yet (the “predictions are hard, especially about the future” syndrome), or, similarly, use conditional statements.
I did this near the end of my Mailbox article, saying “This isn’t a critical problem yet, or at least for as long as Google makes quality apps and services that it doesn’t kill-off abruptly, but it will make life hard for the likes of Mailbox and Dropbox.” My “yet” is a hedge, and my “it will” is the prediction I’m trying to use to establish more credibility. In The Verge article linked to by Gruber, the authors say “IPTV — live television delivered over the internet — is in its infancy,” strengthen that with “Meanwhile, competition for the living room is as fierce as it has ever been,” and then feebly try to make sense of it all by saying “At the same time, if it matches the experience shown in today’s demos, Fan TV could win plenty of converts.”
Delving into the aforementioned article about “big data,” we find similarly representative text:
- “You probably won’t get rich, but it’s possible”
- “But there’s a long road ahead before that’s settled”
- “Others aren’t so sure a new market for personal data will catch on everywhere”
- “not as much is known about these consumers”
- “That’s a big change from the way things have worked so far in the Internet economy, particularly in the First World.”
- “big data”
This headline is really a grand slam for Betteridge’s Law. Simply answering “no” means that you believe that corporations specializing in data-collection won’t be all that generous in compensating their subjects for data that they’ve possibly given up without even realizing that they’ve done so. After all, lucid arguments have been made about how Google in particular could be subtly abetting authoritarianism via its data collection, which if true would constitute a reality directly opposed to the fairer, more democratic world proposed by advocates of data-related payments. To the latter point, Jaron Lanier has argued for “micropayments” to preserve both middle-class society and democracy in the West.
The article examines mostly nascent data-collection and technology companies and ideas whose success or failure is so far hard to quantify and whose prospects remain unclear. Accordingly, the author must use filler about the weak possibility of becoming rich, the cliché of a “long road ahead,” and the admission that many consumer habits are a black box and that maybe not all consumers are the same. Even the broad “consumers” term is flimsy, to say nothing of the nebulous term – “big data” – that the article must presuppose as well-defined (I have argued that it is not so well-defined) to even have a workable article premise.
For additional seasoning, the article resorts to the outmoded term “First World” (a leftover from the Cold War) and the ill-defined “Internet economy.” I think I know what he means by the latter: the targeted-ad model of Google, Amazon, and Facebook. But the vacuity of the term “internet” leaves the door open: would Apple’s sale of devices that require the internet for most functions count as part of the “internet economy,” too, despite having a different structure in which users pay with money rather than data?
Like many Betteridge-compliant headlines, the accompanying article isn’t a contribution to any sophisticated discussion of the issues that it pretends to care about. Hence the tease-like question-headline; Betteridge’s Law cases pretend that they’re engaging in high discourse, perhaps in the same way that the valley girl accent – riddled with unusual intonations and cadences that throw off the rhythm of its speakers’ sentences and draw attention away from content – pretends it is partaking in real conversation. Perhaps we really should bring back the punctus percontativus so we can see these rhetorical questions for what they really are.