I’m not sure if I’m a “journalist.” I interview people, do research, and supply regularly updated sites with news stories. But I do it all from an apartment, with only the occasional trip to a minimalist office. Do these details make the “journalist” label unfit?
“Journalism” is a word often found in the company of “disruption,” one of the most overused and annoying terms to enter the vernacular. We “journalists” are framed as under siege from the Internet, unable to adjust as free Web content eats into newspaper revenue.
We’re in such bad shape, apparently, that we pretend disruption doesn’t exist, according to Ben Thompson, in his misinterpretation of Jill Lepore’s virtuosic New Yorker essay. Lepore was not discounting the idea of change, but providing a history of how mankind has explained it, finding issues with Clayton Christensen’s methodology as well as the historically limited range and shelf life of “disruption,” a concept that could only have arisen from the era of 9/11 and cheap Asian manufacturing.
“When all you have is a hammer, everything etc.” – although since Silicon Valley is too digital for something as analog and working-class as a hammer, let’s say that when all you have is no humanistic background and a strong affinity for the violent terminology of “disruption,” everything looks like it is in danger. If I am a “journalist,” am I in trouble?
Again, I don’t know, especially since the label may not even be apt. Maybe I am a “journalist” who has simply evolved (before “disruption,” evolution was nearly as ubiquitous a term for explaining everything, as Lepore pointed out) to use new tools. Why not take this optimistic, even progressive (another preeminent etiology of yesteryear) view, rather than the insecure, cynical stance of “disruption”?
Journalists don’t necessarily serve the interests of the VCs and technical folk who have made “disruption” de rigueur (well, unless they work for TechCrunch). It shouldn’t be surprising that these writers, especially ones like Lepore who don’t toe the line, are construed as not getting “disruption” or, worse, as being disrupted. Any piece of confirmation bias – declining ad revenue, the agony of paywalls – then suffices as proof of “disruption.”
At the same time, how often do you hear of these professions being disrupted?
- VC – investing is often guesswork, so why not automate it and hook it into something like IBM Watson?
- Programmer – a software engineer is more like a car mechanic than a doctor. Why not move toward less tinkering and customization (as has happened with cars like the Tesla Model S) and make such human involvement obsolete?
- CEO – how many CEOs have been outsourced to China or automated because of “disruptive” forces? I’m not talking about firing one person to hire another, but about eliminating an incredibly wasteful position that is compensated at a crazy ratio to the rest of the organization.
“Disruption,” it seems, is weirdly selective, with class-related and political biases (surprise). As a “journalist” or something close to it, rather than a VP of engineering, I’m not surprised that I’m an actor in other people’s dramas about “disruption” and its impact on everything.
Writers should worry less about nonwriters’ opinions and naysaying – “evolve” and “progress,” don’t “disrupt” or “be disrupted” (whatever that means). I wrote this whole entry on the WordPress app for Android, but I see it as just another tool rather than the ancestor of a robot waiting to take my job.
There’s been a recent surge in attention given to a relatively obscure British journalist’s thoughts on headline writing. “Betteridge’s Law” is the informal term for Ian Betteridge’s argument that any (usually technology-related) headline that ends in a question mark can be answered “no.” Betteridge made his original argument in response to a TechCrunch article entitled “Did Last.fm Just Hand Over User Listening Data to the RIAA?”
The reason that so many of these rhetorical questions can be answered “no” comes from their shared reliance on flimsy evidence and/or rumor. The TechCrunch piece in question ignited controversy and resulted in a slew of vehement denials from Last.fm, none of which TechCrunch was able to rebut with actual evidence. John Gruber also recently snagged a prime example in The Verge’s review of Fanhattan’s new set-top TV box, entitled “Fan TV revealed: is this the set-top box we’ve been waiting for?”
So we know what Betteridge’s Law cases look like in terms of their headlines, which feature overzealous rhetorical questions. But what sorts of stylistic traits unite the bodies of these articles? Moreover, why do journalists use this cheap trick (other than to garner pageviews and lengthen their comments sections), and what types of arguments and rhetoric do they employ in following up their question? I am guilty of writing a Betteridge headline in my own “Mailbox for Android: Will Anyone Care?,” which isn’t my strongest piece, so I’ll try to synthesize my own motivations in writing that article with trends I’ve noticed in another recent article that used a Betteridge headline, entitled “With Big Bucks Chasing Big Data, Will Consumers Get a Cut?”
Most visibly, Betteridge’s Law cases employ numerous hedges, qualifiers, and ill-defined terms, some of which are set off by italics or scare quotes. By their nature, these articles are almost invariably concerned with the future, which explains the feigned confusion inherent in the question they pose. That is, they act unsure, but they have an argument to make (and maybe even a prediction). Nevertheless, they have to hedge on account of the future not having happened yet (the “predictions are hard, especially about the future” syndrome), or, similarly, use conditional statements.
I did this near the end of my Mailbox article, saying “This isn’t a critical problem yet, or at least for as long as Google makes quality apps and services that it doesn’t kill-off abruptly, but it will make life hard for the likes of Mailbox and Dropbox.” My “yet” is a hedge, and my “it will” is the prediction I’m trying to use to establish more credibility. In The Verge article linked to by Gruber, the authors say “IPTV — live television delivered over the internet — is in its infancy,” strengthen that with “Meanwhile, competition for the living room is as fierce as it has ever been,” and then feebly try to make sense of it all by saying “At the same time, if it matches the experience shown in today’s demos, Fan TV could win plenty of converts.”
Delving into the aforementioned article about “big data,” we find similarly representative text:
- “You probably won’t get rich, but it’s possible”
- “But there’s a long road ahead before that’s settled”
- “Others aren’t so sure a new market for personal data will catch on everywhere”
- “not as much is known about these consumers”
- “That’s a big change from the way things have worked so far in the Internet economy, particularly in the First World.”
- “big data”
This headline is really a grand slam for Betteridge’s Law. Simply answering “no” means that you believe corporations specializing in data collection won’t be all that generous in compensating their subjects for data those subjects may have given up without even realizing it. After all, lucid arguments have been made about how Google in particular could be subtly abetting authoritarianism via its data collection, which, if true, would constitute a reality directly opposed to the fairer, more democratic world proposed by advocates of data-related payments. To the latter point, Jaron Lanier has argued for “micropayments” to preserve both middle-class society and democracy in the West.
The article examines mostly nascent data-collection and technology companies and ideas whose success or failure is so far hard to quantify and whose prospects remain unclear. Accordingly, the author must use filler about the weak possibility of becoming rich, the cliché of a “long road ahead,” and the admission that many consumer habits are a black box and that maybe not all consumers are the same. Even the broad “consumers” term is flimsy, to say nothing of the nebulous term – “big data” – that the article must presuppose as well-defined (I have argued that it is not so well-defined) to even have a workable article premise.
For additional seasoning, the article resorts to the outmoded term “First World” (a leftover from the Cold War) and the ill-defined “Internet economy.” I think I know what he means by the latter: the targeted-ad model of Google, Amazon, and Facebook. But the vacuity of the term “internet” leaves the door open: would Apple’s sale of devices that require the internet for most functions count as part of the “internet economy,” too, despite having a different structure in which users pay with money rather than data?
Like many Betteridge-compliant headlines, the accompanying article isn’t a contribution to any sophisticated discussion of the issues that it pretends to care about. Hence the teaselike question-headline; Betteridge’s Law cases pretend that they’re engaging in high discourse, perhaps in the same way that the valley girl accent – riddled with unusual intonations and cadences that throw off the rhythm of the speaker’s sentences and draw attention away from content – pretends it is partaking in real conversation. Perhaps we really should bring back the punctus percontativus so we can see these rhetorical questions for what they really are.