I don’t use Google Now anymore. It occasionally chirps up in my notification tray with a depressing White Sox score, but I barely use the swipe-up gesture to access its cards. The last time I did, it didn’t even give me transit info for the closest bus stop and still showed some old Blackhawks playoff scores that I hadn’t manually swiped away (1. The Blackhawks won the Stanley Cup over a week ago, and here’s a video from the parade; 2. That clear-out gesture is surprisingly hard to make). I initially loved the idea of a comprehensive think-ahead assistant that could pool together transit schedules, sports scores, and Gmail notices into one interface. It has seemingly improved since last year, now that it can show predictive news or music suggestions. But the price is that one has to go on using Google for everything – Google Search to scour the Web, Google Play Music to play both your own collection and stream other content, Gmail to handle all email. And it’s becoming an increasingly unbearable price.
Apple blogger Marco Arment, with whom I don’t always agree (he’s dismissive of Android), had a great post up about how Google, along with peers Facebook and Twitter, is essentially killing the standards-based Web that had given life to them in the first place. Twitter doesn’t play nice with third-party devs. Facebook has always been a walled garden. And Google, once a leader in standards compliance, now wants everything behind the G+ wall: chat clients, video calling, photo backup, etc. I agree with Arment that Twitter in particular may have the theoretical high ground, since Twitter developers aren’t entitled to unfettered access to others’ proprietary services. But it, like Facebook and especially like Google, ultimately wants to control what you see, i.e., ads and promotions.
Losing the standards-based Web would be tragic, but maybe not for the reasons that some cite. It would be painful to go on losing services like Google Reader or Falcon Pro (whose demise I recently chronicled), sure. Yet the real pain will come from large swathes of the Web being the exclusive provinces of certain corporations who, for reasons either furtive or coercive, decide to give info to the American NSA. Your social walled garden is also conveniently a surveillance state – it has natural tracking mechanisms and clear owners (by contrast, no one “owns” RSS or email) who can be talked into compliance. And of course, the rhetoric from both the array of walled gardens and from the NSA itself is all about making you worry less. Using Google Play Music apparently makes streaming music simpler (I never had a problem with Spotify, though), while the NSA’s collection of email is for the (truly outlandish) purpose of making you worry less about terrorism, something that kills fewer persons per year than bathtub falls do.
Google Now is really a microcosm of this era of cordoned-off surveillance, made possible by the perfect convergence of the Web giants’ collective renewed focus on proprietary services and America’s obsession with surveilling (and being surveilled! Many people, of course, have no issue with exposing all their info – they’ll even volunteer it – and because of them there’s a whole cottage industry of bullshit out there along the lines of “no one cares about/should care about privacy, derp”). Are these suggested “research more” topics really going to enlighten me, or are they just going to take me to some SEO pile? Well, I don’t have to worry about that question anymore, at least practically (I’ll go on pondering it as a philosophical issue), since I just use DuckDuckGo.
DuckDuckGo is a search engine and news service that has become an unlikely hero in the recent NSA revelations. It doesn’t track users and provides results that, at least in my heavy daily usage, seem to be as good as Google’s, if not better since fewer persons are out there trying to game them. It reminds me of using Firefox for the first time back in the dark days of WinXP/IE: a startling relief, a glass of ice water in hell. When you download the Android app, there’s no sign-in, no “we just need your email, pretty plz,” no “connect with Facebook/G+,” no “add all your friends and family as ___”. It just goes directly into a news feed with a search bar at the top. In one fell swoop, both Google Search and Google Now are strangely unessential on my Google-designed phone.
Of the three Web titans Arment mentions, Google by far has the most to lose in the potential anti-NSA/anti-tracking world that DuckDuckGo represents. No tracking and fewer ad impressions mean that Google’s business model – which most people don’t understand – just doesn’t work. And unlike Facebook or Twitter, Google has no unique service, with the possible exception of its sophisticated Maps: most of its services are fast-follow efforts or copies, with Google Drive (which combines MS Office with Dropbox) being the best example. You can take your email, your search queries, or your files and notes elsewhere; but you can’t necessarily take your Twitter followers or Facebook friends. Their walled gardens are simply better than G+. This is why Google needs to create Arment’s described “lockdown” effect via G+ in order to compete with Twitter et al., and it has to do this in spite of Apple’s efforts to clear Google off the iPhone (how long ’til we see Bing as the default search engine on the iPhone?). Good luck.
I agree with Arment’s conclusion, expressed as a retort to the proprietary lockdown efforts from leading Web companies: “[F]uck them, and fuck that.” It’ll take huge steps to stem the tide of them and of the surveillance (both by them and by government) that they enable, however. The recent Google reversal on retiring CalDAV in favor of the Google Calendar API represents one such small victory, and I hope that there are more. And switching to DuckDuckGo is one good, painless way to get back on the path to a saner, more private existence.
There’s been a recent surge in attention given to a relatively obscure British journalist’s thoughts on headline writing. “Betteridge’s Law” is the informal term for the argument that any (usually technology-related) headline that ends in a question mark can be answered “no.” Betteridge made his original argument in response to a TechCrunch article entitled “Did Last.fm Just Hand Over User Listening Data to the RIAA?”
The reason that so many of these rhetorical questions can be answered “no” comes from their shared reliance on flimsy evidence and/or rumor. The TechCrunch piece in question ignited controversy and resulted in a slew of vehement denials from Last.fm, none of which TechCrunch was able to rebut with actual evidence. John Gruber also recently snagged a prime example in The Verge’s review of Fanhattan’s new set-top TV box, entitled “Fan TV revealed: is this the set-top box we’ve been waiting for?”
So we know what Betteridge’s Law cases look like in terms of their headlines, which feature overzealous rhetorical questions. But what sorts of stylistic traits unite the body of these articles? Moreover, why do journalists use this cheap trick (other than to garner page-views and lengthen their comments sections), and what types of arguments and rhetoric do they employ in following up their question? I am guilty of writing a Betteridge headline in my own “Mailbox for Android: Will Anyone Care?,” which isn’t my strongest piece, so I’ll try to synthesize my own motivations in writing that article with trends I’ve noticed in another recent article that used a Betteridge headline, entitled “With Big Bucks Chasing Big Data, Will Consumers Get a Cut?”
Most visibly, Betteridge’s Law cases employ numerous hedges, qualifiers, and ill-defined terms, some of which are often denoted by italics or scare quotes. By their nature, they’re almost invariably concerned with the future, which explains the feigned confusion inherent in the question they pose. That is, they act unsure, but they have an argument (and maybe even a prediction) to make. Nevertheless, they have to hedge on account of the future not having happened yet (the “predictions are hard, especially about the future” syndrome), or, similarly, use conditional statements.
I did this near the end of my Mailbox article, saying “This isn’t a critical problem yet, or at least for as long as Google makes quality apps and services that it doesn’t kill-off abruptly, but it will make life hard for the likes of Mailbox and Dropbox.” My “yet” is a hedge, and my “it will” is the prediction I’m trying to use to establish more credibility. In The Verge article linked to by Gruber, the authors say “IPTV — live television delivered over the internet — is in its infancy,” strengthen that with “Meanwhile, competition for the living room is as fierce as it has ever been,” and then feebly try to make sense of it all by saying “At the same time, if it matches the experience shown in today’s demos, Fan TV could win plenty of converts.”
Delving into the aforementioned article about “big data,” we find similarly representative text:
- “You probably won’t get rich, but it’s possible”
- “But there’s a long road ahead before that’s settled”
- “Others aren’t so sure a new market for personal data will catch on everywhere”
- “not as much is known about these consumers”
- “That’s a big change from the way things have worked so far in the Internet economy, particularly in the First World.”
- “big data”
This headline is really a grand slam for Betteridge’s Law. Simply answering “no” means that you believe that corporations specializing in data-collection won’t be all that generous in compensating their subjects for data that they’ve possibly given up without even realizing that they’ve done so. After all, lucid arguments have been made about how Google in particular could be subtly abetting authoritarianism via its data collection, which if true would constitute a reality directly opposed to the fairer, more democratic world proposed by advocates of data-related payments. To the latter point, Jaron Lanier has argued for “micropayments” to preserve both middle-class society and democracy in the West.
The article examines mostly nascent data-collection and technology companies and ideas whose success or failure is so far hard to quantify and whose prospects remain unclear. Accordingly, the author must use filler about the weak possibility of becoming rich, the cliché of a “long road ahead,” and the admission that many consumer habits are a black box and that maybe not all consumers are the same. Even the broad “consumers” term is flimsy, to say nothing of the nebulous term – “big data” – that the article must presuppose as well-defined (I have argued that it is not so well-defined) to even have a workable article premise.
For additional seasoning, the article resorts to the outmoded term “First World” (a leftover from the Cold War) and the ill-defined “Internet economy.” I think I know what he means with the latter: the targeted-ad model of Google, Amazon, and Facebook. But the vacuity of the term “internet” leaves the door open: would Apple’s sale of devices that require the internet for most functions count as part of the “internet economy,” too, despite having a different structure in which users pay with money rather than data?
Like many Betteridge-compliant headlines, the accompanying article isn’t a contribution to any sophisticated discussion of the issues that it pretends to care about. Hence the tease-like question-headline; Betteridge’s Law cases pretend that they’re engaging in high discourse, perhaps in the same way that the valley girl accent – riddled with unusual intonations and cadences that throw off the rhythm of the speaker’s sentences and draw attention away from content – pretends it is partaking in real conversation. Perhaps we really should bring back the punctus percontativus so we can see these rhetorical questions for what they really are.