Apps and social media fatigue

If I were to graph the number of apps installed on any device I own since I got my first Android phone in the summer of 2011 (an HTC Inspire), it would peak early and decline steadily from there. A combination of concerns about battery life and storage space, the realization that some websites offer better experiences than their respective apps (especially Facebook), and an overall desire to just have fewer sources of information has led me to delete nearly everything but the preloaded apps.

What’s left? Not much.


What’s more, I haven’t actively searched for a new app in a while. I’m not sure if this says more about me being burned out on data and notifications (they feel so distracting, and I know I have written/read less because of them) or about the maturity of the app market.

I remember when “apps” became part of the lexicon sometime in the summer of 2008. I had just moved to Chicago and I still had a Motorola RAZR that might have been cutting-edge in 2005, during my first year of college. When I got online for the first time in my first Chicago apartment – via a Dell desktop PC – the App Store was only 2 months old and Google Chrome was less than a week old. On my PC, I didn’t really think of “apps” except for Web browsers and games, and even then I thought of them as “programs.”

In 2009, I had my first brushes with apps like Shazam and Grindr, which offered something very different from what had been available on a PC or Mac. In 2010, I learned about Instagram and was for the first time jealous of people who had iPhones (I still had a dumb phone of some sort at that time). In 2012, I found out about Uber and was briefly enamored with it before it revealed itself as an ethically-challenged organization.

But since then, there haven’t been many “a-ha” moments for me in using mobile apps. The ones I use every day are based on age-old phone conventions like being able to send text messages (starting with SMS and now evolving into iMessage, LINE, etc.) and photos.

There’s also DuckDuckGo (a search engine, one of the oldest forms of exploring the Web), Lyft (since I can’t stand Uber), Flickr (for photo backup) and Tumblr (where I do some of my creative writing). There are ways to pay for my coffee (Dunkin’ Donuts and Starbucks) and then there’s Yo, which is a novel way to get updates on RSS feeds, Twitter accounts, etc. Although it started as a gimmick, I think Yo has a lot of potential. There’s Pocket, my favorite. And 1Password, which spares me so many headaches.

Part of the reason for the paucity of apps on my phone is that I have never been in love with social networking. With Tumblr, I can just publish from time to time and not worry about my real identity. But I steer clear of Facebook and Twitter on mobile since they just demand too much attention for too little return. I use Snapchat but have never used Secret (I don’t get it) or any dating app like Tinder (I’m married).

What is the future of social networking? Bleak, I hope, since it seems to make so many people anxious or unhappy, worrying about what others are doing and keeping track of when certain people are awake or active. I liked this passage from Tyler Brule:

“I have a theory about social media: that it exists not because people are dying to share everything but because of poor urban planning. The reason these channels have developed on the U.S. west coast stems from millions of people being lonely and trapped in sprawling suburbs. Apparently, the Swiss are among the lowest users of social media in Europe. I’d venture that this is due to village life, good public transport and a sense of community.”

In America, for someone born after 1980, there are so many barriers to meeting up with others unless 1) you have a car or 2) you have access to good public transportation. #1 is an issue for the cash-challenged Millennial generation, yet so much of American infrastructure – from sprawling parking complexes to office parks located in the middle of nowhere – assumes the ownership of one. #2 is surprisingly rare – I would venture that one can comfortably get around without a car as backup in exactly two American cities: New York and Chicago.

What fills the void? Social media and messaging apps. Maybe part of my own gravitation away from social media has been the fact that I have lived in one of these two cities for the past 7 years. Plus, no longer being single has also eroded a lot of the youthful fascination that once made, say, Facebook so exciting to use. It’s hard for anyone who joined Facebook after roughly 2006 or 2007 to know what it was like in the early years, when it was all single college students who sent each other Pokes and edited each other’s Walls at will.

Less social media (and storage space – I settled for a 16GB iPhone 6 Plus) has led to a pretty spartan, utilitarian home screen. But it’s also, I suspect, left me happier since I don’t have to keep tabs on others as part of a lonely suburban existence.

Finding a Happy Balance Between Mobile and Desktop

“Mobile-first” and “mobile-only” are almost clichés in terms of current app design. But overuse of “mobile” language aside, iOS and Android users have definitely benefitted from this new focus from developers on producing software that exploits and respects the unique capabilities of smaller devices. Maybe even too much so: I recently combed thru my app drawer and felt overwhelmed by the nearly 100 apps – most of them both beautifully designed and easy to use – in it. My first instinct was to simply cut myself off from many of the services provided by these apps, so as to simplify my experience and reduce app count. I initially thought about completely ditching RSS reading and some social networking, for example.

Ultimately, I opted to do something different and instead redistributed my workflow across my mobile device (a Nexus 4), my Mac, and my wifi-enabled Wii U. Although many of the apps and services I was using had versions available for both Android and OS X (and Web), I decided to restrict many of them to a single device and ignore the other versions. So, for example, I kept the mobile Google+ Hangouts app but eschewed its desktop Web/Gmail version, and likewise kept my desktop RSS reader (Reeder) while ditching my previous mobile RSS clients.

The most difficult, yet most rewarding, part of this process was determining which apps and services I could remove from my phone and use only on my Mac or Wii U. Amid the swirl of “mobile-only”/“mobile-first” lingo, I reflexively felt that I was selling myself short by offloading many of the excellent apps and services I used onto my relatively old-fashioned Mac and my dainty Wii U, but the experience has been liberating. I have improved my phone’s battery life, reduced clutter in its launcher, and restored some peace of mind: there are fewer things to blankly stare at and anxiously check on my phone while on the train, at the very least.

More importantly, I now have a firmer sense of what I want each device, with its accompanying apps and services, to do. The productivity bump and happiness that I have experienced have also made me realize, finally, why Windows 8 has flopped. Trying to treat all devices the same and have them run the same apps is a recipe for poor user experience and too many duplicate services. It becomes more difficult to know what any given device excels at, or what a user should focus on when using it. If focus is truly saying no to a thousand things, then it’s important to say “no” to certain apps or services on certain platforms. Steve Jobs famously said “no” to Flash on iOS, but one doesn’t even have to be that wonky or technical when creating workflow boundaries and segmentation in his/her own life: I’ve said “no” to Web browsing on my Wii U and to Netflix on my phone, for example.

With this move toward device segmentation and focus in mind, I’ll finally delve into the tasks that I now do only on desktop or in the living room, so as to relieve some of the strain and overload from my mobile device. I perform these tasks using only my Mac or my Wii U, and I do not use their corresponding apps or services on my Nexus 4.

RSS Reading

RSS can be tricky: you probably shouldn’t subscribe to any frequently updated sites, since they will overwhelm your feed and leave you with a “1000+ unread” notification that makes combing thru the list a chore. Rather, sites that push out an update once or twice per day (or every other day) are ideal material for RSS. Rewarding RSS reading requires you to have specialized taste borne out of general desktop Web browsing (see below), as well as a tinkerer’s mindset for adding and subtracting feeds. It’s a workflow meant for a desktop.

Granted, there are some good RSS clients for Android: Press and Minimal Reader Pro spring to mind. However, neither is great at managing feeds due to their minimalism and current reliance on the soon-to-be-extinct Google Reader. Plus, I’ve yet to find an Android rival for Reeder, which I use on my Mac and which is also available for iOS. The time-shifted nature of RSS also makes it something that I often only get around to once I’m back home, not working, and sitting down, with Reeder in front of me, and so I forego using a mobile client most of the time. This may change if and when RSS undergoes its needed post-Google Reader facelift.
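
Since I just said that frequently updated sites make for bad RSS material, here’s how I might vet a feed before subscribing – a minimal Python sketch using the feedparser library (the feed URL is just a placeholder), which estimates posting frequency from whatever entries the feed currently exposes:

```python
# Estimate a feed's posting frequency before subscribing to it.
# Requires: pip install feedparser
import time
import feedparser

def posts_per_day(feed_url):
    feed = feedparser.parse(feed_url)
    # Keep only entries that carry a publication timestamp.
    stamps = [time.mktime(e.published_parsed)
              for e in feed.entries if getattr(e, "published_parsed", None)]
    if len(stamps) < 2:
        return None  # Too few entries to estimate a rate.
    span_days = (max(stamps) - min(stamps)) / 86400.0
    return len(stamps) / max(span_days, 1.0)

# Placeholder URL, purely for illustration.
rate = posts_per_day("https://example.com/feed.xml")
if rate and rate > 2:
    print("Skip it: ~%.1f posts/day will bury you in unread counts." % rate)
```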

Web browsing

The thrill of wide-open desktop browsing doesn’t exist on mobile. Maybe it’s because most mobile sites are bastardizations of their desktop forebears, or because screen size is a limiting factor. Moreover, most mobile apps are still much better and much faster than their equivalent websites. I haven’t disabled Chrome on my phone, but I seldom use it unless another app directs me there. Instead, I prefer news aggregators like Flipboard and Google Currents, or strong native apps like The Verge, Mokriya Craigslist, and Reddit is Fun Golden Platinum.

Facebook/LinkedIn/Tumblr

Of this trio, only Tumblr has a first-rate Android app in terms of aesthetics and friendliness to battery life. It’s easy for me to see why I don’t like using any of them on mobile: they all began as desktop websites, and then had to be downsized into standalone apps. Alongside these aesthetic and functional quibbles with website-to-app transitions, I also consciously limit my Facebook and LinkedIn intake by only checking them on desktop. As for Tumblr, I may create content for it on my phone, but I usually save it to Google Drive (if only to back it up, which I’ll always end up doing one way or another) and then finish formatting and editing it on my desktop before posting.

Twitter is a different story, due to its hyper-concise format. I’ll talk about it in the next entry. Google+ – which is almost completely ignorable as a standalone site on desktop – is also much better on mobile, where it performs useful background functions like photo backup. Mobile-first networks like Instagram and Vine are obvious exclusions as well.

Spotify

Spotify is a unique case. Its Android app is certainly functional, but unstable and not so good with search. It is difficult to get a fully populated list of returned search results, and in many cases you must hit the back button and re-key the search. Its Mac app is much better – the gobs of menus and lengthy lists are right at home on the desktop. For listening to music on my Nexus 4, I use Google Play Music, where I have a large, precisely categorized personal collection accessible via a clean UI, and the terrific holo-styled Pocket Casts, which I use to play weekly trance podcasts from Above & Beyond and Armin Van Buuren, among others.

Netflix

I’m not a huge fan of Netflix on tablets or large-screen phones. I do probably 99% of my Netflix viewing on an HDTV connected to my Wii U, with the remainder done on my Mac. I can see the appeal of viewing Netflix while lying in bed, so I don’t rule out its mobile possibilities altogether. However, most of the video watching I do on mobile is via YouTube, Google Play Movies, or my own movie collection as played by MX Player Pro.

Skype

Skype is a good desktop messenger and video calling service, but it may as well be DOA on Android, especially stock Android. Google+ Hangouts (the successor to Google Talk) is much simpler, since it requires only a Google+ account and has a dead-simple video calling/messaging interface. Plus, Skype for Android is unfortunately a battery-drainer, in my experience. That said, Skype’s shortcomings on mobile are balanced by its strengths on desktop: its native Mac app is still an appealing alternative to having to open up Google+ in Safari/Chrome.

In part two, I shall look at the apps and services that I now use exclusively on mobile, as well as the select group of apps and services that I use on both desktop and mobile.

Should Google Make its Own Hardware?

Android and Me has a post up about the need for Google to build its own Nexus hardware. The argument goes: since the company’s complete control over the Chromebook Pixel, Nexus Q, and Google Glass resulted in outstanding products, the company should just go all-in on hardware.

I don’t think I agree. Of the three products cited, I would only really be proud of the Pixel, which, while expensive, has top-class features and could spearhead more disruption for the Windows PC market in particular. But body-wise, it’s still something that couldn’t have existed without the MacBook Pro as an antecedent, and its touchscreen, like the touchscreen in any Win8 ultrabook, suffers from odd performance but more broadly from a “what’s this good for?” syndrome, whereby touch is applied to ancient desktop metaphors rather than to touch-first/touch-only ones. The Nexus Q didn’t even make it to sale. And Google Glass? Well, I think it’s mostly hype, driven by a tech press that has yet to realize that categorical disruptions like the iPhone and the iPad and even the Android OS itself are the exception rather than the rule, and are usually organic and unpredictable rather than forced like Glass is. And then there are the myriad privacy issues that Glass will only exacerbate.

Google’s current slew of Nexus hardware – the 4, 7, and 10 – are OEM products that are by and large fantastic. Perhaps they’re not ground-shaking innovations (although the Nexus 4 is arguably the first Android phone whose full experience is on par with the iPhone’s), but they’re beautiful and functional. So where does this desire for Google-branded Nexus hardware come from?

As much as it pains me to say it: Apple envy. But Google cannot easily be like Apple (this is not a normative statement, but a simple descriptive one). Apple makes its money in transparent, conventional ways: it sells products to end-users. For all of the bluster about Apple representing everything that’s closed and proprietary, Apple is straightforward when it comes to sales numbers, because that’s what Apple does: sell items to anyone who will buy them. Google, on the other hand, makes money in ways that most people on the street probably don’t understand, such as taking money from advertisers and promoters. Whereas Apple users have almost always directly paid Apple for their devices and services, someone could go about using most Google services without ever paying Google anything, instead paying hidden fees in the form of opening themselves up to advertisers and data collection.

Why does this difference matter? It means that, as currently constituted, hardware and integrated user experiences are not central to Google’s DNA, because Google doesn’t care that much about the end-user. The end-user is not Google’s customer; the advertiser is. This could change, sure. But I doubt it will change that soon, given that Google has gone all-in on making top-shelf iOS apps in order to monetize (via ads and data collection) what it must realize is the much more monetizable iOS user base. Google just wants its services (Maps, Gmail, YouTube, etc.) to be used by as many people on as many platforms as possible. Accordingly, it doesn’t have any existential drive or need to create a completely vertically integrated experience like Apple has done. Even when it has tried, such as with the Chromebook Pixel, the result is still a low-selling niche device whose capabilities likely won’t please the same broad range of persons who are sated by any iOS/OS X device.

The presumably weak sales numbers for the Nexus 10 in particular reinforce all of these points. Google is more than happy to use Chrome, or Maps, or Gmail to create trojan horses on other platforms so that it can keep its ad money flowing in, so why does it have to focus on device manufacturing, design, and sale? If it wanted to make real blockbusters that pushed the envelope for design and innovation, it would have to change its fundamental corporate DNA, and I just don’t see that happening for a while yet, if ever.

The tone-deafness of Glass and Sergey Brin’s justification for it are exhibit one in how far Google has to go on the hardware front. Or, just look at Microsoft: it, too, is struggling to get into the hardware business, because the Microsoft of late is a company that makes money not so much from selling to end-users as from selling to businesses and OEMs. Since Apple cares almost exclusively about end-users, it still occupies a position in hardware that both Microsoft and Google will struggle to duplicate.

-The ScreenGrab Team

Why I Don’t Care About Google Glass

Short version: this photo

Long version: For a field that so lionizes technical chops and scientific knowledge, tech is oddly fascinated with fantasy. The geekery of Google’s Project Glass and its computer-on-face ethos is perhaps the most obvious evidence for this phenomenon, but one can grasp it nearly any time that someone references the technology from Star Trek or Star Wars (or Blade Runner, or a cyborg movie) as an aspirational endpoint, or describes something as “the future.”

By “the future,” commentators usually mean “a reality corresponding to some writer or creative artist’s widely disseminated vision,” which shows the odd poverty of their own imagination as well as the degree to which they often underestimate the power of creative artists/humanities types to drive technological evolution. But can human ingenuity really aspire to nothing more than the realization of a particular flight of fancy? Should we congratulate ourselves for bringing to life the technology from a reality that doesn’t exist?

Maybe. I think that viewing “technology” as the product not simply of a linear progression of machinery but also of contemporaneous creative artistic visions (which don’t necessarily follow a similarly linear path) can elucidate those aspects which make devices, software, and services appealing to people. Most individuals don’t know that much (and don’t care) about specifications, and in many cases likely cannot notice a huge difference between one product generation and another. But despite this general lack of hairsplitting over spec bumps and generation-to-generation changes, people do gravitate toward general product categories while shying away from others. iPad vs Surface, or Android vs BlackBerry, are some examples. In other words, people have good sense in differentiating categories, if not technical details.

But what do some of those more attractive categories have in common? For one, they were not totally obvious when they debuted. The iPad was based on almost no market research and resurrected a category – tablet PCs – which had been abandoned by other companies marching along on their own paths of “progress,” and which completed a circle back to nigh-ancient means of human interface design and input. Android made a wonky Linux-based cellphone OS successful during an era when most computing was still done thru closed-source Windows. And the iPhone? Well “[S]ometimes you see a new innovation and it so upsets the world’s expectations, it’s such a brilliant non sequitur, that you can’t imagine the events that must have led to such an invention. You wonder what the story was,” is how one man put it.

On the other end of the spectrum, you have too-obvious devices like touch-enabled ultrabooks, the Surface line, and basically everything BlackBerry has released in the wake of the iPhone 3G shattering its reason to exist. They don’t fit into coherent categories, don’t do any single thing well, and only exist to loudly announce that they’re The Future, without doing the work necessary to qualify as such.

Google Glass is obvious. It hasn’t even been released yet and it already has its own mythology, about how it is driving (despite not being widely available) us into the era of “wearable computing” and, more importantly, stealing the mantle of innovation from Apple, who still prefers to do quaint things like wait until a product is finished and salable before thrusting it upon the public. Heads-up displays may someday be a viable product category, but this specific product – Google Glass – is going to be a flop.

Now, I’m obviously no Apple apologist, but the tech press has just gone nuts searching for any sign of weakness at Apple, such that they’re willing to drape Samsung’s specs-loaded, capable but boring phones with the mantle of “innovation” and, now, they’re eager to deem Glass the next phase in computing. It is one of the biggest beneficiaries of the anti-Apple wave, as well as a great litmus test of just how nuts said wave has become: “look! this unreleased product is already disrupting the iPhone!”

I agree with Guy English that wearable computing, for all the presumptive nods it gets in the tech media, is hardly a sure thing and possibly something that just won’t strike a chord with normals who don’t want to become cyborgs. As with the way-overblown demise of Google Reader, the tech press often forgets that it occupies a geeky echo-chamber fed by sites like The Verge and Reddit, in which reactions to things like the end-of-life of an RSS client and the impending release of a cyborg hat have much different currency and urgency than they do with the population at large. What I’m saying is: Google Glass is not a consumer product for the average consumer.

It’s perfect for the geek loner/showoff. Accordingly, it has about the degree of decorum and respect for others’ privacy that one might expect from the CEO of Uber, a similarly “futuristic” product, or from Glass-happy Mark Zuckerberg, who will surely bring Facebook’s exhaustive, intrusive status updates to the device. OK, OK: some point out that we used to be afraid of how cellphone cameras would end privacy and decorum, too. But most cellphones aren’t made by advertising companies that offer lots of “free” services in exchange for data collection, and that also run the 2nd-largest social network in the West. How easy will it be for a secretly captured Glass photo/film to “accidentally” make its way onto YouTube or G+?

Silver lining: Google Glass, to the extent that anyone uses it, will team up with services like Facebook Home to accelerate social-network fatigue. There will be no escape from carrying your friends list and wall posts everywhere, to the extent that reality itself may end up a sadder place. The current attitude, often described as solutionism, sees Google Glass as a way to “fix” apparent issues like smartphones not being immersive enough (you can turn them off and put them in your pocket very easily, after all, and it’s obvious when you are/are not paying attention to bystanders while using one). It even seems to attempt “fixing” the issue of paying for stuff – Glass doesn’t even allow app makers to charge for their Glass services, or serve any ads.

Google Glass (the specific product/preview) isn’t “the future.” It’s just the best evidence yet of Google’s insistence on force-feeding the world questionable solutions to “problems,” like privacy and smartphones, which aren’t real problems for anyone except for iteration-/sci-fi-minded executives. If someone says something is “the future,” don’t take his word for it – after all, age-old inventions like silverware, shoes, and restaurants (to quote some of Nassim Taleb’s favorite examples) have outlasted literally thousands of years of disruption, and even CDs are still going strong. What we see as “progress” is often nothing of the sort, and Google Glass is a good reminder of that.

What is Big Data?

What is “big data”? Good question. Its name suggests that it describes a large pile of something, collected and organized by a company: numbers, autocorrect mistakes, search queries, anything.

More practically, Big Data is often the tagline for aggregative software services that do things like predict fluctuations in airline ticket prices, or track video-viewing habits on Netflix. It collects and stores all of this data for retrieval later, and then uses it to try to predict outcomes. Accordingly, phenomena like House of Cards (based on painstaking research of Netflix habits), the 2008 Wall Street meltdown, and the installation of (mostly unmonitored) video cameras in seemingly every last corner of Chicago are good examples of Big Data at work. What’s so great about any of that? To be fair, Google and especially Facebook can be regarded as leading Big Data collectors, too, but in both cases, the benefits they’ve provided are often matched by the privacy infringements, security concerns, and general Internet fatigue that both of those “free” services can cause.
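
To make the “predict outcomes” part concrete: strip away the hype, and the pattern behind those services is simple – collect a history of observations, fit a model, extrapolate. Here’s a deliberately tiny Python sketch with invented fare numbers; real systems just run the same idea over vastly more data with vastly fancier models:

```python
# Toy version of the Big Data pattern: store observations, fit a model,
# predict the next outcome. All numbers are invented for illustration.
import numpy as np

days = np.array([0, 1, 2, 3, 4, 5, 6])                 # observation days
fares = np.array([210, 215, 224, 230, 243, 255, 268])  # observed ticket prices ($)

# Fit a simple linear trend to the fare history.
slope, intercept = np.polyfit(days, fares, 1)

# "Predict" the day-7 fare by extrapolating that trend.
print("Predicted fare: $%.0f" % (slope * 7 + intercept))
```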

The next time that some TED speaker, Amazon-bestselling author, or columnist tells us that we are living in a uniquely disruptive and transformative era and that (this time, anyway) Big Data is the reason why, you should be skeptical. Big Data, as understood in the tech media, is basically a way to collect data, infringe privacy, and, in return, provide services (often “free” – be wary of anything that’s “free,” because it usually has a hidden price in the data it collects). Its Bigness is a byproduct of higher network speeds and cheaper, easier cloud storage. Other than size, its data collection targets (what we do, watch, buy, sell, etc.) are old hat, nothing that would shock even the Attic Greeks, who kept their own meticulous manual measurements (Small Data?) of diet and exercise regimens. There’s nothing new out there.

But Big Data is a Big Deal because it has no drawbacks for the parties that promote it. As Anthony Nyström pointed out recently, the idea of Big Data is so nebulous that even if it fails to deliver, the speakers and evangelists who have sold tons of books and speeches on its account can simply say that the “data is bad” or that it’s your problem. This is what happens when people are allowed to get away with generalities and are not pressed to be more concrete in their assertions. But it also highlights how flimsy the notion of “data” is, anyway. “Data-driven” and “the data” are terms that have become almost sacrosanct in the United States in particular. Elon Musk’s recent spat with the NYT over its “fake” review of the Tesla S is a good case in point. The reviewer-driver, Musk asserted, was simply lying when he said that the car had an unreliable battery that couldn’t hold a charge in cold weather, and “the data” that Musk’s company had collected from the car would shatter the reviewer’s soft nonsense. No such thing happened. If anything, Musk’s torrent of data only inflamed the he-said/he-said debate.

Look: data is not some god or force of nature. It’s man-made, and handled by humans who have to then make sense of it. If you have a bad analyst, or too much data, then the entire operation can be compromised. Would Apple have been better off collecting more data about tablets before it made the iPad, rather than simply following Steve Jobs’ gut assertion that users needed to be guided in what they wanted? It’s debatable whether more data even leads to better decisions. And even in cases where the amount of data isn’t an issue, its quality can become one, even if it seems like good data on the surface. Data about lower crime rates in certain neighborhoods could lead one to think that crime wasn’t an issue there, despite the obvious blind spot that many crimes go unreported and as such are not part of “the data.”
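
That blind spot is easy to simulate. In the toy Python sketch below (all numbers invented), two neighborhoods have identical true crime but different reporting rates, and “the data” makes one look far safer than the other:

```python
# The blind spot in "the data": reported crime understates true crime,
# and unevenly so. All numbers here are invented for illustration.
true_incidents = {"Neighborhood A": 500, "Neighborhood B": 500}
reporting_rate = {"Neighborhood A": 0.80, "Neighborhood B": 0.35}

for hood, incidents in true_incidents.items():
    reported = incidents * reporting_rate[hood]
    print("%s: %d true incidents, %.0f in the dataset" % (hood, incidents, reported))

# Same underlying reality, very different datasets -- and any analysis
# built only on the reported numbers inherits the gap.
```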

But that can be fixed, you might say – we just need better surveillance and better tools to give us better data. More technological progress, you might say (I disagree with the entire notion of “progress,” but I’ll let that slide for another time). OK: but at what cost? The same sort of nonsensical, overexcited language that drives a lot of the press about Big Data also drives the posts of many tech bloggers who advocate for rollbacks of privacy or of any notion of an unconnected world. Jeff Jarvis thinks you shouldn’t be worried about losing your privacy, since publicness makes our lives better. Nick Bilton just can’t stand it that electronic devices can’t be used during airplane takeoff, as if those few moments of not being able to refresh Gmail or Facebook were critical to the betterment of humanity.

In these cases, as with the debate about Big Data and all of its privacy entanglements, it’s not so much the content of the assertion as it is the attitude with which it is made. It rings of “I know best” and has little regard for niceties like privacy and offline existence in particular. Don’t want to be part of “the data” made by Big Data and its tools? Too bad, that aforementioned attitude would say. What’s worse, the price of this “progress” toward more data and bigger data is often hidden because so many of Big Data’s tools are “free.” To be fair, paid services like Netflix are also part of the overall Big Data dredge. But general consumer awareness of how and why their data is being collected, whether by a free or paid service, appears to be low, and that’s too bad.

Slate has already worried that Big Data could be the end of creativity. I disagree, but I’m glad to see at least some pushback on the Big Data train – it isn’t clear that Big Data, despite all of its pretenses, is giving, or can give, us what we really want or need. Big Data, I think, assumes a certain linearity in how humans operate – that we show a machine, by way of what we click or like or +1, what we truly want, and that that input can be transformed into a high-quality output, like a certain type of content. I admit to making some data-based posts myself, but if I were to make this entire blog a slave to the data it collects, it would probably look like a super-geeky version of BuzzFeed, which, while fun for a while, would preclude some of the longer or more detailed posts that provide variety and often are surprise hits (at least from my modest perspective). So I’m sticking with just a modest, consciously restrained dose of data for now, something I think that those aforementioned Greeks would approve of.