“Technology” is a strange word. Its Greek root, techné, means “art” or “craft,” and the word was scarce in English until at least the 20th century. Its rise in popular discourse during the Second Industrial Revolution, the movement that produced inventions such as the phonograph, makes sense. However, what’s usually glossed over is that “technology,” as a word, is filler, distracting us from the reshaping of society from above.
What does it even mean to say that “technology changed everything” or to assign so much agency to vague, well, technological concepts such as “big data” or “the Internet of Things?” The vast discourse on technology is the best possible example of what Georg Lukács called “reification,” the act of instilling human activities with the characteristics of things, creating what Lukács himself called “a ‘phantom-objectivity,’ an autonomy that seems so strictly rational and all-embracing as to conceal every trace of its fundamental nature: the relationship between people.”
When I see “technology” in a sentence, I move pretty quickly past it and don’t think much about it. If I do, though, it’s like I rounded a corner and saw a forked road leading into three turnabouts – the generality is crushing. Are we talking strictly about the actions of hardware, software, and networks? Are these actions autonomous? What if we just assigned all of these machinations to the category of “machinery and artisanal crafts” and spoke of the great, world-changing, liberating power of “powerful industrial machinery”? It doesn’t have the same ring to it, does it?
Words and classes
The history of words to talk about all of the basic concepts that undergird “tech writing” – the category that would seemingly include everyone from TechCrunch to PC World to Daring Fireball to this blog – is the history of taking words that belonged to the blue-collar working classes and reassigning them to the white-collar management classes. Take “software,” for instance. It derives from “hardware,” which once referred primarily to small metal goods. As early as the 18th century, one could talk about a “hardware store” as a place to buy metals.
Something similar, on a much broader scale, has gone on with the term “Internet.” As I explained in my entry on “Space Quest 6: The Spinal Frontier,” the entire discourse about “the Internet” is a retroactive reorganization of many separate traditions, spanning hardware, software, and networking, that once went by disparate names. Even the act of using “the Internet” was once similarly variable: it could be called “going into cyberspace” or “using virtual reality” well through the 1990s. Grouping everything under the banner of the “Internet” has had the desired effect of making changes affecting fields as diverse as education (via online learning) and transportation (via services like Lyft and Uber) seem inevitable.
It is reification writ large, a tight origin story compiled after the fact to create that very “phantom-objectivity” that Lukács talked about. Likewise, “technology” itself, as a word, is a mini history of how mundane physical activities – building computers, setting up assembly lines – were reimagined to be on par with the high arts of antiquity. Leo Marx wrote, in his paper “Technology: The Emergence of a Hazardous Concept”:
“Whereas the term mechanic (or industrial, or practical) arts calls to mind men with soiled hands tinkering at workbenches, technology conjures clean, well-educated, white male technicians in control booths watching dials, instrument panels, or computer monitors. Whereas the mechanic arts belong to the mundane world of work, physicality, and practicality – of humdrum handicrafts and artisanal skills – technology belongs on the higher social and intellectual plane of book learning, scientific research, and the university. This dispassionate word, with its synthetic patina, its lack of a physical or sensory referent, its aura of sanitized, bloodless – indeed, disembodied – cerebration and precision, has eased the induction of what had been the mechanic arts – now practiced by engineers – into the precincts of the finer arts and higher learning.”
Making it, writing it
I love this passage since it captures so much of how the rise of technology firms has been about word games and the institution of engineers and venture capitalists as, crucially, creators, and heirs to the traditions of straight male-dominated industry. Debbie Chachra did a great job of outlining the real shape of the Maker movement in a piece for The Atlantic, arguing that “artifacts” – anything physical that could be sold for gain or accrue some sort of monetary value, seemingly on its own – were more important than people in today’s economic systems, especially people who performed traditionally female tasks like educating or caregiving.
Tech writing, vague as it is, exists in this uncomfortable context in which anything not associated with coding or anything “technical” is deemed less important – to businesses, to shareholders, to whoever is important for now but may be forgotten tomorrow – than what is more easily viewed (I mean this literally) as work that came from a predictable process (software from coding is the best example). Writers in this field have to continually prop up a huge concept – technology – that carries the baggage of decades of trying to be elevated to the status of fine arts like… good writing.
Talking about the agency of concepts is common, and tech writers – or anyone dabbling in writing about technology – have to play so many ridiculous games to cater to readers who long ago became lost in the reification of “technology” as an unstoppable force. Take this sentence, which I recently found via Justin Singer’s Tumblr:
“Big Dating unbundles monogamy and sex. It offers to maximize episodes of intimacy while minimizing the risk of rejection or FOMO [fear of missing out].”
Bleh. This passage is easy to make fun of, but its structure is so indicative of tech writing at large. There’s the capitalized concept (“Big Dating”) that is acting, via a buzzwordy verb (“unbundling” – what was the “bundle” in the first place? but “disrupt” is still the all-time champion in this vein) on The World As A Whole. Then there’s the shareholder language (“maximize”/”minimize”/”risk”) that speaks to the neoliberal economic ideas – most of them questionable – that have been the intellectual lifeblood of the tech industry as well as the governments that feebly regulate it (the weakening of political will is one reason Marx saw technology as a “hazardous” concept).
Aristotle and wrap-up
When I dipped my toes into Aristotle’s “On Interpretation” earlier, I talked about how he defined nouns as “sounds.” I then wondered if so much bad writing was the result of trying to write things that would sound absurd in speech (i.e., as sounds).
Tech writing in particular has this sort of not-real quality to it that makes it sound so silly when read aloud. It’s always trying to reify and create vast, unstoppable forces that aren’t even physically perceptible. Writing about “the Internet of Things” or “Big Dating” is basically to dress up everyday, unremarkable concepts like networked devices and dating services in dramatic language.
You may as well have someone describe a sandstorm or flood to you as if it were the result of a phantom-objective, all-powerful, godlike force. Wait, that’s, like, 99 percent of religion right there. Well, when writing about “technology,” you’re always writing someone else’s scriptures, with all the opacity and word-gaming that that entails – who wants to read most of that?
I’m not sure if I’m a “journalist.” I interview people, do research, and supply regularly updated sites with news stories. But I do it all from an apartment, with only the occasional trip to a minimalist office. Do these details make the “journalist” label unfit?
“Journalism” is a word often in company with “disruption,” one of the most overused and annoying terms to enter the vernacular. We “journalists” are framed as under siege from the Internet, unable to adjust to free Web browsers eating into newspaper revenue.
We’re in such bad shape, apparently, that we pretend disruption doesn’t exist, according to Ben Thompson, in his misinterpretation of Jill Lepore’s virtuosic New Yorker essay. Lepore was not discounting the idea of change, but providing a history of how mankind has explained it, finding issues with Clayton Christensen’s methodology as well as the historically limited range and shelf life of “disruption,” a concept that could only have arisen from the era of 9/11 and cheap Asian manufacturing.
“When all you have is a hammer, everything etc.” – although since Silicon Valley is too digital for something as analog and working-class as a hammer, let’s say that when all you have is no humanistic background and a strong affinity for the violent terminology of “disruption,” everything looks like it is in danger. If I am a “journalist,” am I in trouble?
Again, I don’t know, especially since the label may not even be apt. Maybe I am a “journalist” who has simply evolved (before “disruption,” evolution was nearly as ubiquitous a term for explaining everything, as Lepore pointed out) to use new tools. Why not take this optimistic, even progressive (another preeminent etiology of yesteryear) view, rather than the insecure, cynical stance of “disruption”?
Journalists don’t necessarily serve the interests of the VCs and technical folk that have made “disruption” de rigueur (well, unless they work for TechCrunch). It shouldn’t be surprising that these writers, especially ones like Lepore who don’t toe the line, are construed as not getting “disruption” or, worse, being disrupted. Any piece of confirmation bias – declining ad revenue, the agony of paywalls – then suffices for proving “disruption.”
At the same time, how often do you hear of these professions being disrupted?
- VC – investing is often guesswork, so why not automate it and hook it into something like IBM Watson?
- Programmer – a software engineer is more like a car mechanic than a doctor. Why not move toward less tinkering and customization (as has happened with cars like the Tesla Model S) and make such human involvement obsolete?
- CEO – how many CEOs have been outsourced to China or automated because of “disruptive” forces? I’m not talking about firing one person to hire another, but about eliminating an incredibly wasteful position that is compensated at a crazy ratio to the rest of the organization.
“Disruption,” it seems, is weirdly selective, with class-related and political biases (surprise). As a “journalist” or something close to it, rather than a VP of engineering, I’m not surprised that I’m an actor in other people’s dramas about “disruption” and its impact on everything.
Writers should worry less about nonwriters’ opinions and naysaying – “evolve” and “progress,” don’t “disrupt” or “be disrupted” (whatever that means). I wrote this whole entry on the WordPress app for Android, but I see it as just another tool rather than the ancestor of a robot waiting to take my job.
In the city I live in, Chicago, the owners of the historic Congress Theater came to an agreement with the city banning EDM from the venue. All acts that play there must now use “traditional instruments” during their shows.
Like genre skeptics of the past who have questioned the value of unfamiliar music and derided its creators as unauthentic charlatans, Chicago’s powers that be have provided an opportunity to think about authenticity in music. Why do critics resort to strong language about reality itself – “real,” “true,” “only” – when discussing low-stakes topics such as whether Deadmau5 is a working-class DJ or whether a heavy metal band is allowed to use synthesizers?
It’s like the 2000 U.S. presidential election all over again – are musicians persons with whom listeners would enjoy having a beer, yet, at the same time, do these celebrities exude sufficient seriousness to be accepted into The Canon (if such a thing even exists in EDM; it’s sort of a rockist construct)? Since music criticism is so indeterminate, the only methodology for vetting ascendant musical acts is to ransack their music for tell-tale signs of a laborious creative process (hence, “traditional instruments”) or a relation to a specific social class (Born in the U.S.A. and Parklife are good examples from the rock album annals).
This critical approach toward everything from jazz to EDM has nudged artists to prove their worth – and their down-home (read: white and probably rural) temperaments. Even synth-pop bands have proclaimed that they won’t succumb to the infinite DIY possibilities afforded by iOS music apps and instead soldier on with real synthesizers. Likewise, the unexplainable influence of Mumford & Sons made folksiness an important litmus test even for Group Therapy-grade acts for a while there. Above & Beyond themselves did acoustic shows last year and released an acoustic artist album this year.
Genres and Society
Genres aren’t static: their paths are carved not only by shifts in consumer style and taste, but also by social and demographic change. Jazz was incubated during the urbanized, prosperous 1920s in America, while rock and roll became the logical musical extension of 1950s urban sprawl – the sound of America’s white population expropriating blues and jazz, previously the specialties of the country’s extreme rural and urban poles, and exporting them to the suburbs.
Just as societal change can easily prompt refuge in defensive terms such as “real” and “traditional” to bemoan the loss of an ideal that may have never existed, musical evolution brings out from the woodwork the authenticity scolds who decry new stars for, at best, violating good taste and, at worst, endangering everyone’s sanity and livelihoods. The Atlantic had an excellent piece on the rise of EDM (electronic dance music) as the new rock and roll, and in doing so, it nicely summarized the dark critical history of new genres being born (emphasis mine):
“The most obvious point of comparison…is how this new movement has been received by the majority of people who consider themselves possessed of good taste. In the 1920s, jazz was preached against from pulpits and editorial pages as the devil’s music, its crazy rhythms jangling the nerves, speeding the degeneracy of American civilization, and responsible in part for the ongoing failure of the temperance movement. In the 1950s, rock and roll was sneered at as jungle music, provoking lascivious displays unfit for the Ed Sullivan Show as well as responsible for juvenile delinquency and reefer madness. In the 1980s and ’90s, rap music was censured as violent thuggery, non-music…[B]ut most of the current non-parental criticisms of EDM are made in purely aesthetic or culturally derogatory terms: Dismissive, class-based coinages…are employed to wall off “real” electronic music as the preserve of the specialists.”
Perhaps one should pause to note the surreality of wide-bore, public discussions of “realness” within electronica, since electronica itself was once pilloried, or at least dismissed, by artists and critics alike as something too mechanical, fake, and European to be acceptable. Up until the release of their blockbuster The Game (1980), Queen emblazoned each of their 1970s LPs with a disclaimer that no synthesizers had been used on the record. The White Stripes reprised this school of thought in the liner notes to Elephant (2003), which shouted, to no one in particular, that no “computers” had been used to make the record.
Computerized and Real Music
“Computer” really is the key term here, more so even than “synthesizer” or any more specific descriptor. Early electronica, especially the West German variety of Kraftwerk and Klaus Schulze and the American creations of Silver Apples and Cromagnon, announced itself by its reliance on obviously strange – non-“traditional,” certainly – instrumentation that gave proceedings a computerized, alien sound, whether synths were in play or not. Sometimes the entire arrangement, rather than the individual sounds of a synth, made all the difference in distinguishing a song or album from pre-electronic music. For example, on Autobahn (1974), Kraftwerk juxtaposed traditional violins and guitars with sampled car sounds and synths to demonstrate the possibilities inherent in new instruments and methodologies. Only a year later, however, Kraftwerk had gone completely computerized on Radio-Activity (1975), and then issued an entire concept album that ruminated on the computer’s use cases in government, mathematics, and music itself on Computer World (1981), right on the eve of the widespread adoption of digital recording and playback technology that attended the CD format’s birth in 1982.
From The Man-Machine (1978) onward, Kraftwerk also adopted the mannerisms of robots, seemingly forced into their new mechanized existence by the growing centrality of computerized and automated processes in music creation. What had begun as the use of a simple synthesizer had progressed into the use of loops, drum machines, and more sophisticated recording techniques. It became hard to know where the human input (initially assumed to be composition and performance) ended and computer input (likewise assumed to be a means of enhancement and refinement) began. It was no coincidence that Kraftwerk waited until 2008 to issue a definitive remaster of their entire catalogue, as Ralf Hütter in particular became obsessed with getting the sound just right in light of newly available digital editing and production tools.
More so than any other outfit, Kraftwerk embodied how the issue of realness affects musical pioneers. Their posturing as robots was an ironic take on the conundrum that electronic musicians face in the face of both authenticity-obsessed critics and the persistent, decades-long dominance of rock and roll and indie rock within the music press. The fixation of publications such as Rolling Stone with lists of the greatest singers and guitarists, along with the enormous critical reputation afforded to indie musicians, keeps alive the question of how much realness factors into aesthetic evaluation. It appears that process in particular – the steps by which the music was created, and how discernible said process is to the listener – is a prime determinant of realness. When in doubt, we can consult Urban Dictionary (bolded emphasis mine) on this issue:
“real music includes anything that goes through what is called a pure process towards becoming music that sounds nice and does not bore the listner [sic] involves singing and not rapping. Usually involves: guitar, bass, drum.”
Via sarcasm, Urban Dictionary summarizes 60 years of rock criticism. It excavates the fading cultural currency of rock music by pinging its most basic and obvious traits – the guitar-bass-drums trio setup – and invests them with the unique power to produce “real” music, a label that early 1950s critics might have reserved exclusively for less guitar-based music, like jazz.
Books, EDM and Realness
Similar struggles for a definition of “the real” exist in other cultural fields, such as in the case of Jonathan Franzen complaining that ebooks don’t have the same permanence as the printed word. There one finds characteristic appeals to soft classism (“real readers”) and authenticity (“literature-crazed”). This broad struggle over realness in culture extends to EDM, which is currently the most prominent form of electronic music, and accordingly it is fertile ground for producers in heavy-rotation pop and hip-hop who are seeking to cross-pollinate their tracks with club flair. This piece, however, focuses more on how the authenticity debate affects EDM disc jockeys (DJs), who are the main EDM performers and composers. The DJ abbreviation itself is accidentally telling: it has all but severed the musicians’ ties to real physical discs and become a word in its own right, even if many DJs do go on using real discs (usually vinyl LPs) and their corresponding playback equipment, rather than a completely digital setup.
EDM is a conveniently broad umbrella under which to shelter the diverse genres of house, trance, techno, acid, dubstep, and what used to be dismissively called IDM (intelligent dance music). House music arose in late-1980s Chicago, while trance was at least initially a much more European phenomenon, coming to the fore in the early 1990s with The Age of Love’s titular masterpiece. The late 1980s and early 1990s were a time of rapid transition in how music was recorded. Although the editing software Pro Tools had not yet become mainstream, the music-making process was becoming increasingly automated, with hip-hop as the most brazen exponent of music that could float across a sea of carefully curated samples. Whether the samples were the hyper-specific record collection allusions of the Beastie Boys’ Paul’s Boutique (1989), or instead the vaguer synth-bass-drums issuances of house, making an album became as much about one’s abilities to curate an aural collage – and make as apparent as possible one’s diverse yet classical tastes – as about one’s abilities to perform with the human verve and virtuosity associated with jazz, classical, and rock; the idea of a “solo” doesn’t really exist in EDM.
Accordingly, the aesthetic critic would not be raising the critical stakes by criticizing the pitch of a house diva or other EDM vocalist, or by bemoaning the technical repetitiveness of a jam. The latter term is imprecise, but it may suffice if only to construe EDM as a hipper, more urban update on the rock jam, that is, a long-form construction (most EDM albums would qualify as “double albums” in the rock sense) that evolves in often subtle ways and which aims to capture, comment on, and finally re-imagine a highly specific setting, whether Ibiza or the Renaissance UK club. Terre Thaemlitz has stated that house music is “hyperspecific” and meant to convey a particular kind of post-1980s angst. Since EDM in this classical sense is super-local, like politics, then the onus for accurate reproduction and commentary falls on the DJ, whose mixing skills are arguably of no use if he doesn’t have an authentic relation with a particular location and audience. Being a DJ is really like being a politician or a real estate agent.
DJs: Just like Politicians
Like politicians, DJs have come under increasing pressure in the last decade to present themselves as authentic, “real” persons who talk, tweet, and perform just like their fans. The Verge once commented on the celebrity of the Canadian DJ Deadmau5 (who is at the center of the current storm over DJ authenticity; emphasis mine):
“As a human, Joel Zimmerman epitomizes the “celebs: they’re just like us!” ethos. Fans are treated to rambling, very-unedited, “lol” and emoticon-laced posts on Facebook and Twitter. His face is an angular vessel of pure emotion, nearly always dominated by an ear-to-ear grin that communicates just as much as the words that come out of it, another testament to context bringing more to the table than words. His body, a lanky vessel clad in the t-shirts, baggy pants, and ballcaps of the masses, is covered in nerdy tattoos (Space Invader, Zelda hearts, Cthulhu, Mario “Boo” ghost); he needn’t do more than walk into a room to tell you what his deal is. But when he transforms into deadmau5, his presentation is stripped of nearly all words.”
So Deadmau5 is someone to whom his fans can relate. The Verge even goes on to characterize him as a latter-day arena rocker, one who has replaced guitar pyrotechnics and animalistic rock star rituals with blinking lights and repetition. Even in a non-critical assessment of Deadmau5, the issue is framed within the context of rock music.
In light of these portrayals of Deadmau5’s performative style, it becomes easy to see him as the hipster or unusually tech-savvy guy DJing a fraternity party or rave. While he certainly imports the obtuse cinematic sweep and costuming of Daft Punk, as part of a tradition harking back to Kraftwerk’s own aforementioned transformation, his wordlessly curated sets nevertheless have an earthy, populist air that nicely coincides with the DIY stylings of his album titles. The populism – the carefully crafted facade of “realness” – succeeds in part because of how Deadmau5 obscures his source material, although it is worth noting that his protégé, Skrillex, courts the authenticity wonks by appealing to older, mostly critically unassailable genres like reggae, in the same way that drum n’ bass once leaned critically on jazz and ragga. The New York Times described his technique as reductionist – many of the familiar parts of dance music (can we call it “classic dance” or “traditional dance” now?) are stripped away to highlight a few flashy traits, sort of like a guitar solo cutting through the blues and jazz changes of early rock but never completely obscuring the reputable source material.
Deadmau5 makes EDM that is agnostic of any particular demographic, a strategy which would seem to run into trouble if the previous argument about house’s hyperspecific contextualism is accurate. But the opportunity to predictably decry Deadmau5 as “not a real” DJ did not fully present itself until he said that most DJs show up to their concerts and, amid the booming noises and lights, simply press play. He likened EDM (by name) to a “cruise ship” meant to convey atmosphere for fans and celebrity bandwagoners alike, which, while partially an astute observation in its probing of the genre’s roots in partylike locales like smoky clubs or laser-emblazoned dance floors, was nevertheless surprisingly brutal, even savage, in its assessment of an increasingly intellectualized, gentrified genre and its auteurs. The backlash was swift, with David Guetta in particular hitting back at Deadmau5, while other parts of the DJ community took the opportunity to point out that the instruments and live processes available simply were not up to snuff for recreating the complex introverted processes of in-studio EDM production.
Automation and Labor
To the latter point, the invention of newer, more efficient instruments has allowed entire genres to develop, mature, and be performed throughout history. The piano’s improvement upon the harpsichord is a particularly significant case study. Perhaps EDM’s DJs have indeed not yet succeeded in discovering easily reproduced ways to create studio-quality live performances. But even if they had, would it have changed the tribalism and infighting over “realness” in EDM? There were plenty of criticisms of Deadmau5 that cited the “hardworking” ordinary DJs (not unlike a political ad, really) who, unlike Deadmau5, specialized in live improvisation, singing, or other real and true-to-life processes that demonstrate a tangible, almost bodily link between the performer and the music being performed. This is one of the more strident examples of one subgroup’s idea of “process” dictating for everyone what does and doesn’t count as “real,” and unsurprisingly, Deadmau5 himself has characterized studio recordings as “what counts.”
In EDM, musicians may well have reached a level of automation and in-studio complexity that is difficult to reproduce live, but this conundrum is a distraction, a too-convenient frame in which to confine the more nebulous issue of how “realness” is redefined and achieved by different classes. EDM today makes a strange comparison with rock music in 1966–67, when The Beatles retired from touring altogether to focus on studio experimentation that would have been both laborious to reproduce live and unpalatable to concert audiences. This tack led to works (now) regarded as classics, like Sgt. Pepper’s Lonely Hearts Club Band, but it is equally notable in how it shirked populism and visible, transparent process (like the live playing of instruments on stage) for opaque in-studio control.
Contemporary DJing, and EDM at large, remains strongly invested in placating crowds and creating atmosphere in that pre-Sgt. Pepper way, but they achieve this populism via automation rather than human labor, hence the aforementioned “just press play” sets. To appreciate the different tacks that rock and EDM have taken, simply recall the comparison in The Verge of Deadmau5 to arena rockers. In the 1970s, prominent arena rockers Electric Light Orchestra, known for the complexity of their studio works, were beset by accusations of lip-syncing and usage of prerecorded tracks. Did this faux pas make ELO any less “real” than synthesizer disavowers like Queen?
The Verge characterizes Deadmau5 as someone who was ordinary and just like his fans, a portrait at odds with his metapersonality as a purveyor of prerecorded tracks. In a dance club full of physically active persons, Deadmau5 may be the least active, as he simply goes through the motions as the music plays. But isn’t that precisely what everyone else is doing, both in the club and out of it? Doesn’t the usage of common, commoditized items like the laptop, coupled with Deadmau5’s freedom to dance (like anyone else) while his prerecorded set streams over the speakers, make him just another one of his fans? One may struggle to determine if his routine is “real” or even what school of “realness” he would be validating if it were, but struggling with the “realness” debate is not an end in itself. Rather, it is usually the sign of a genre that still requires additional norms from musicians, critics, and listeners alike in order to have its critical profile enhanced, its sound refined, and its “realness” no longer questioned in light of the ensuing maturity.
“Mobile-first” and “mobile-only” are almost clichés in current app design. But overuse of “mobile” language aside, iOS and Android users have definitely benefited from this new focus from developers on producing software that exploits and respects the unique capabilities of smaller devices. Maybe even too much so: I recently combed through my app drawer and felt overwhelmed by the nearly 100 apps – most of them both beautifully designed and easy to use – in it. My first instinct was to simply cut myself off from many of the services provided by these apps, so as to simplify my experience and reduce app count. I initially thought about completely ditching RSS reading and some social networking, for example.
Ultimately, I opted to do something different: I redistributed my workflow across my mobile device (a Nexus 4), my Mac, and my wifi-enabled Wii U. Although many of the apps and services I was using had versions available for both Android and OS X (and Web), I decided to restrict many of them to only one of my devices and ignore their other versions. So for example, I kept the mobile Google+ Hangouts app, but eschewed its desktop Web/Gmail version, and likewise kept my desktop RSS reader (Reeder) while ditching my previous mobile RSS clients.
The most difficult, yet most rewarding, part of this process was determining which apps and services I could remove from my phone and use only on my Mac or Wii U. Amid the swirl of “mobile-only”/“mobile-first” lingo, I reflexively felt that I was selling myself short by offloading many of the excellent apps and services I used onto my relatively old-fashioned Mac and my dainty Wii U, but the experience has been liberating. I have improved my phone’s battery life, reduced clutter in its launcher, and restored some peace of mind: there are fewer things to blankly stare at and anxiously check on my phone while on the train, at the very least.
More importantly, I now have a firmer sense of what I want each device, with its accompanying apps and services, to do. The productivity bump and happiness that I have experienced have also made me realize, finally, why Windows 8 has flopped. Trying to treat all devices the same and have them run the same apps is a recipe for poor user experience and too many duplicate services. It becomes more difficult to know what any given device excels at, or what a user should focus on when using it. If focus is truly saying no to a thousand things, then it’s important to say “no” to certain apps or services on certain platforms. Steve Jobs famously said “no” to Flash on iOS, but one doesn’t even have to be that wonky or technical when creating workflow boundaries and segmentation in his/her own life: I’ve said “no” to Web browsing on my Wii U and to Netflix on my phone, for example.
With this move toward device segmentation and focus in mind, I’ll finally delve into the tasks that I now do only on desktop or in the living room, so as to relieve some of the strain and overload from my mobile device. I perform these tasks using only my Mac or my Wii U, and I do not use their corresponding apps or services on my Nexus 4.
RSS can be tricky: you probably shouldn’t subscribe to any frequently updated sites, since they will overwhelm your feed and leave you with a “1000+ unread” notification that makes combing through the list a chore. Rather, sites that push out an update once or twice per day (or every other day) are ideal material for RSS. Rewarding RSS reading requires you to have specialized taste borne out of general desktop Web browsing (see below), as well as a tinkerer’s mindset for adding and subtracting feeds. It’s a workflow meant for a desktop.
Granted, there are some good RSS clients for Android: Press and Minimal Reader Pro spring to mind. However, neither is great at managing feeds, due to their minimalism and current reliance on the soon-to-be-extinct Google Reader. Plus, I’ve yet to find an Android rival for Reeder, which I use on my Mac and which is also available for iOS. The time-shifted nature of RSS also makes it something that I often only get around to once I’m back home, not working, and sitting down, with Reeder in front of me, and so I forgo using a mobile client most of the time. This may change if and when RSS undergoes its needed post-Google Reader facelift.
The thrill of wide-open desktop browsing doesn’t exist on mobile. Maybe it’s because most mobile sites are bastardizations of their desktop forebears, or because screen size is a limiting factor. Moreover, most mobile apps are still much better and much faster than their equivalent websites. I haven’t disabled Chrome on my phone, but I seldom use it unless another app directs me there. Instead, I prefer news aggregators like Flipboard and Google Currents, or strong native apps like The Verge, Mokriya Craigslist, and Reddit is Fun Golden Platinum.
Of the trio of Facebook, LinkedIn, and Tumblr, only Tumblr has a first-rate Android app in terms of aesthetics and friendliness to battery life. It’s easy for me to see why I don’t like using any of them on mobile: they all began as desktop websites, and then had to be downsized into standalone apps. Alongside these aesthetic and functional quibbles with website-to-app transitions, I also consciously limit my Facebook and LinkedIn intake by only checking them on desktop. In the case of Tumblr, I may create content for it on my phone, but I usually save it to Google Drive (if only to back it up, which I’ll always end up doing one way or another) and then finish formatting and editing it on my desktop before posting.
Twitter is a different story, due to its hyper-concise format; I’ll talk about it in the next entry. Google+ – which is almost completely ignorable as a standalone site on desktop – is also much better on mobile, where it performs useful background functions like photo backup. Mobile-first networks like Instagram and Vine are obvious exclusions as well.
Spotify is a unique case. Its Android app is certainly functional, but it is unstable and poor at search: it is difficult to get a fully populated list of search results, and in many cases you must hit the back button and re-key the search. Its Mac app is much better – the gobs of menus and lengthy lists are right at home on the desktop. For listening to music on my Nexus 4, I use Google Play Music, where I have a large, precisely categorized personal collection accessible via a clean UI, and the terrific holo-styled Pocket Casts, which I use to play weekly trance podcasts from Above & Beyond and Armin van Buuren, among others.
I’m not a huge fan of Netflix on tablets or large-screen phones. I do probably 99% of my Netflix viewing on an HDTV connected to my Wii U, with the remainder done on my Mac. I can see the appeal of viewing Netflix while lying in bed, so I don’t rule out its mobile possibilities altogether. However, most of the video watching I do on mobile is via YouTube, Google Play Movies, or my own movie collection as played by MX Player Pro.
Skype is a good desktop messenger and video calling service, but it may as well be DOA on Android, especially stock Android. Google+ Hangouts (the successor to Google Talk) is much simpler, since it requires only a Google+ account and has a dead-simple video calling/messaging interface. Plus, Skype for Android is unfortunately a battery-drainer, in my experience. That said, Skype’s shortcomings on mobile are balanced by its strengths on desktop: its native Mac app is still an appealing alternative to having to open up Google+ in Safari/Chrome.
In part two, I shall look at the tasks that I now perform exclusively on mobile, as well as the select group of apps and services that I use on both desktop and mobile.
It has been alleged that we are living in a golden age for creative artists. The argument goes: Kickstarter and its crowdfunding ilk have made it ever-easier for artists to obtain funding for their projects, which in other eras would have been shoved aside by various gatekeepers of taste and cost-control. This apparent sea-change has enabled niche hardware projects like the Pebble smartwatch to be funded, manufactured, and distributed, and it has also abetted the revival of the ancient adventure game genre – a genre that enjoyed a golden age back when software came in boxes, boxes that specified that the floppy-based game would only work on “color Macs.”
Of course, both projects benefit from the hyper-specific demographics that would be aware of their existence in the first place: people who use Kickstarter AND who want email on their watches AND/OR who were old enough/curious enough to have played classics like Quest for Glory IV. That’s a small, and dare I say élite, demographic. This isn’t so much democracy as it is aristocracy or oligarchy (depending on perspective and your interpretation of Greek roots) – it is a system that rewards individuals and organizations who are already tied into a specific demographic (as above), first or early movers, or independently famous. It’s the same set of reasons that explains why there are so few truly grassroots YouTube celebrities who have become wealthy.
Meanwhile, the proliferation of music services (driven by “the internet,” natch) like Spotify, Rdio, and Google’s new Google Play Music All Access; the consolidation of the book world into Amazon’s ereader-plus-distribution empire (in which objects are not sold but licensed, and in which alternative currencies likely degrade the value of real money over time); and the centralization of “the internet”’s apparently meager knowledge into the anonymity of Wikipedia have, as referenced in my previous entry, made it such that artists are given less reward for their work or contributions.
Coherent statements like albums or books once had the weight of momentous events: the object (and the importance of its physicality, as a disc or paperback or whatever, can’t be overstated) had an unambiguous provenance, it was something that belonged to the creator and to which others could only have access via payment or proximity (i.e., going to a concert or hearing it via radio), and, most importantly, it wasn’t consolidated and decontextualized by being forcibly folded into a stream of similar works.
The decontextualization of albums, for example, within the vast sea of Spotify is less an indictment of information overload than it is an indictment of the increasingly screwy economics of the music business. Record labels have had a rough start to the century, having seen unbelievably profitable CD sales dry up in the face of the advent of iTunes, as well as the “open” access provided by Napster and its pirate descendants. But now they seem to be clawing back, slowly: they are the licensing gatekeepers for every streaming service (Spotify, Rdio, All Access), and those services all pay artists ever less money, meaning that the primary benefit of music being accessed (even randomly) no longer goes to the artist, but to the stream provider and to the label. As usual, the “progress” provided by the ease-of-use of these services disguises the rather harsh economic power-grabs by the persons who made them possible in the first place. “Progress,” despite all of its connotations, has no clear moral dimension.
So against this strange economic backdrop, we see odd artifacts like this:
An album poster with a label’s name (Columbia, in this case) so prominently featured feels like something from a different era: the 1970s, perhaps. The album it advertises – Daft Punk’s Random Access Memories (hereafter “RAM”) – is already one of the biggest musical and cultural phenomena of the year, even prior to its proper release here in America next Tuesday (May 21). But the pizzazz and conscious rusticity of its marketing is hardly the sign of a sea-change in how the majority of artists either make or sell their music; rather, it’s a bright emblem of how, here at the end of the rainbow of technological “progress” (the democratization of music-making via software and of music-consumption via filesharing and broadband networking), we can see capitalistic inequality writ large (I guess I could use a “pot of gold” metaphor, which would fit with the rainbow theme, but we’ll just leave that alone). In other words, ironically, only artists as big and fiscally secure as Daft Punk could afford to indulge the older, more democratic, and more label-centric model (from the 1990s and earlier) of music’s economics (physical units sold for higher prices) that is under siege from those labels and technologists.
The New York Times summarized the current situation as follows:
“Of course, the intangible qualities of feel and vibe exalted by Daft Punk are out of reach for most of today’s young music makers, whose do-it-yourself dance tracks rely on the technology that propelled Daft Punk’s career in the ‘90s. A kid in a bedroom with a laptop and software can make records that sound like a million bucks. Making music the way Daft Punk has for “Random Access Memories” actually requires a million bucks, or more.”
While it’s arguably true that DIY synths and setups have allowed thousands of persons to make high-quality dubstep and house music during the 2000s and 2010s, Daft Punk themselves were never particularly reliant on “technology” (here assigned the agency that I recently ruminated on and rejected) or a particular workmanlike ethos. Even their early work attracted much attention from labels, and Virgin Records ended up bankrolling their debut, Homework. The mid-1990s were a time of label largesse, when much money was spent to market expensive, elaborate records in genres that paradoxically both demanded and wouldn’t have existed as we knew them without such generosity – I’m thinking mainly of the widescreen drama of that era’s drum and bass (Goldie, Roni Size, et al.), alternative rock (Smashing Pumpkins, Red Hot Chili Peppers), and the eventual refinement of early-’90s rave and house (the type deftly reprised by Zomby on his original Where Were U in ’92; coincidentally, Zomby, now bankrolled by 4AD, is on the verge of releasing a grand double-album this summer, which is as good an artifact of the current era as RAM) into the self-aware album-sized units created by the likes of, well, Daft Punk.
So it isn’t really “technology” that has led to the current dichotomy, in which we have DIY artists with dayjobs, technically simple music (whether bedroom dubstep or, perhaps most tellingly, the indie rock of Grizzly Bear, whose financial travails are detailed here), and anonymizing distribution channels like Spotify, YouTube, and SoundCloud on one side and wealthy artists who can afford to really explore the vagaries of “the album” and genre on the other. Rather, it’s that newfangled “technology” called money. The former category has had to work ever-harder and produce more and more music with ever-less reward, while the latter has been able to bide its time and release grand artistic statements at intervals usually longer than two years. The elite, basically, now operate a model that used to be the default for everyone. The much-bemoaned death of the album is the product not of technological “progress,” but of economic disparity.
Daft Punk’s career is one long CliffsNotes version of the economic history of modern music. They’ve spent the last decade growing increasingly famous while “doing” basically nothing – prior to RAM, they’d released only one album, the widely panned Human After All (hereafter “HAA”), in the past decade, while dabbling in projects like the TRON: Legacy soundtrack. They were sampled by Kanye West and fetishized by LCD Soundsystem. Their fame accrued not via the release of material or frequent touring, but via their idolization by the music press and their fellow prominent artists. This tack recalls how America’s rich gain income via methods like carried interest, rather than the traditional, more optically pleasing means of income tied to work-hours and visible exertion.
So how should we understand RAM, eight years after HAA? There probably won’t be another album this year (other than perhaps the ever culturally-aware Vampire Weekend’s third album) that requires more backstory and context to digest. Basically, RAM is the next part of a conversation that began on HAA and maybe even partially on 2001’s strangely acclaimed Discovery (“strangely,” because opinion was so divided at first and only seemed to swell as the cultural hubbub around the band grew over the subsequent decade). HAA was described by its creators as “pure improvisation” – perhaps not the most intuitive terms, to listeners, in which to analyze an album marked by its almost robotic repetitiveness and overwhelming irony, borne out in songs entitled “Emotion” and “The Prime Time of Your Life.”
But HAA was aesthetically raw, with tape hiss on songs like “Make Love” and prominent cheap-sounding 1970s guitars on “Robot Rock,” tied together by its almost comical but seemingly authentic love of Black Sabbath’s “Iron Man.” It was a human effort, in terms of its ties to consciously “analog” sounds like guitars, uneven production/mastering, and occasional freakouts (the ending to “The Prime Time of Your Life”), but it used these relatively low-budget techniques and approaches in the service of making a statement about the trends toward homogenization and automation in contemporaneous big-budget music. They were right: the likes of Drake, David Guetta, and Calvin Harris all dominate heavy-rotation radio formats with many of the same homogenization techniques and nods to electronica predicted on HAA, and the latter two in particular have benefited from the EDM festival circuit that Daft Punk brought to life after HAA. Now, with RAM, they’ve flipped the script by using expensive, painstaking production (often requiring theatrical effects) to inject humanity (artificially, it could be argued) into an overblown record whose clearest genre roots are in the infamously anti-human/anti-authenticity disco genre.
Yes, disco. Anyone even mildly interested in RAM has likely already heard “Get Lucky” played to death, likely on Spotify, where it broke all sorts of records. Jaron Lanier has posted some thoughts on the anonymization made possible by Spotify and similar services: they have made it harder to discern the source or author of certain material, due to the decontextualization made possible by “unlimited” music that taps into a bottomless pit of material. And, with “Get Lucky,” I got that feeling, since it really has almost nothing to identify it as a “Daft Punk” track, other than some robot voices near the end. Otherwise, it’s all Nile Rodgers disco guitar-plucking and Pharrell Williams’ libidinous come-ons. It’s catchy, but it’s just another serviceable track to throw into your Spotify stream. The Verge wondered if Daft Punk could bring back the album with RAM, and I think the answer is “probably not” – not for lack of trying, but because the album has an internal identity crisis.
“Get Lucky” is the exception rather than the rule on an album marked mostly by long, mawkish nods to synthpop and disco. The endless “Giorgio by Moroder” is a monologue by the titular producer that ends in some gratuitous guitar noise, while “Game of Love” and “Within” have a studious sadness reminiscent of The Buggles, except more grating and boring. Pharrell’s other contribution, “Lose Yourself to Dance,” and the return of Discovery star Todd Edwards on “Fragments of Time” contain the most obvious nods to the band’s past, particularly Discovery, which, thanks to “Harder, Better, Faster, Stronger” and “One More Time,” remains perhaps their most culturally prominent album. If the theme of HAA was a self-contained band making improvisational music, then RAM is the opposite, filled with guest stars who seem to be on different pages and who, at the same time and paradoxically, seem to lose some of their distinctiveness as they all sink back toward a common blandness and homogeneity. The Strokes’ Julian Casablancas appears on “Instant Crush,” which, with its murky vocal filters and well-controlled Cars rhythm, is, well, basically a Strokes song. The orchestral “Touch” features Paul Williams but has a portentousness that doesn’t quite fit the album’s overall air-headed nature. It still grates, but in a different way from the rest of the tracks here.
The album seems best when it features only the core band. “Motherboard” is lushly reminiscent of P-Funk, and closer “Contact” deftly uses a sample of astronaut Eugene Cernan’s voice. Opener “Give Life Back to Music” is sprightly and energetic, fusing the roboticness of HAA with the pop of Discovery; its title also seems to sum up the album’s credo.
But the band really already “gave life back to music” in their previous three albums, which (sequentially, from Homework to Discovery to HAA) excavated 1990s house, 1970s/1980s synthpop, and loose-limbed rock. They challenged (if only naively) the notion that there was a coherent “past,” “present,” and “future” in music by pitting Black Sabbath riffs against ProTools or Barry Manilow samples against sequencers. It all rose above mashup or hybrid, too. The band’s NYT interview reveals a noble goal for RAM: to achieve a brand of craftsmanship that disappeared from the mainstream after the advent of digital music with the CD, in turn perhaps showing that the alleged never-ending wave of technological “progress” has done little to enhance the emotional value of music.
Sadly, I’m not sure that they succeed in this project, not only due to the album’s scatterbrained musical palette and array of guests, but due to weaker tracks like “Doin’ It Right,” featuring the characteristically annoying/acquired-taste shout-chant vocals of Panda Bear, or palate-cleansers like “Within” and “Beyond,” which seem to overstay their welcome. There are, to be sure, tons of nice details in this music, from the scratchiness of its guitars, to the wind instruments on “Motherboard,” to the glittering opening of “Give Life Back to Music,” which for me recalls the very 1970s pomp (Eagles, Fleetwood Mac) that the band have cited as inspiration. But the album is caught in an odd no-man’s land, having neither the coherence and flow (and economy – something that 74-minute RAM lacks) of a would-be pre-CD model like Rumours or Hotel California, nor the feeling of novelty (however superficial – it is the in-the-moment sensation of newness that matters here, I think) that electronica has been able to provide, via technically sophisticated methods, for nearly 20 years now.
So RAM is a defiantly anti-progressive work that tries to eschew the conventions of contemporary dance and electronica, but for the first time in Daft Punk’s career, it shows them breaking their usual agnosticism toward the flow of time (as described above): they too visibly give up the present to try to dredge up traces of disco, ’70s AOR, and even of artists (Casablancas, Pharrell) whose careers arguably peaked over a decade ago. RAM is a nice retroist record, but it could have been more.
Ultimately, I think we have come to expect too much of this band and its abilities. We want their music to be some grand commentary on humans and robots, on emotion and automation, but I hope that I’ve succeeded here in pointing out that the most notable aspect of Daft Punk is not their music, but their cultural status and the ways in which musicians, writers, and listeners try to inject into it their own confusions about life from the 1990s forward.