Many iconic games can be grasped in just a few seconds, yet can fascinate players for years, whether because of their novelty (Super Mario Bros., Ocarina of Time), their difficulty (Ghosts 'n Goblins, Castlevania), or their seemingly endless skill curve (Tetris, Dr. Mario). Flappy Bird hits those last two categories hard. Some redditors have scoffed at Flappy Bird's difficulty, referencing one of the very games I mentioned above as evidence of a truly hard, bygone era in gaming, but they're wrong – this is a tough game for the ages, in large part because it's imprecise. You never quite get a good feel for how high your bird is going to flap, and so you bump into a pipe lip and it's all over.
All games used to be hard – because of hardware
Before the advent of precise controllers – which really took off with the analog stick of the N64 – games were super hard not just because of how they were designed, but because the hardware was working against you. Picking up an NES/SNES controller now is quaint – the buttons are stiff, and I'm all the more impressed that games such as FF3/6 could pull off things like Sabin's Bum Rush requiring a 360° rotation (you HAD to hit those diagonal directions!).
But once controllers became great big dual-joysticked bear claws for Xbox 360-playing bros, games went soft. Unless the game was just sadistic, the precision of having tons of trigger buttons (for hairpin reactions to enemies) and analog sticks would let you just grind through until you finally cleared the area/completed the task. Elaborate save systems gave each game its own de facto save state/cheat mechanism (a la an emulator), but in a way, all these software changes were a result of fundamental hardware changes.
It’s odd then, that it’s taken this long for a mobile game to reprise the truly rage-inducing difficulty of the early home console era. After all, nothing could be more seemingly primitive than having no buttons at all – just a touchscreen. But rather than force you to do tons of difficult tasks with just your free hand (something akin to Ryu Hayabusa’s wall jumps), mobile games have been content to let you fling birds or clear away saccharine sweets.
Flappy Bird is a revelation in this respect. It makes you jump, so often in vain, to clear lots of pipe pairs. There's nothing to the control scheme other than tapping anywhere to jump and letting go to fall at a surprisingly rapid rate. And yet the control scheme, like the ones in those old NES/SNES games, is clearly struggling to keep up with what the game needs you to do.
The arcade effect
Flappy Bird is a lot like an arcade game, and not just because of its side-scrolling Gradius-like action and old school graphics. Arcade games were understandably hard as hell – how else could they get you to keep spending quarters? – and their legacy exacerbated the insane difficulty of early console titles such as Ninja Gaiden. Flappy Bird is like something from 1989.
The only thing that makes it seem like it came from 2014 instead is the presence of an ad network. It's a free game, but it has to make money somehow – mercifully from ads, then, and not from increasingly annoying in-app purchases.
There’s been a bit of debate about the effect of IAP on games recently, with some saying it’s destroying the industry and others quipping that arcades were the original IAP and kids these days don’t appreciate that. I think the latter article misses the point by focusing too much on economics rather than quality of gameplay (plus it trots out the old falsehood that Nintendo requires brick-and-mortar offices for indie developers).
Moreover, many arcade titles gave great value for only a small upfront investment, and their successors such as Flappy Bird let you skate by with only handing over your details to an ad network. Today’s IAP games will barely let you breathe without nagging you to buy more donuts, gems, or gold.
Fortunately, gaming is still a young industry, and with more consoles likely on the way from Amazon and Apple, business models are sure to change. I just hope it’s more like Flappy Bird – both in gameplay and economics – than Clash of Clans.
^ That’s a compliment, not an insult. The similarities between Nintendo and Apple seem overwhelming at first blush:
- They both develop tightly integrated hardware/software experiences. Apple's minimalist, Rams-inspired aesthetic is as unmistakable as Nintendo's dorky neoclassicism.
- They share conservative attitudes toward specs. The iPhone didn't have LTE until late 2012, and still has considerably less RAM than its Android rivals; the best-selling Wii was standard-def.
- They’ve both had to compete with Microsoft, with varying levels of success. Apple has basically defeated Microsoft in mobile; Nintendo won a surprising victory over the Xbox 360 in the seventh generation, but the Wii U’s prospects don’t look so good against the upcoming Xbox One.
Because of these superficial similarities, Nintendo attracts a lot of attention (most of it negative) from Apple-centric bloggers who are eager to suggest remedies for Nintendo's current struggles (also, many of these individuals are of an age that would have made them the prime audience for Nintendo's golden and silver ages with the NES and SNES, respectively). Perhaps they also see Nintendo's predicament as similar to Apple's dismal 1997, when it needed Office and a cash injection from its main rival just to stay afloat.
But there are a number of differences that make the Apple/Nintendo comparison faulty:
- Making one's own hardware is a given for the dedicated gaming industry's major players, and it alone does not make Nintendo special or different from its rivals. Starting with Atari, and continuing on to Sega, Nintendo, and Sony, if you made a gaming platform, you made your own hardware. Even Microsoft – a software company, at least during its heyday – had to delve into hardware as an entry fee to the console business. In this respect, the gaming world is a lot different from the consumer/enterprise software realm, in which software-first or software-mostly companies like Microsoft, Google, and Facebook can wield great influence without dabbling in hardware (though that is certainly changing).
- Accordingly, Nintendo is not a hardware company. It's a software company that makes hardware that makes its software better. Look at the N64 controller: examining its analog stick and trigger button, you just know that Nintendo's hardware team was future-proofing it for Ocarina of Time and Super Mario 64. In this respect, Nintendo is the opposite of Apple, which is a hardware company that makes software that enhances its hardware – iOS is much like a virtuosic exercise in preserving battery and maximizing touch technology.
- The suggested remedy for Nintendo – that they make iOS games – is appropriately the reverse of the remedy that Apple needed and got back in 1997, i.e., the porting of Microsoft Office to Mac OS. Since Nintendo is a software company at heart, it would seem to make sense that, if desperate, they take those assets to other platforms; by contrast, Apple is a hardware company, so dire straits fittingly translated into trying to attract more software to their own platform.
- If it’s not clear by now, you should realize that Nintendo is uninterested in making a platform. It makes toys and the workshop/play space in which those toys are used. That’s the total opposite of what Apple has done, especially with iOS.
John Gruber and John Siracusa recently had a great debate over Nintendo's future. Gruber argued that the lucrative DS line could be jeopardized by its basic requirement that users carry a dedicated handheld in addition to their phones – I can definitely see this happening. But Siracusa hit upon some subtle advantages that Nintendo may still have, especially in terms of gaming experience.
Discussions of Apple vs. Nintendo (or Nintendo vs. Nokia/RIM) often lead with anecdotal stories like "my kid doesn't know what Nintendo is," which I think are unhelpful. The tech literati are not really Nintendo's audience, and their children are probably a small subset of all Nintendo fans. The recently announced 2DS is not a device to be analyzed with the same eye as a new iPhone or Nexus device. Still, I'll contribute my own anecdote – I'm already fatigued by Android/iOS gaming. The limited input mechanism (touch) means that games cannot do as much with on-screen information or elements since fingers get in the way, and the freemium pricing of so many mobile games means that they often do not offer immersive experiences but rather play-by-ear arcade-like ones.
Sure, there was a time when people defended BlackBerry's hardware keyboard as a non-negotiable feature for plowing through "serious work" and email. But as Siracusa pointed out, hardware keyboards were superseded because software keyboards imitate their every last function while adding exclusive features like predictive typing. Touch screens cannot do that with gaming controls, if only because there's no QWERTY-like standard for controls: every controller may have buttons, but their arrangements and numbers are radically different from one system to the next. The fact that Nintendo has realized this has been a historic source of strength – it's hard to appreciate now, for example, how groundbreaking that N64 controller was in introducing analog sticks to the console world.
The variety of controller layouts is matched by the variety of software that they power. Games are, on the whole, a much more fragmented sector, in terms of design and input, than mobile apps. What are mobile devices used for? Staples like email and Web browsing, mostly similar social media clients with a standard set of gestures, passive content consumption. They don't need varied controls or inputs because their specialty tasks don't require them.
Now, imagine Nintendo trying to bring its quirky, unique sense of sophisticated hardware-specific software to iOS, a platform which takes for granted that no third-party app is more special than any other. Even with an iOS controller peripheral, I don't think it would work – not only would it de-incentivize customers from purchasing Nintendo's own hardware, but it would create a bad experience, topped off with the inevitable long string of 1-star App Store reviews bemoaning users' unawareness that they needed a separately sold item to play the $14.99 app they purchased.
Whether Nintendo can make its traditional approach work going forward is a separate question from whether porting software to iOS would be a good idea. For now, the company appears to be in sound financial shape, and even a minor rebound in Wii U sales would help buoy its already robust DS business. And mobile device sophistication need not be synonymous with consolidation – a breakthrough gaming device, like the original Wii was in 2006, could fit alongside the growing fleet of smart wristbands, heads-up displays, and smartwatches that co-exist peacefully with phones and tablets.
Notifications are the one thing that Android has always done better than iOS. Even Android 1.0 from 2008 had status bar notifications, a feature that the iPhone et al did not get until the addition of Notification Center in iOS 5 in late 2011, at which point Apple opted for the familiar pull-down gesture that was already widespread on the seas of Gingerbread and Froyo phones.
iOS 7 has a lot of promise in its revamped approach to notifications, but Jelly Bean raised the bar and has kept Android in the lead on this score at least. Expandable notifications gave a user a window into rich content and enabled an endless array of quick actions. Want to type out a quick text without opening the SMS app? Want to archive an email instantly? Want to view a list of items? There's an app notification for that.
In a way, I think that Google's insane focus on notifications was the first step toward bringing Android at least level with iOS in quality. The system notification UI – so neatly grouped in that pull-down menu – provided a common framework from which a user could interact with apps without having to actually enter the apps as much, hence mitigating annoyances like aesthetic gaps between iOS and Android versions or the shittiness of garbage-collected languages (read: Java) on mobile in the hands of devs who can't do manual memory management.
Here are twelve apps and two Chrome extensions that can up the notification game.
What it is: A top-shelf weather app.
Notification perks: 1) persistent, regularly updated temperature figure in the status bar; 2) Dashclock extension; 3) expandable weather notification with customizable icons and forecasts.
What it is: A handy sleep tracker app that catalogs your deep and light sleep percentages and also features an alarm clock.
Notification perks: 1) sleep tracker toggle in notification bar
What it is: A custom notifications app.
Notification perks: 1) expandable; 2) lists; 3) alerts; 4) photos
What it is: An app that can receive pushed images, files, and/or lists from its accompanying Chrome extension.
Notification perks: 1) Dashclock extension; 2) expandable notification for lists and image previews
What it is: a way to connect your Android notifications with your desktop instance of Chrome or Firefox.
Notification perks: shows all Android notifications in a popup in the lower-right in Chrome or Firefox. I love using this on Chrome OS with its extension.
What it is: A battery conservation and tracking tool.
Notification perks: 1) expandable notification with usage chart/time remaining estimate; 2) Dashclock extension; 3) Daydream; 4) lockscreen widget
What it is: An SMS client and a huge upgrade over stock (and it gets updated all the time).
Notification perks: gee, where to begin: 1) Dashclock extension; 2) multiple widgets; 3) persistent quick text notification in status bar; 4) expandable notifications with read/reply options for new messages; 5) scrollable widget that can be overlaid inside of any app.
What it is: A music streaming service. I assume you’ve heard of it.
Notification perks: 1) expandable notification with forward/backward/play/pause control and add to playlist button
What it is: A podcasting client.
Notification perks: 1) expandable notification with rewind/fast forward (not just forward/back) and play/pause controls
What it is: A lockscreen notification center, which I've written about here.
Notification perks: Out-of-the-box compatibility with Gmail, SMS, weather, Google Calendar, etc. Customizable with numerous extensions.
What it is: A way to bring the Moto X's distinctive (and somewhat intrusive) notifications to any Android phone.
Notification perks: 1) screen wakes with specific information about each notification’s content.
Vine has been available for Android for a couple of weeks, and my verdict is that it just does not provide a good experience at this time. Sadly, Vine's shortcomings are not only indicative of the age-old, ongoing quality gap between apps with versions on both iOS and Android – they help explain it, too. Its simultaneous design failures and massive popularity are a good microcosm of Android itself and its characteristics. To wit, Vine for Android:
- has no limit on caching and as such can occupy hundreds of MB of on-device storage
- doesn’t have a push notification system: it notifies you via rich Jelly Bean notification that your video is being uploaded (good), but is mum if someone likes or comments on your post (bad).
- is full of spam and fakes (I guess this is to be expected; even Instagram is overrun by follower-mills and spammers now)
- doesn't yet support the front-facing camera or tags.
- feels gummy and unresponsive when navigating to some users’ profiles, to the extent that it won’t even show their posts sometimes.
Many of these issues, like front-facing camera support, are likely to be addressed in updates. However, the overall sloppiness of the design makes Vine's arrival on Android a Pyrrhic victory of sorts. Yes, we got a hot app, but its developers treat us as if we don't respect quality or good design. They treat Android users this way because, for now, a unified, huge, design-conscious Android audience sadly doesn't really exist.
The best Android apps, other than the ones Google makes, are often either exclusive to the platform, like Falcon Pro, Eye in Sky, or Friday, or they exploit something unique about Android, like UCCW, Dashclock, or other widgets, or they capitalize upon some odd platform disparity between iOS and Android, like Pocket Casts, which takes advantage of less competition on Android and lack of a Google-made podcasting client. Whether they achieved success via exclusivity, astute platform exploitation, luck, or all of the above, Android’s best apps (a category that includes all of the apps listed above, sans Vine) are often targeted at such a niche audience that they aren’t so much “Android apps” as “Nexus/power-user apps.” They often require at least ICS or even Jelly Bean to even run, but more importantly, they require a user who cares about Android and who didn’t just pick up her/his device because AT&T said so or because it was so cheap.
Accordingly, it almost doesn't make sense to talk about "Android" as a monolithic platform. Many Android users are on an older OS version or don't even know that they're running Android: their phone is just a phone that can do email and Facebook and maybe a few other things. Android's fragmentation certainly exists, but it's fragmentation of intent more so than fragmentation of OS version, the latter of which I think is just a product of the former, since not enough users care enough (or need) to seek the latest version of Android. Android isn't "good" yet (if by "good" we mean "characterized by predominantly active, non-incidental, Android-first users") because of this disparity.
A year and a half ago, someone told me that Android was “the new Mac,” that is, that it was a trendy alternative to iOS, which had become so widespread that it could be regarded as the OS for normals. This struck me as an odd statement at the time: how could Android, with its huge user numbers, possibly be compared to the Mac back when it struggled to keep up with the PC? Isn’t Android the PC equivalent in the smartphone wars, the equivalent of a commoditized beige box? Well, no, depending on what specific “Android” demographic you’re talking about, and she did seem to be talking about the niche Nexus user demographic.
First of all, the best Android hardware and the latest Android software both have an elegance and sophistication – likely driven by Google’s own design chops – that Windows has never had. But more to the point: the number of users who actually know that they are “Android users” and not “Droid users” (i.e., users who only have a superficial connection to the brand via Verizon’s massively successful 2009 campaign) or “Samsung users” or “phone-that-emails-and-Facebooks users,” is almost certainly small. There have been roughly 3 million Nexus 4s sold all-time, next to nothing compared to even the Galaxy S4’s haul for May alone: and that’s considered a blockbuster by “stock Android” standards!
Nexus users like me comprise a hugely active and outspoken (especially on Google+) part of what the world sees as the "Android community." We are just the tip of the iceberg, and interpreting our power-user, anti-Apple, customization-crazy intents as the modus operandi for the hundreds of millions of incidental and accidental Android users is misguided. Like the unseen part of an iceberg, those users elevate the power-users to greater visibility, since the media cares about Android seemingly because: 1) it's not iOS; 2) it's popular. Those users are perhaps like 1990s PC users, but the ones on the tip, the Nexus types, are perhaps more like Mac users: outnumbered (by their very different "Android" brethren and, if one grants this differentiation of populations within "Android," then by iOS users, too) and outspoken.
So the Nexus users will complain about Vine’s shortcomings, while everyone else on Android – the incidental customers or users on older versions – won’t care and will download and use it anyway. The latter group is the reason why Vine for Android even exists (you don’t see Vine for Windows Phone, do you?) but also the reason why its design isn’t on par with the iOS design. “Android” doesn’t have just one addressable demographic, since its different user groups may as well be using (and being conscious of) different platforms altogether, and because of this, we get the only-on-Android odd scenario of a massively popular app that, given the chance to do so much, does only the bare minimum and gets away with it, despite protests from the minority.
In the city I live in, Chicago, the owners of the historic Congress Theater came to an agreement with the city banning EDM from the venue. All acts that play there must now use “traditional instruments” during their shows.
Like genre skeptics of the past who have questioned the value of unfamiliar music and derided its creators as inauthentic charlatans, Chicago's powers that be have provided an opportunity to think about authenticity in music. Why do critics resort to strong language about reality itself – "real," "true," "only" – when discussing low-stakes topics such as whether Deadmau5 is a working-class DJ or whether a heavy metal band is allowed to use synthesizers?
It's like the 2000 U.S. presidential election all over again – are musicians persons with whom listeners would enjoy having a beer, and yet, at the same time, do these celebrities exude sufficient seriousness to be accepted into The Canon (if such a thing even exists in EDM; it's sort of a rockist construct)? Since music criticism is so indeterminate, the only methodology for vetting ascendant musical acts is to scour their music for tell-tale signs of a laborious creative process (hence, "traditional instruments") or relation to a specific social class (Born in the U.S.A. and Parklife are good examples from the rock album annals).
This critical approach toward everything from jazz to EDM has nudged artists to prove their worth – and their down-home (read: white and probably rural) temperaments. Even synth-pop bands have proclaimed that they won't succumb to the infinite DIY possibilities afforded by iOS music apps and will instead soldier on with real synthesizers. Likewise, the unexplainable influence of Mumford & Sons made folksiness an important litmus test even for Group Therapy-grade acts for a while there. Above & Beyond themselves did acoustic shows last year and released an acoustic artist album this year.
Genres and Society
Genres aren’t static, but their paths are carved not only by shifts in consumer style and taste, but also by social and demographic change. Jazz was incubated during the urbanized, prosperous 1920s in America, while rock and roll became the logical musical extension of 1950s urban sprawl, as the sound of America’s white population expropriating and exporting blues and jazz, which had previously been the specialties only of the country’s extreme rural and urban poles, to the suburbs.
Just as societal change can easily prompt retreat to defensive terms such as "real" and "traditional" to bemoan the loss of an ideal that may have never existed, musical evolution brings out from the woodwork the authenticity scolds who decry new stars for, at best, violating good taste and, at worst, endangering everyone's sanity and livelihoods. The Atlantic had an excellent piece on the rise of EDM (electronic dance music) as the new rock n' roll, and in doing so, it nicely summarized the dark critical history of new genres being born (emphasis mine):
"The most obvious point of comparison…is how this new movement has been received by the majority of people who consider themselves possessed of good taste. In the 1920s, jazz was preached against from pulpits and editorial pages as the devil's music, its crazy rhythms jangling the nerves, speeding the degeneracy of American civilization, and responsible in part for the ongoing failure of the temperance movement. In the 1950s, rock and roll was sneered at as jungle music, provoking lascivious displays unfit for the Ed Sullivan Show as well as responsible for juvenile delinquency and reefer madness. In the 1980s and '90s, rap music was censured as violent thuggery, non-music…[B]ut most of the current non-parental criticisms of EDM are made in purely aesthetic or culturally derogatory terms: Dismissive, class-based coinages…are employed to wall off "real" electronic music as the preserve of the specialists."
Perhaps one should pause to note the surreality of wide-bore, public discussions of "realness" within electronica, since electronica itself was once pilloried, or at least dismissed, by artists and critics alike as something too mechanical, fake, and European to be acceptable. Up until the release of their blockbuster The Game (1980), Queen emblazoned each of their 1970s LPs with a disclaimer that no synthesizers had been used on the record. The White Stripes reprised this school of thought in the liner notes to Elephant (2003), which shouted, to no one in particular, that no "computers" had been used to make the record.
Computerized and Real Music
"Computer" really is the key term here, more so even than "synthesizer" or any more specific descriptor. Early electronica, especially the West German variety of Kraftwerk and Klaus Schulze and the American creations of Silver Apples and Cromagnon, announced itself by its reliance on obviously strange – non-"traditional," certainly – instrumentation that gave proceedings a computerized, alien sound, whether synths were in play or not. Sometimes the entire arrangement, rather than the individual sounds of a synth, made all the difference in distinguishing a song or album from pre-electronic music. For example, on Autobahn (1974), Kraftwerk juxtaposed traditional violins and guitars with sampled car sounds and synths to demonstrate the possibilities inherent in new instruments and methodologies. Just a year later, however, Kraftwerk had gone completely computerized on Radio-Activity (1975), and then issued an entire concept album that ruminated on the computer's use cases in government, mathematics, and music itself on Computer World (1981), right on the eve of the widespread adoption of digital recording and playback technology that attended the CD format's birth in 1982.
From The Man-Machine (1978) onward, Kraftwerk also adopted the mannerisms of robots, seemingly forced into their new mechanized existence by the growing centrality of computerized and automated processes in music creation. What had begun as the usage of a simple synthesizer had progressed into the usage of loops, drum machines, and more sophisticated recording techniques. It became hard to know where the human input (initially assumed to be composition and performance) ended and computer input (likewise assumed to be a means of enhancement and refinement) began. It was no coincidence that Kraftwerk waited until 2008 to issue a definitive remaster of their entire catalogue, as Ralf Hütter in particular became obsessed with getting the sound just right in light of newly available digital editing and production tools.
More so than any other outfit, Kraftwerk embodied how the issue of realness affects musical pioneers. Their posturing as robots was an ironic take on the conundrum that electronic musicians face in the face of both authenticity-obsessed critics and the persistent, decades-long dominance of rock and roll and indie rock within the music press. The fixation of publications such as Rolling Stone with lists of the greatest singers and guitarists, along with the enormous critical reputation afforded to indie musicians, keeps alive the question of how much realness factors into aesthetic evaluation. It appears that process in particular – the steps by which the music was created, and how discernible said process is to the listener – is a prime determinant of realness. When in doubt, we can consult Urban Dictionary (bolded emphasis mine) on this issue:
“real music includes anything that goes through what is called a pure process towards becoming music that sounds nice and does not bore the listner [sic] involves singing and not rapping. Usually involves: guitar, bass, drum.”
Via sarcasm, Urban Dictionary summarizes 60 years of rock criticism. It excavates the fading cultural currency of rock music by pinging its most basic and obvious traits – the guitar-bass-drums trio setup – and invests them with the unique power to produce “real” music, a label that early 1950s critics might have reserved exclusively for less guitar-based music, like jazz.
Books, EDM and Realness
Similar struggles for a definition of "the real" exist in other cultural fields, such as in the case of Jonathan Franzen complaining that ebooks don't have the same permanence as the written word. There one finds characteristic appeals to soft classism ("real readers") and authenticity ("literature-crazed"). This broad struggle over realness in culture extends to EDM, which is currently the most prominent form of electronic music, and accordingly it is fertile ground for producers in heavy-rotation pop and hip-hop who are seeking to cross-pollinate their tracks with club flair. This piece, however, focuses more on how the authenticity debate affects EDM disc jockeys (DJs), who are the main EDM performers and composers. The DJ abbreviation itself is accidentally telling: it has nearly truncated the musicians' ties to real physical discs and become a word in its own right, even if many DJs do go on using real discs (usually vinyl LPs) and their corresponding playback equipment, rather than a completely digital setup.
EDM is a conveniently broad umbrella under which to shelter the diverse genres of house, trance, techno, acid, dubstep, and what used to be dismissively called IDM (intelligent dance music). House music arose in late-1980s Chicago, while trance was at least initially a much more European phenomenon, coming to the fore in the early 1990s with The Age of Love's titular masterpiece. The late 1980s and early 1990s were a time of rapid transition in how music was recorded. Although the editing software Pro Tools had not yet become mainstream, the music-making process was becoming increasingly automated, with hip-hop as the most brazen exponent of music that could float across a sea of carefully curated samples. Whether the samples were the hyper-specific record collection allusions of the Beastie Boys' Paul's Boutique (1989), or instead the vaguer synth-bass-drums issuances of house, making an album became as much about one's abilities to curate an aural collage – and make as apparent as possible one's diverse yet classical tastes – as about one's abilities to perform with the human verve and virtuosity associated with jazz, classical, and rock; the idea of a "solo" doesn't really exist in EDM.
Accordingly, the aesthetic critic would not be raising the critical stakes by criticizing the pitch of a house diva or other EDM vocalist, or by bemoaning the technical repetitiveness of a jam. The latter term is imprecise, but it may suffice if only to construe EDM as a hipper, more urban update on the rock jam, that is, a long-form construction (most EDM albums would qualify as “double albums” in the rock sense) that evolves in often subtle ways and which aims to capture, comment on, and finally re-imagine a highly specific setting, whether Ibiza or the Renaissance UK club. Terre Thaemlitz has stated that house music is “hyperspecific” and meant to convey a particular kind of post-1980s angst. Since EDM in this classical sense is super-local, like politics, then the onus for accurate reproduction and commentary falls on the DJ, whose mixing skills are arguably of no use if he doesn’t have an authentic relation with a particular location and audience. Being a DJ is really like being a politician or a real estate agent.
DJs: Just like Politicians
Like politicians, DJs have come under increasing pressure in the last decade to present themselves as authentic, “real” persons who talk, tweet, and perform just like their fans. The Verge once commented on the celebrity of the Canadian DJ Deadmau5 (who is at the center of the current storm about DJ authenticity; emphasis mine):
“As a human, Joel Zimmerman epitomizes the “celebs: they’re just like us!” ethos. Fans are treated to rambling, very-unedited, “lol” and emoticon-laced posts on Facebook and Twitter. His face is an angular vessel of pure emotion, nearly always dominated by an ear-to-ear grin that communicates just as much as the words that come out of it, another testament to context bringing more to the table than words. His body, a lanky vessel clad in the t-shirts, baggy pants, and ballcaps of the masses, is covered in nerdy tattoos (Space Invader, Zelda hearts, Cthulhu, Mario “Boo” ghost); he needn’t do more than walk into a room to tell you what his deal is. But when he transforms into deadmau5, his presentation is stripped of nearly all words.”
So Deadmau5 is someone to whom his fans can relate. The Verge even goes on to characterize him as a latter-day arena rocker, one who has replaced guitar pyrotechnics and animalistic rock star rituals with blinking lights and repetition. Even in a non-critical assessment of Deadmau5, the issue is framed within the context of rock music.
In light of these portrayals of Deadmau5’s performative style, it becomes easy to see him as the hipster or unusually tech-savvy guy DJing a fraternity party or rave. While he certainly imports the obtuse cinematic sweep and costuming of Daft Punk, as part of a tradition harking back to Kraftwerk’s own aforementioned transformation, his wordlessly curated sets nevertheless have an earthy, populist air that nicely coincides with the DIY stylings of his album titles. The populism – the carefully crafted facade of “realness” – succeeds in part because of how Deadmau5 obscures his source material, although it is worth noting that his protégé, Skrillex, courts the authenticity wonks by appealing to older, mostly critically unassailable genres like reggae, in the same way that drum n’ bass once leaned critically on jazz and ragga. The New York Times described his technique as reductionist – many of the familiar parts of dance music (can we call it “classic dance” or “traditional dance” now?) are stripped away to highlight a few flashy traits, sort of like a guitar solo cutting through the blues and jazz changes of early rock but never completely obscuring the reputable source material.
Deadmau5 makes EDM that is agnostic of any particular demographic, a strategy that would seem to run into trouble if the previous argument about house’s hyperspecific contextualism is accurate. But the opportunity to predictably decry Deadmau5 as “not a real” DJ did not fully present itself until he said that most DJs show up to their concerts and, amid the booming noises and lights, simply press play. He likened EDM (by name) to a “cruise ship” meant to convey atmosphere for fans and celebrity bandwagoners alike – a remark that, while partially astute in its probing of the genre’s roots in partylike locales like smoky clubs and laser-emblazoned dance floors, was nevertheless surprisingly brutal, even savage, in its assessment of an increasingly intellectualized, gentrified genre and its auteurs. The backlash was swift, with David Guetta in particular hitting back at Deadmau5, while other parts of the DJ community took the opportunity to point out that the instruments and live processes available simply were not up to snuff for recreating the complex introverted processes of in-studio EDM production.
Automation and Labor
To the latter point, the invention of newer, more efficient instruments has allowed entire genres to develop, mature, and be performed throughout history. The piano’s improvement upon the harpsichord is a particularly significant case study. Perhaps EDM’s DJs have indeed not yet succeeded in discovering easily reproduced ways to create studio-quality live performances. But even if they had, would it have changed the tribalism and infighting over “realness” in EDM? There were plenty of criticisms of Deadmau5 that cited “hardworking” ordinary DJs (not unlike a political ad, really) who, unlike Deadmau5, specialized in live improvisation, singing, or other real, true-to-life processes that demonstrate a tangible, almost bodily link between the performer and the music being performed. This is one of the more strident examples of one subgroup’s idea of “process” dictating for everyone what does and doesn’t count as “real,” and unsurprisingly, Deadmau5 himself has characterized studio recordings as “what counts.”
In EDM, musicians may well have reached a level of automation and in-studio complexity that is difficult to reproduce live, but this conundrum is a distraction, a too-convenient frame in which to confine the more nebulous issue of how “realness” is redefined and achieved by different classes. EDM today invites a strange comparison to rock music in 1966–67, when The Beatles retired from touring altogether to focus on studio experimentation that would have been both laborious to reproduce live and unpalatable. This tack led to works now regarded as classics, like Sgt. Pepper’s Lonely Hearts Club Band, but it is equally notable for how it shirked populism and visible, transparent process (like the live playing of instruments on stage) in favor of opaque in-studio control.
Contemporary DJing, and EDM at large, remains strongly invested in placating crowds and creating atmosphere in that pre-Sgt. Pepper way, but it achieves this populism via automation rather than human labor, hence the aforementioned “just press play” sets. To appreciate the different tacks that rock and EDM have taken, simply recall The Verge’s comparison of Deadmau5 to arena rockers. In the 1970s, the prominent arena rockers Electric Light Orchestra, known for the complexity of their studio works, were beset by accusations of lip-syncing and of using prerecorded tracks. Did this faux pas make ELO any less “real” than synthesizer disavowers like Queen?
The Verge characterizes Deadmau5 as someone who is ordinary and just like his fans, a portrait at odds with his metapersonality as a purveyor of prerecorded tracks. In a dance club full of physically active persons, Deadmau5 may be the least active, as he simply goes through the motions while the music plays. But isn’t that precisely what everyone else is doing, both in the club and out of it? Doesn’t the use of common, commoditized items like the laptop, coupled with Deadmau5’s freedom to dance (like anyone else) while his prerecorded set streams over the speakers, make him just another one of his fans? One may struggle to determine whether his routine is “real,” or even which school of “realness” it would validate if it were, but struggling with the “realness” debate is not an end in itself. Rather, it is usually the sign of a genre that still requires additional norms from musicians, critics, and listeners alike before its critical profile can be enhanced, its sound refined, and its “realness” no longer questioned in light of the ensuing maturity.