Silent Film and “the Internet”

In late 2013, we watched our first full-length silent film, “The Thief of Bagdad” from 1924, starring Douglas Fairbanks. Fast-moving, with an endlessly engaging score (a loop of Rimsky-Korsakov’s “Scheherazade”), it’s a good “break-in” film for anyone unfamiliar with the silent era. Fairbanks excelled at swashbuckling roles, and “The Thief of Bagdad” is one of the swashbucklingest movies ever made. He dances around with his scimitar and dives into the sea to fight off monsters, too.

Since that time, we have explored a few other silent-era films, including the corpus of Kentucky director and Hollywood godfather D.W. Griffith. I recently finished his “Intolerance,” from 1916, the follow-up to 1915’s blockbuster “The Birth of a Nation.” The latter rewrote the rules for feature-length films as essentially the first feature-length blockbuster, with a continuous narrative structure documenting the before, during, and after of the American Civil War. “Intolerance,” though less famous, may be Griffith’s best work.

[Image: A screen grab from “Intolerance.”]

Split
I have always liked the idea of split stories and parallel action; “Intolerance” offers little else across its epic three-plus-hour running time. There are four stories, each documenting a moment in history when intolerance of other belief systems or moral codes was the preamble to violence: an ancient Babylonian story about the attack on the city by Cyrus the Great, a Judean story about Jesus, a French story about the St. Bartholomew’s Day Massacre, and a modern American story about a mill strike and a group of, well, intolerable moralists.

The variety of “Intolerance” makes its epic running time go by swiftly. Griffith employs many different color-tinted prints, a melange of musical samples, and some strange interstitial techniques, like a woman rocking a baby in a cradle (representing the passage of time between the film’s chosen eras) and a background shot that includes what looks like the script/screenplay for “Intolerance” itself – how meta. Textual snippets are also given period-specific cards, such as a tablet for the Babylonian story.

Relevance
“Intolerance” is 99 years old this year, but perhaps because of its cosmopolitan subject matter, it seems less dated than “The Birth of a Nation,” which represented and embraced the retrograde racial attitudes of its period. Another thing that makes “Intolerance” seem so modern is its ambition. The budget ran well into the millions of USD – in 1916! The sets, such as the Babylonian city that Cyrus besieges, are sprawling and look great almost a century on – behind the color-tinted shots and film crackles, they now seem as old as the times they tried to depict.

Some of the film’s imagery and topics, especially in the Babylonian story, remain relevant for 21st-century viewers. The issue of whose god is mightier – Bel-Marduk or Ishtar – and the shots of people falling to their deaths while large siege towers topple have an uncomfortable symmetry with 9/11, for instance.

Part of what is so striking to me now, though, about “Intolerance” and silent films in general, is how “Internet”-like the entire experience is. There’s the variable pacing of moving from one card to the next and reading the text, just like one would do with a webpage (with the important and obvious difference of not being in control of the direction – although one could say that people addicted to Facebook or forum arguments are hardly free from inertia in this regard…). There is the card-by-card, shot-by-shot attention to design and layout (“Intolerance” even has footnotes for some of its textual snippets!) as well.

History
Earlier this year, I wrote about how “the Internet” is a term applied retroactively to a bunch of actually separate histories – networking, software, hardware, etc. – with the added current connotation of a medium through which its users receive information. It used to be called by different names – “cyberspace” is perhaps the best example of this class of outmoded labels, as it conceives of connectivity as a space rather than a medium – and, if one wants to get technical, the vague principles of “the Internet” go all the way back to the telegraph, which was a much bigger break with what came before it than, say, TCP/IP was with its predecessors.

Before watching “Intolerance,” I hadn’t thought of silent film as a part of “Internet history.” But the design tropes of silent film are, if anything, becoming more, not less, prevalent in media. Pushing cards or snippets of content – say, Snapchat Discover, Twitter’s “While You Were Away” feature, or the stream of matches on an app like Tinder – is an essential mechanism for many of today’s mobile services in particular. The integration of video by services like Meerkat (which lets one stream live video to her Twitter followers) only makes the lineage from silent film to “the Internet” more apparent.

In a way, “the Internet” hasn’t even caught up to the immersive experience of silent films, which often not only pushed discrete cards and pieces of narration at viewers (ironically, to support a continuous narrative) but also featured live orchestras in grand settings. Videoconferencing (FaceTime, Skype) and the likes of Snapchat and Meerkat strive for the same immediacy that Griffith et al captured in the 1910s.

One more intersection: For someone used to talking movies, watching a silent film can feel really lonely, because no one is talking. For me, this exact sort of silence and proneness to becoming lost in thought – for better or worse – is endemic to using “the Internet.” It’s strange, really, that in an extroverted society like the U.S., in which silence is barely tolerated in meetings and the like, so much mental energy is channeled into the inaudible actions of responding to emails or skimming BuzzFeed. I would much rather wordlessly watch “Intolerance” again.

Nouns, sounds, and the Internet of Things

Nouns and Greek texts
Looking back at elementary school, the earliest thing I remember learning was what a noun was. “A person, place, or thing” – that seems to cover all the bases. It’s the type of knowledge that quickly becomes second nature, only coming to mind in cases like interpreting a sentence that contains a gerund, an English noun that looks like a verb (e.g., “the happening is up ahead”).

Sixteen years after I learned what a noun was, I started reading Aristotle in Greek. Although Aristotle exerts tremendous influence on all of Western civilization – in every field from biology (which he started with his examinations of specimens brought to him by Alexander the Great) to theater criticism – I have never loved his ideas or stylistic flourishes as much as those of his teacher, Plato.

Some of his Greek texts seemed rough to me, requiring a lot of insertion of English words in the translation, whereas Plato’s writing was full of plays on words and syntactical arrangements that made it enjoyable in ways that English couldn’t reproduce. When translating, I felt like sometimes English was an upgrade for Aristotle, while it never was for Plato.

Nouns and sounds: Nounds?
I began reading Aristotle’s “On Interpretation” today, my first real brush with his work since 2007, when I was working with the “Nicomachean Ethics.” It won’t take me too long to finish, which is exciting after a recent diet of almost nothing but long philosophical tracts and novels.

Early on, Aristotle, like an elementary school teacher, sets the ground rules by defining what he means by a noun. He says:

“By a noun we mean a sound significant by convention, which has no reference to time, and of which no part is significant apart from the rest.”

I don’t have the Greek text with me (I’ll try to find an image of it later) but isn’t it strange that a noun is defined as a sound? Obviously, nouns are also written, soundlessly, on paper and word processors, but, as Aristotle notes, “written words are the symbols of spoken words.” It all comes back to speech.

Sounds and good and bad writing
This makes sense when you start to think about bad writing, more so than good writing. So much bad writing and so many bad ideas emerge because they have no predecessors in speech and would sound close to nonsense if spoken aloud. I’m thinking of all that business writing about “full-service solutions providers.” Jason Fried tore into it several years ago for Inc.:

“One of my favorite phrases in the business world is full-service solutions provider. A quick search on Google finds at least 47,000 companies using that one. That’s full-service generic. There’s more. Cost effective end-to-end solutions brings you about 95,000 results. Provider of value-added services nets you more than 600,000 matches. Exactly which services are sold as not adding value?”

All of these phrases sound horrible in conversation – even the people who write them wouldn’t utter them aloud in relaxed company. It’s like there’s nothing there; encountering the word “solutions” in text makes me instantly skip like 2 or 3 lines ahead to see if things get better. There may as well be no nouns on the page.

Inarticulate
Aristotle is helpful here, too, in a strange way:

“[N]othing is by nature a noun or name – it is only so when it becomes a symbol; inarticulate sounds, such as those which brutes produce, are significant, yet none of these constitutes a noun.”

It’s a weird image that comes to mind for me here, as I equate brutes raving inarticulately with business writers ranting about best-of-breed management structures in ghostwritten columns or ‘touching base’ in their emails. What counts as “inarticulate,” though? A liberal interpretation, I suspect, could capture so much that is bad and nebulous about writing, particularly writing about technology.

Some terms, like “the Internet,” have become so vast as to be meaningless without first trying to figure out what they’re not – what is the Internet not, when it comes to technology? As I noted a few posts ago, the term has come to bind together software, hardware, networks, and many other disparate technologies into a homogenous term.

If it’s not everything, then it’s trying to become so by incorporating every device possible, through the “Internet of Things.” Sensors, “analytics,” and, yep, value-added services all pile into conversations about this term: All I know is that trying to write about “the Internet of Things” makes me sound like an inarticulate brute.

“Space Quest 6” and the Internet as space vs. medium

One of the distinctive things about the Internet is that no one used to call it “the Internet.” Throughout much of the 1990s, the act of accessing an IP network (likely over dial-up) was referred to as “going online,” entering “cyberspace,” or encountering “the Net,” “Web,” or just “AOL.” Then there was my favorite: the Information Super Highway.

Networks and history
Computer-driven networks had been evolving for decades by the time that GeoCities et al made them directly accessible to consumers. There wasn’t a monolithic, unified network in development that whole time, though; “the Internet,” in all of its broad meaning, was a latecomer to the networking, software, and hardware party that had been going on since the first electronic computers appeared in the 1940s.

Speaking of the ’40s: I saw Ethernet inventor Bob Metcalfe speak at a conference in D.C. last year, and he half-joked that the Internet began in 1946 with ENIAC, the first general-purpose electronic computer, making it exactly as old as he was. This quip is instructive, since, on the one hand, it demonstrates how the numerous technological inventions that make today’s Internet possible go back many years, and, on the other, it shows how all of these developments have been retroactively wrapped up in the homogenous terminology of the “Internet” (so we have “the history of the Internet” or “pre-Internet” instead of “the history of networking technologies and capitalistic decision making” or “pre-chip”).

Card-processing networks and travel reservation networks, for instance, were among the disparate networks that emerged throughout the 1970s, as Evgeny Morozov noted in a recent interview. The discursive convergence on the term “Internet” didn’t happen until much later, and was never inevitable. Infrastructure control had to be handed off to the private sector and specific technologies and protocol stacks (like Ethernet and TCP/IP) had to win out over others.

Medium vs space
These days, the Internet is seen primarily as a medium. One might “use” the Internet in the same way she might use a phone line, magazine, or TV. It serves as a means of getting information; i.e., it has literally become “the media,” in a happy coincidence of terminology. But one doesn’t really occupy it; in the popular imagination, there is no longer a spatial quality to it, and talking about “cyberspace” feels anachronistic.

This wasn’t always the case. There was “cyberspace,” sure, but there were also “chat rooms” (another spatial reference) as well as weird artifacts like Apple eWorld that tried to represent connectivity as a traditional community – with buildings corresponding to different tasks – rather than one giant medium (“the Internet”). Even early browsers like Mosaic and Netscape Navigator had names that were spatial, representing a physical collection of objects and a guide through a cyber-landscape, respectively.

None of these modes of connectivity were strange in the 1990s. My favorite example in this mold was the Sierra On-Line adventure game “Space Quest 6: The Spinal Frontier” (hereafter “SQ6”), released, crucially, in 1995 – the year that Netscape really began to pick up steam and Windows 95 debuted. It was also, if I recall correctly, the first year I actually went online.

Space and “Space Quest 6”
SQ6, like its predecessors and most of Sierra On-Line’s games, was a point-and-click adventure, a genre that involves investigating a world, clicking on things, accumulating inventory, talking to people, and solving puzzles. Generally, the gameplay is slow-paced and intellectual. I grew up with these games in the early 90s, installing them from floppy disks, being stumped for hours on puzzles, and then having to order a hint book since GameFAQs didn’t exist yet.

Many of them also had manuals that contained crucial, proprietary hints to puzzles, as a means of copy protection. “Space Quest V: The Next Mutation,” for instance, had a tabloid that included important tips. With SQ6, there was a pack-in magazine called “Popular Janitronics” that you absolutely had to have to complete one of the game’s hardest tasks (creating a homing beacon).

Unlike, say, “King’s Quest VI: Heir Today, Gone Tomorrow” (also from Sierra), SQ6 wasn’t on the technological cutting edge, although I thought it was at the time, since it was the first “Space Quest” game to be built for CD-ROM distribution (to get a sense of how big a deal this was: KQ6 was initially available on 12 floppies and, after a year, on 1 CD). Its graphics were OK and its gameplay standard.

The game’s hero, Roger Wilco, goes to several exotic planets on his quest to save someone named Stellar Santiago. The most memorable sequence for me, though, is when he goes into cyberspace, which looks like this:

[Screenshots: Roger Wilco’s trip through cyberspace, including the information superhighway]

Software, networks, and hardware
These screenshots show how a lot of people in 1995 conceived of “the Internet” (which didn’t really have that label at the time; that noun with the definite article is found nowhere in the game’s dialogue or literature): vast spaces, dotted with highways that carried information and ran past virtual buildings that held online accounts and files. The file cabinet screen grab above is accessed through a menu that looks like a dead ringer for Windows 3.1, which is itself housed inside a trailer. That’s about as cyberspace-in-early-1995 as you can get, and not far off from eWorld, albeit in a Windows-centric universe (I played the game on a Windows 95 machine).

While all of this may seem outdated now, it really isn’t. For starters, Roger goes online not by using a phone or even a PC, but by donning a VR headset that doesn’t look much different from Facebook’s Oculus Rift. Everything old is new again; recycling sci-fi and fictional ideas is both a fascinating aspect and a potential weakness of the tech industry, which has a strange reliance on the entertainment industry for ideas at times when its own poverty of imagination shows through. Plus, the idea of the Internet as a space was never “wrong”; it just lost out to the homogenization that eventually grouped disparate histories in hardware, software, and networking infrastructure into one story, as Morozov pointed out:

“For most of the nineties, you still had a multiplicity of different visions, interpretations, anxieties and longings for this new world, and a lot of competing terms for it – virtual reality [note: weird how this one has survived and actually flourished in the discourse of wearable technology], hypertext, World Wide Web, Internet. At some point, the Internet as a medium overtook all of them and became the organizing metacategory, while the others dropped away. What would have changed if we had continued thinking about it as a space rather than as a medium? Questions like these are important. The Net isn’t a timeless, unproblematic category. I want to understand how it became an object of analysis that incorporates all these parallel histories: in hardware, software, state-supported infrastructures, privatization of infrastructures, and strips them of their political, economic and historical contexts to generate a typical origin story: there was an invention – Vint Cerf and DARPA – and it became this fascinating new force with a life of its own. Essentially, that’s our Internet discourse at present.”

He’s asking good questions, and I can’t wait for him to write more books on “the Internet” and its history. Since we’re talking about video games here, though, I might note, on the subject of “origin stories,” that this tendency toward a specific, linear history of “the Internet” – one that scrubs out various continuity errors or false starts – is a lot like something from a comic book or fictional universe, which makes sense. The tech industry at present has considerable overlap with geek culture, which has led it to elevate the Maker movement and the sort of artifact-obsessed outlook that loves clean origin stories rather than messy human dramas.

Wilco and conclusions
Roger Wilco never starred in another official SQ after 1995. Like the rest of Sierra’s adventure gaming franchises, which had thrived as PCs became mainstream in the 1980s and 1990s, it struggled to keep pace with new types of games that sported better graphics, more violence, and online gameplay. The solo, introverted experience of the point-and-click game was no match for attention spans with access to Unreal Tournament and, eventually, Facebook games.

With that transition in mind, it makes sense that SQ6 would see “the Internet” as a bunch of filing cabinets, or an “offline” version of Windows 3.1, for someone to dig through. The notion of the Internet as an actual medium for other people’s information, rather than a quiet library for each individual, implies a broad social connection that computers did not deliver in the mid-1990s or earlier.

It’s too bad, in a way. If the Internet were conceived of as a space today, think of the impact such a mindset might have on data collection and privacy – Wilco would have been overwhelmed had he stumbled across the “F” filing cabinet in that building, stuffed as it would be now with Facebook data. Or the “N” (NSA) or “U” (Uber) cabinets. Maybe it’s time to bring “cyberspace” back, if only as a semantic nod to there being real consequences for data collection and online screeds.

The Internet and The Physical World

[Image: A 2004 album reimagined for the iPhone 6 Plus lock screen]

Look out: Death From Above
Last year, Canadian band Death From Above 1979 (their name, if you’re curious, was created at the last minute so as to dodge legal action from DFA Records) released a record called “The Physical World.” It came 10 years after their only other record, 2004’s “You’re a Woman, I’m a Machine.” In the intervening years, I had attended college, moved from Providence to Chicago and gone through a slew of jobs en route to my current gig. The band didn’t know these facts, of course; the record sounds like it could have been recorded back during that same autumn as the debut, when George W. Bush was facing off against John Kerry in the U.S. presidential election.

In 2004, if I wanted to explore music, I would take the 30-minute walk from my dorm to the Newbury Comics in the city mall. File-sharing services like Ares were available for downloading MP3s for free, but I didn’t want to risk it on the university network. I saw a lone copy of “You’re a Woman, I’m a Machine” one day and picked it up, having really only heard the band’s name on Pitchfork; I hadn’t intended to buy it when I went down there, and was only nudged into doing so by seeing it at that moment.

By 2014, this mix of ritual – the walk downtown with iPod in tow – and impulsiveness seemed ancient. Finding “The Physical World” on the Internet, legally or otherwise, takes seconds. The only chance to “bump into” it, like one would in a record store, is now limited to seeing it in a YouTube sidebar or having it come on after many other similar-sounding songs on a socially curated Spotify playlist.

If nothing else, the Internet – if there is really any single, organic “Internet,” rather than just an amalgam of the globe-spanning properties of American companies like Google and Facebook, bankrolled by advertising dollars and venture capital, and threatening professional death from above for publishers and artists everywhere – has in such ways offered to replace many of our social experiences with what basically amount to simulations. Often, words like “easy,” “convenient,” and “at your fingertips” justify the change – don’t walk to the record store; here’s everything Death From Above 1979 have ever recorded, right at your fingertips!

“Social”: What came after 1979
But how social is the Internet? The question comes off as both tone-deaf (where have you been during the last 10+ years of social media?) and Ted Stevens-y (he once called the Internet a series of tubes, which was widely lampooned but accurate in a strange way). The social dimension of the Internet – its impact on conversations, sharing, etc. – seems undeniable.

I recently listened to the first episode of the podcast “Upvoted,” from reddit, the self-proclaimed front page of the Internet. The story was about a man named Dante, who had gone to prison for drug offenses, getting a much shorter sentence than he expected after the right-wing judge presiding over his proceedings was injured and replaced by a Clinton appointee. During his time in prison, he mastered drawing and sometimes sketched out what an iPhone looked like for prisoners who had been incarcerated so long that their last experience with a computer was Windows 95.

Near the end of the podcast, one of Dante’s friends talked about how justice was not meted out equally, not only across demographics but across Internet users. He asserted that kids who were less social and who didn’t have a lot of friends but instead hung out all day on the Internet were somehow at greater risk of punishment. I thought:

  • Isn’t the entire Internet “social”? Isn’t that what has driven so many startups to record-setting valuations and fueled Facebook’s ambition to connect every last person in the world to a website? Isn’t its difference from the physical world the notion that anyone and everyone is just a tap away, rather than cordoned off from communications or in a faraway place? And isn’t the presence of these so-called awkward kids on a website like reddit (of all places) just the digital version of an analog community (to use a stupid digital dualism crutch), and somewhat of a problem for labeling these people “not social”?
  • What if, though, that guy from the podcast was right, and whatever “social” experience the Internet was providing wasn’t ultimately an equivalent of, nor a replacement for, what had come before in terms of “social” – the in-person social activities, or even the private rituals like record buying? What if the Internet had just as much reinforced the positions of the naturally sociable (in much the same way that it has come to entrench huge corporations, the top 1 percent of music artists, and millionaires and billionaires more generally) as it had given introverts/shy nerds/whatever label you like more freedom? What if all of the Internet’s activities really were just simulations that couldn’t overcome issues like inequity in justice?
[Image: “Upvoted,” from that same lock screen]

The 1979 in Death From Above 1979’s name is the year before the Millennial generation is generally agreed to begin. People born from 1980 onward came of age alongside any number of Internet-reliant technologies. For me, born in 1986, it was the Web browser, which came into its own when I was about 10 years old, paving the way for social networks just a few years later.

The first social network I used was naturally MySpace, then Facebook in July 2004, not long before “You’re a Woman, I’m a Machine” came out. I guess I’m one of the earliest users of Facebook and have explored its features more than most (e.g., using Skype to see the entire News Feed, not just the EdgeRank-filtered results). All of this expertise and experience has done nothing to make me a “social” person in the physical world (“real life,” I guess, though I don’t like that phrase since it has so much baggage). My time on Facebook, in other words, hasn’t given me the social standing or prestige that I would need to avoid what that one podcast speaker had deemed the demographic disadvantage of shy, Internet-addicted kids.

Everything on Facebook isn’t really the physical me. I don’t make long speeches in person that are equivalent to my Facebook comments. I don’t leer at faces the way I stare at images. I don’t try to find out what news articles, lists, and videos someone at the restaurant I’m in is interested in. I don’t have anything resembling a “network” (in recruiter-speak) of actual, contactable people that maps to my list of Facebook “friends.”

The same mostly holds for reddit. Reading posts in the Bitcoin and Nintendo subreddits is a way to waste time rather than a reflection of what I really think about when I’m out walking or in bed. I would never make some of the comments I have made were the interlocutor standing in front of me (this is the tragedy of Internet comments, which are still good for something, though).

You’re a Man, I’m an Internet Social Network
For someone who is not naturally social or sociable, the Internet – in this case, social media sites and forums like the ones discussed here – can be dispiriting. It’s possible to make new friends or relationships on the Internet (I met my spouse this way, after all), but it’s also possible to have a good email exchange or emailed job application torpedoed once other forms of communication – a phone call or meet-up – enter the picture. The latter example deserves a post all of its own, but I’ll just say that Internet job postings paradoxically give everyone and no one a chance – volume is often so high that only candidates who have put in legwork in the physical world – met the right people, gone to the right seminars – stand out.

Likewise, having scores of LinkedIn contacts or Facebook friends doesn’t necessarily give one an advantage in physical-world situations in which cronyism, who-do-you-know, it’s-always-been-like-this, and you-can’t-sit-with-us still rule the day. And then there’s the way a friend’s Facebook photo at some famous monument makes us feel like we’re missing out (on physical activities and places, mostly), or the way some listicle insists we all need to be more “spontaneous” (i.e., insane), which of course would require a lot of activity beyond just being on the Internet all day – despite its often-cited deep “social” character.

It feels like the Internet is still a poor map of the physical world and many of the behaviors – secret meetings, hard labor, conversations that involve more than texts and “…” [this person is typing] balloons – that made it the way it was. This includes even “inefficient” processes like walking to some store to buy a Death From Above 1979 album (or, even further back, a copy of Windows 95!) – the time I spent doing that is now “saved” so that I can waste it straight away on BuzzFeed or on getting to the top of the Twitter stream. Moreover, by giving us, in most cases (not all), only simulations, the Internet can subtly weaken people who aren’t predisposed to being social, offering them the illusion that they can change (“disrupt” would be the cliché word choice here) things and get ahead, when they’d probably have a better chance of doing so by just taking a walk outside and buying whatever they wanted.

I can still listen to “You’re a Woman, I’m a Machine” anywhere I go, just as I can do with “The Physical World.” If I hadn’t had the longwinded physical world experience of the former, though, who knows if the band or album would be special to me at all a decade later, or if I would have taken the 30 minutes to write this…

What Ethan Zuckerman overlooked in his Atlantic piece about Web ads

Entitling an article “The Internet’s Original Sin” is pretentious, but I’m guessing it’s an Atlantic editor’s attempt at sounding weighty while driving traffic to the publication’s ads. The irony of reading Ethan Zuckerman’s post about the consequences of Web ads aside, the author makes a compelling case that the reliance of websites and social media on advertising has had unsavory side effects. The most notable is heightened surveillance, as Facebook, Google et al try to discover more about who uses their services so that they can better target their ads.

Web advertising has been a vital revenue stream for big businesses and small-time website owners alike for roughly 20 years. Yahoo, Google, and Facebook were all built atop ad-supported monetization that is frequently annoying and irrelevant. Even sites like this one run ads that readers likely have little use for. Ads, in addition to the money they bring in, are good reminders that, for all the incessant talk of “innovation,” many of the Web’s biggest players have a business model not all that different from 1950s broadcast television. People have sat through commercials for everything from Kool-Aid to Budweiser while watching TV, and now they endure sponsored content (i.e., highbrow infomercials) and sidebar ads for AT&T and Groupon.

Zuckerman proposes fees that would support Web properties while removing the baggage that comes with ads. There are plenty of examples of such an approach, including Pinboard (a fee that increases fractionally for each new user), Zoho’s various services (including its ad-free webmail), and Pocket (annual subscription). Of course, paying for things upfront is a very “analog” thing to do, seen as out-of-step with the freemium economics of “digital” media. Hearing at least one prominent voice speak out for the return of Paying For Things and be applauded as forward-looking for doing so speaks volumes about the highly political, neoliberal construct commonly referred to as “the Internet.”

When many individuals talk about “the Internet,” they aren’t talking about basic IP connectivity, nor are they talking about a medium in the same sense that one speaks of “television” or “radio,” both of which are treated basically as dumb conduits for content and programming. No, the Internet is a whole suite of ideas about Whig history and neoliberal economics, one that is almost always referred to positively as a non-human champion for progress. Even its flaws – surveillance, ads – are seen as the morally wrong actions of individuals trying to ruin an objectively good thing. It’s absurd to imagine talking about any other communications medium this way – no one is going to write about the original sin of TV or how radio is disrupting X or Y. Those media aren’t regarded as singular forces.

I have long wondered why this was the case. Was “the Internet” really unique? It’s essentially an extension of technologies dating back to the telegraph, and its impact on human welfare is less than that of humble inventions such as the washing machine. But I was overlooking the obvious answer: “the Internet” is an enormous revenue opportunity for the private sector, particularly Silicon Valley. This sentence from Zuckerman’s piece resonated:

“Most investors know your company won’t grow to have a billion users, as Facebook does. So you’ve got to prove that your ads will be worth more than Facebook’s.”

Nothing wrong with this sentence. It’s a great breakdown of the weird pressures currently shaping monetization on the Web. But did you notice something odd about this sentence, and about most of the article? It’s exclusively about private services stewarded by for-profit corporations. It’s almost as if the only organizations that exist are startups, and that issues with “the Internet” are moral rather than political.

It seems taboo to talk about the possibility of, say, a public and free equivalent of Facebook, Reddit, or Google. It’s cliché to refer to “the Internet” as the largest library ever, but it’s really not, at least not in its heavily politicized state in 2014. Libraries are generally run for the public good, or for the benefit of a smaller group of people (university students and professors) who have subsidized it in other settings and can utilize it as a space for thinking, without seeing ads everywhere or trading data for personalized book recommendations. In contrast, “the Internet” is a cash machine for the private sector. Likewise, “the Internet” isn’t akin to an essential utility like electricity or water for similar reasons, plus it’s used mostly for leisure (another indication of the level of value it contributes to society).

It seems short-sighted to propose an end to free/privatized services so that we can have paid/privatized services, as if these two business models were all there were to the Web. Since the Web is so often used to look up information and has even been declared a human right (absurdly, I think, but that’s another conversation), why not treat it like water or electricity or any of the other essentials that it is compared to when speaking of “the Internet”? Why not make it a public library? Right: because there’s too much money at stake, and so much political power rides upon treating “the Internet” as an all-powerful force best left to the private sector. In the West, we’ve been knee-deep in neoliberalism so long that it’s hard to realize that inquiry really could extend beyond how we pay for things and instead take up the questions of who benefits, and whether they should.