In late 2013, we watched our first full-length silent film, “The Thief of Bagdad” from 1924, starring Douglas Fairbanks. Fast-moving, with an endlessly engaging score (a loop of Rimsky-Korsakov’s “Scheherazade”), it’s a good “break-in” film for anyone unfamiliar with the silent era. Fairbanks excelled at swashbuckling roles, and “The Thief of Bagdad” is one of the swashbucklingest movies ever made. He dances around with his scimitar and dives into the sea to fight off monsters, too.
Since that time, we have explored a few other silent era films, including the corpus of Kentucky director and Hollywood godfather D.W. Griffith. I recently finished his “Intolerance,” from 1916, the follow-up to 1915’s blockbuster “The Birth of a Nation.” The latter rewrote the rules for feature-length films by being essentially the first feature-length film, with a continuous narrative structure documenting the before, during, and after of the American Civil War. “Intolerance,” though less famous, may be Griffith’s best work.
I have always liked the idea of split stories and parallel action; “Intolerance” provides nothing but, across its epic 3+ hour running time. There are four stories, each documenting a moment in history when intolerance of other belief systems or moral codes was the preamble to violence: an ancient Babylonian story about the attack on the city by Cyrus the Great, a Judean story about Jesus, a French story about the St. Bartholomew’s Day Massacre, and a modern American story about a mill strike and a group of, well, intolerable moralists.
The variety of “Intolerance” makes its epic running time go by swiftly. Griffith employs many different color prints, a melange of musical samples, and some strange interstitial techniques, like a woman rocking a baby in a cradle (representing the passage of time between the film’s chosen eras) and a background shot that includes what looks like the script/screenplay for “Intolerance” itself – how meta. Textual snippets are also given period-specific cards, such as a tablet for the Babylonian story.
“Intolerance” is 99 years old this year, but perhaps because of its cosmopolitan subject matter it seems less dated than “The Birth of a Nation,” which represented and embraced the retrograde racial attitudes of its period. Another thing that makes “Intolerance” seem so modern is its ambition. The budget ran well into the millions of USD – in 1916! The sets, such as the Babylonian city that Cyrus besieges, are sprawling and look great almost a century on – behind the color-tinted shots and film crackles, they now seem as old as the times they tried to depict.
Some of the film’s imagery and topics, especially in the Babylonian story, remain relevant for 21st century viewers. The issue of whose god is mightier – Bel-Marduk or Ishtar – and the shots of people falling to their deaths while great siege towers topple have uncomfortable symmetry with 9/11, for instance.
Part of what is so striking to me now, though, about “Intolerance” and silent films in general, is how “Internet”-like the entire experience is. There’s the variable pacing of moving from one card to the next and reading the text, just like one would do with a webpage (with the important and obvious difference of not being in control of the direction – although one could say that people addicted to Facebook or forum arguments are hardly free from inertia in this regard…). There is the card-by-card, shot-by-shot attention to design and layout (“Intolerance” even has footnotes for some of its textual snippets!) as well.
Earlier this year, I wrote about how “the Internet” is a term applied retroactively to a bunch of actually separate histories – networking, software, hardware, etc. – with the added current connotation as a medium through which its users receive information. It used to be called by different names – “cyberspace” is perhaps the best example of this class of outmoded labels, as it conceives of connectivity as a space rather than a medium – and, if one wants to get technical, the vague principles of “the Internet” go all the way back to the telegraph, which was a much bigger break with what came before it than, say, TCP/IP was with its predecessors.
Before watching “Intolerance,” I hadn’t thought of silent film as a part of “Internet history.” But the design tropes of silent film are if anything becoming more, not less, prevalent in media. Pushing cards or snippets of content – say, Snapchat Discover, Twitter’s “While You Were Away” feature, or the stream of matches on an app like Tinder – is an essential mechanism for many of today’s mobile services in particular. Integration of video with services like Meerkat (which lets one show live video to her Twitter followers) only makes the lineage from silent film to “the Internet” more apparent.
In a way, “the Internet” hasn’t even caught up to the immersive experience of silent films, which often not only pushed discrete cards and pieces of narration at viewers (ironically, to support a continuous narrative) but also featured live orchestras in grand settings. Videoconferencing (FaceTime, Skype) and the likes of Snapchat and Meerkat strive for that same immediacy that Griffith et al. captured in the 1910s.
One more intersection: For someone used to talking movies, watching a silent film can feel really lonely, because no one is talking. For me, this exact sort of silence and proneness to becoming lost in thought – for better or worse – is endemic to using “the Internet.” It’s strange, really, that in an extroverted society like the U.S., in which silence is barely tolerated in meetings etc., that so much mental energy is channeled into the inaudible actions of responding to emails or skimming BuzzFeed. I would much rather wordlessly watch “Intolerance” again.
Nouns and Greek texts
Looking back at elementary school, the earliest thing I remember learning was what a noun was. “A person, place, or thing” – that seems to cover all the bases. It’s the type of knowledge that quickly becomes second nature, only coming to mind in cases like interpreting a sentence that contains a gerund, which is an English noun that seems like a verb (e.g., “the happening is up ahead”).
Sixteen years after I learned what a noun was, I started reading Aristotle in Greek. Although Aristotle exerts tremendous influence on all of Western civilization – in every field from biology (which he started with his examinations of specimens brought to him by Alexander the Great) to theater criticism – I have never loved his ideas or stylistic flourishes as much as those of his teacher, Plato.
Some of his Greek texts seemed rough to me, requiring a lot of insertion of English words in the translation, whereas Plato’s writing was full of plays on words and syntactical arrangements that made it enjoyable in ways that English couldn’t reproduce. When translating, I felt like sometimes English was an upgrade for Aristotle, while it never was for Plato.
Nouns and sounds: Nounds?
I began reading Aristotle’s “On Interpretation” today, my first real brush with his work since 2007, when I was working with the “Nicomachean Ethics.” It won’t take me too long to finish, which is exciting after recently reading almost nothing but long philosophical tracts and novels.
Early on, Aristotle, like an elementary school teacher, sets the ground rules by defining what he means by a noun. He says:
“By a noun we mean a sound significant by convention, which has no reference to time, and of which no part is significant apart from the rest.”
I don’t have the Greek text with me (I’ll try to find an image of it later) but isn’t it strange that a noun is defined as a sound? Obviously, nouns are also written, soundlessly, on paper and word processors, but, as Aristotle notes, “written words are the symbols of spoken words.” It all comes back to speech.
Sounds and good and bad writing
This makes sense when you start to think about bad writing, more so than good writing. So much bad writing and so many bad ideas emerge because they have no predecessors in speech and would sound close to nonsense if spoken aloud. I’m thinking of all that business writing about “full-service solutions providers.” Jason Fried tore into it several years ago for Inc.:
“One of my favorite phrases in the business world is full-service solutions provider. A quick search on Google finds at least 47,000 companies using that one. That’s full-service generic. There’s more. Cost effective end-to-end solutions brings you about 95,000 results. Provider of value-added services nets you more than 600,000 matches. Exactly which services are sold as not adding value?”
All of these phrases sound horrible in conversation – even the people who write them wouldn’t utter them aloud in relaxed company. It’s like there’s nothing there; encountering the word “solutions” in text makes me instantly skip two or three lines ahead to see if things get better. There may as well be no nouns on the page.
Aristotle is helpful here, too, in a strange way:
“[N]othing is by nature a noun or name – it is only so when it becomes a symbol; inarticulate sounds, such as those which brutes produce, are significant, yet none of these constitutes a noun.”
It’s a weird image that comes to mind for me here, as I equate brutes raving inarticulately with business writers ranting about best-of-breed management structures in ghostwritten columns or ‘touching base’ in their emails. What counts as “inarticulate,” though? A liberal interpretation, I suspect, could capture so much that is bad and nebulous about writing, particularly writing about technology.
Some terms, like “the Internet,” have become so vast as to be meaningless without first trying to figure out what they’re not – what is the Internet not, when it comes to technology? As I noted a few posts ago, the term has come to bind software, hardware, networks, and many other disparate technologies into a homogeneous label.
If it’s not everything, then it’s trying to become so by incorporating every device possible, through the “Internet of Things.” Sensors, “analytics,” and, yep, value-added services all pile into conversations about this term: All I know is that trying to write about “the Internet of Things” makes me sound like an inarticulate brute.
Look out: Death From Above
Last year, Canadian band Death From Above 1979 (their name, if you’re curious, was created at the last minute so as to dodge legal action from DFA Records) released a record called “The Physical World.” It came 10 years after their only other record, 2004’s “You’re a Woman, I’m a Machine.” In the intervening years, I had attended college, moved from Providence to Chicago, and gone through a slew of jobs en route to my current gig. The band didn’t know these facts, of course; the record sounds like it could have been recorded during the same autumn as the debut, when George W. Bush was facing off against John Kerry in the U.S. presidential election.
In 2004, if I wanted to explore music, I would take the 30-minute walk from my dorm to the Newbury Comics in the city mall. Services like Ares were available for downloading MP3s for free, but I didn’t want to risk it on the university network. I saw a lone copy of “You’re a Woman, I’m a Machine” one day and picked it up; I had really only heard the band’s name on Pitchfork, hadn’t intended to buy anything when I went down there, and was nudged into doing so only by seeing it at that moment.
By 2014, this mix of ritual – the walk downtown with iPod in tow – and impulsiveness seemed ancient. Finding “The Physical World” on the Internet, legally or otherwise, takes seconds. The only chance to “bump into” it, like one would in a record store, is now limited to seeing it in a YouTube sidebar or having it come up after many other similar-sounding songs on a socially curated Spotify playlist.
If nothing else, the Internet – if there really is any single, organic “Internet,” rather than just an amalgam of the globe-spanning properties of American companies like Google and Facebook, bankrolled by advertising dollars and venture capital, and threatening professional death from above for publishers and artists everywhere – has in such ways offered to replace many of our social experiences with what basically amount to simulations. Often, words like “easy,” “convenient,” and “at your fingertips” justify the change – don’t walk to the record store, here’s everything Death From Above 1979 have ever recorded, right at your fingertips!
“Social”: What came after 1979
But how social is the Internet? The question comes off as both tone-deaf (where have you been during the last 10+ years of social media?) and Ted Stevens-y (he once called the Internet a series of tubes, which was widely lampooned but accurate in a strange way). The social dimension of the Internet – its impact on conversations, sharing, etc. – seems undeniable.
I recently listened to the first episode of the podcast “Upvoted,” from reddit, the self-proclaimed front page of the Internet. The story was about a man named Dante who had gone to prison for drug offenses, getting a much shorter sentence than he expected after a right-wing judge presiding over his proceedings was injured and replaced by a Clinton appointee. During his time in prison, he mastered drawing and sometimes sketched out what an iPhone looked like for prisoners who had been incarcerated so long that their last experience with a computer was via Windows 95.
Near the end of the podcast, one of Dante’s friends talked about how justice was not meted out equally, not only across demographics but across Internet users. He asserted that kids who were less social and who didn’t have a lot of friends but instead hung out all day on the Internet were somehow at greater risk of punishment. I thought:
- Isn’t the entire Internet “social”? Isn’t that what has driven so many startups to record-setting valuations and fueled the ambitions of Facebook to connect every last person in the world to a website? Isn’t its difference from the physical world the notion that anyone and everyone is just a tap away, rather than cordoned-off from communications or in a faraway place? Isn’t the presence of these so-called awkward kids on a website like reddit (of all places) just the digital version of an analog community (to use a stupid digital dualism crutch) and somewhat of a problem for labeling these people as “not social”?
- What if, though, that guy from the podcast was right, that whatever “social” experience the Internet was ultimately providing wasn’t ultimately an equivalent of, nor a replacement for, what had come before in terms of “social” – the in-person social activities, or even the private rituals like record buying? What if the Internet had just as much reinforced the positions of the naturally sociable (in much the same way that it has come to entrench huge corporations, the top 1 percent of music artists, and millionaires and billionaires more generally) as it had given introverts/shy nerds/whatever label you like more freedom? What if all of the Internet’s activities really were just simulations that couldn’t overcome issues like inequity in justice?
The 1979 in Death From Above 1979’s name is the year before the Millennial generation is generally agreed to begin. People born from 1980 onward came of age at the same time as any number of Internet-reliant technologies. For me, born in 1986, it was the Web browser, which came into its own when I was about 10 years old, paving the way for social networks just a few years later.
The first social network I used was naturally MySpace, then Facebook in July 2004, not long before “You’re a Woman, I’m a Machine” came out. I guess I’m one of Facebook’s earliest users, and I’ve explored its features more than most (e.g., using Skype to see the entire News Feed, not just the EdgeRank-filtered results). All of this expertise and experience has done nothing to make me a “social” person in the physical world (“real life,” I guess, though I don’t like that phrase since it has so much baggage). My time on Facebook, in other words, hasn’t given me the social high or prestige that I would need to avoid what that one podcast speaker had deemed the demographic disadvantage of shy, Internet-addicted kids.
None of what I do on Facebook is really the physical me. I don’t make long speeches in person that are equivalent to my Facebook comments. I don’t leer at faces the way I stare at images. I don’t try to find out what news articles, lists, and videos someone at the restaurant I’m in is interested in. I don’t have anything resembling a “network” (in recruiter-speak) of actual, contactable people that maps to my list of Facebook “friends.”
The same mostly holds for reddit. Reading posts in the Bitcoin and Nintendo subreddits is a way to waste time rather than a reflection of what I really think about when I’m out walking or in bed. I would never have made some of my comments had the interlocutor been standing in front of me (this is the tragedy of Internet comments, though they’re still good for something).
You’re a Man, I’m an Internet Social Network
For someone who is not naturally social or sociable, the Internet – in this case, social media sites and forums like the ones discussed here – can be dispiriting. It’s possible to make new friends or relationships on the Internet (I met my spouse this way, after all), but it’s also possible to have a good email exchange or emailed job application torpedoed once other forms of communication – a phone call or meet-up – enter the picture. The latter example deserves a post all of its own, but I’ll just say that Internet job postings paradoxically give everyone and no one a chance – volume is often so high that the candidates best differentiated are those who have put in more legwork in the physical world – met the right people, gone to the right seminars.
Likewise, having scores of LinkedIn contacts or Facebook friends doesn’t necessarily give one an advantage in physical world situations in which cronyism, who-do-you-know, it’s-always-been-like-this, and you-can’t-sit-with-us still rule the day. And then there’s the way in which a friend’s Facebook photo at some famous monument makes us feel like we’re missing out (on physical activities and places, mostly), or some listicle about how we all need to be more “spontaneous” (i.e., insane), which of course would require a lot of activity beyond just being on the Internet all day – despite its often-cited deep “social” character.
It feels like the Internet is still a poor map of the physical world and many of the behaviors – secret meetings, hard labor, conversations that involve more than texts and “…” [this person is typing] balloons – that made it the way it was. This includes even “inefficient” processes like walking to some store to buy a Death From Above 1979 album (or, even further back, a copy of Windows 95!) – the time I spent doing that is now “saved” so that I can just waste it straight away on BuzzFeed or getting to the top of the Twitter stream. Moreover, by only giving us, in most cases (not all), simulations, it really can subtly weaken people who aren’t predisposed to being social, by giving them the illusion that they can change (“disrupt” would be the cliché word choice here) things and get ahead, when they’d probably have a better chance of doing so by just taking a walk outside and buying whatever they wanted to.
I can still listen to “You’re a Woman, I’m a Machine” anywhere I go, just as I can do with “The Physical World.” If I hadn’t had the longwinded physical world experience of the former, though, who knows if the band or album would be special to me at all a decade later, or if I would have taken the 30 minutes to write this…
Entitling an article “The Internet’s Original Sin” is pretentious, but I’m guessing that it is an Atlantic editor’s attempt at sounding weighty while driving traffic on behalf of the publication’s ads. The irony of reading Ethan Zuckerman’s post about the consequences of Web ads aside, the author makes a compelling case that the reliance of websites and social media on advertising has had unsavory side effects. The most notable is heightened surveillance, as Facebook, Google et al. try to discover more about who uses their services so that they can better target their ads.
Web advertising has been a vital revenue stream for big businesses and small-time website owners alike for roughly 20 years. Yahoo, Google, and Facebook were all built atop ad-supported monetization that is frequently annoying and irrelevant. Even sites like this one run ads that readers likely have little use for. Ads, in addition to the money they bring in, are good reminders that for all the incessant talk of “innovation,” many of the Web’s biggest players have a business model not all that different from 1950s broadcast television. People have sat through commercials for everything from Kool-Aid to Budweiser while watching TV, and now they endure sponsored content (i.e., highbrow infomercials) and sidebar ads for AT&T and Groupon.
Zuckerman proposes fees that would support Web properties while removing the baggage that comes with ads. There are plenty of examples of such an approach, including Pinboard (a fee that increases fractionally for each new user), Zoho’s various services (including its ad-free webmail), and Pocket (annual subscription). Of course, paying for things upfront is a very “analog” thing to do, seen as out-of-step with the freemium economics of “digital” media. Hearing at least one prominent voice speak out for the return of Paying For Things and be applauded as forward-looking for doing so speaks volumes about the highly political, neoliberal construct commonly referred to as “the Internet.”
When many individuals talk about “the Internet,” they aren’t talking about basic IP connectivity, and moreover they’re not talking about a medium in the same sense that one speaks of “television” or “radio,” both of which are treated basically as dumb conduits for content and programming. No, the Internet is a whole suite of ideas about Whig history and neoliberal economics, one that is almost always referred to positively as a non-human champion for progress. Even its flaws – surveillance, ads – are seen as the morally wrong actions of individuals trying to ruin an objectively good thing. It’s absurd to think of talking about any other communications medium this way – no one is going to write about the original sin of TV or how radio is disrupting X or Y. Those media aren’t regarded as singular forces.
I have long wondered why this was the case. Was “the Internet” really unique? It’s essentially an extension of technologies dating back to the telegraph, and its impact on human welfare is less than that of humble inventions such as the washing machine. But I was overlooking the obvious answer: “the Internet” is an enormous revenue opportunity for the private sector, particularly Silicon Valley. This sentence from Zuckerman’s piece resonated:
“Most investors know your company won’t grow to have a billion users, as Facebook does. So you’ve got to prove that your ads will be worth more than Facebook’s.”
Nothing wrong with this sentence. It’s a great breakdown of the weird pressures currently shaping monetization on the Web. But did you notice something odd about this sentence, and about most of the article? It’s exclusively about private services stewarded by for-profit corporations. It’s almost as if the only organizations that exist are startups, and that issues with “the Internet” are moral rather than political.
It seems taboo to talk about the possibility of, say, a public and free equivalent of Facebook, Reddit, or Google. It’s cliché to refer to “the Internet” as the largest library ever, but it’s really not, at least not in its heavily politicized state in 2014. Libraries are generally run for the public good, or for the benefit of a smaller group of people (university students and professors) who have subsidized them in other settings and can utilize them as spaces for thinking, without seeing ads everywhere or trading data for personalized book recommendations. In contrast, “the Internet” is a cash machine for the private sector. Likewise, “the Internet” isn’t akin to an essential utility like electricity or water for similar reasons, plus it’s used mostly for leisure (another indication of the level of value it contributes to society).
It seems short-sighted to propose an end to free/privatized services so that we can have paid/privatized services, as if these two business models were all there were to the Web. Since the Web is so often used to look up information and is increasingly framed as a human right (absurdly, I think, but that’s another conversation), why not treat it like water or electricity or any of the other essentials that it is compared to when speaking of “the Internet”? Why not make it a public library? Right, because there’s too much money at stake, and so much political power rides upon treating “the Internet” as an all-powerful force best left to the private sector. In the West, we’ve been knee-deep in neoliberalism so long that it’s hard to realize that inquiry really could extend beyond how we pay for things and instead take up the questions of who benefits, and whether they should.