One afternoon in 2005, I went to eat lunch with a friend at one of my college’s cafeterias. That dining hall normally played music over its speaker system, but that day I couldn’t hear anything, though here and there I thought I caught a faint sound. My friend quipped, “They’re playing ‘Laughing Stock.’”
Now, of course they weren’t playing the seminal 1991 album by British rock group Talk Talk. The joke worked, though, because “Laughing Stock” is a famously amorphous album, with loose song structures that alternate between near-silence and raucous blues-influenced jamming. It opens with a solid 15 seconds of guitar amp feedback, meaning that basically any electronic hum, be it from a speaker system or some other source, could conceivably be mistaken, for a moment, for the intro to “Myrrhman.”
Here’s a representative selection from the album:
I’m thinking about “Laughing Stock” because Talk Talk’s lead singer, guitarist, and multi-instrumentalist, Mark Hollis, recently passed away. He was only 64.
If you’re not familiar with Talk Talk, they had a remarkable career arc that saw them go from New Wave popsters to one of the pioneers of a genre now known as post-rock, though it didn’t have that name back in the group’s late 80s/early 90s peak.
When I was in my late teens/early 20s, Talk Talk’s progression captivated me. I was at the time obsessed with “tortured genius” types and perfectionists, perhaps because I was struggling so mightily with my coursework and felt overwhelmed. I found strange solace in artists who had obviously labored with their art – people like the French novelist Gustave Flaubert, and definitely Hollis et al. in Talk Talk.
Retrospectives on “Laughing Stock” and its predecessor, “Spirit of Eden,” often discuss the process of making them as much as the actual music they contain. The band meticulously created mystical environments in the studio space, in part to recapture what they thought were the magic conditions under which late-1960s albums like Traffic’s “Mr. Fantasy” were produced.
“Laughing Stock” the album was heavily edited down from its session recordings, with tons of discarded material. Even though it was meant to have a “live” feel, like a jazz ensemble playing together, it is in reality the exact opposite, the product of endless post-production tweaking. It was impossible to perform on tour, so the band didn’t try.
The perceived difficulty of “Laughing Stock” is key to its legend, but I find it very listenable. Take for example “After the Flood,” which I remember spinning on a turntable during my last days in my apartment on Pulaski Road in Chicago, where I had stayed for two years through long stretches of unemployment and part-time work, and which, as I listened to that song, I was finally on the verge of moving out of to start my first full-time job.
There’s a simplicity and a space to it that makes it so listenable.
I still own that vinyl, plus a CD copy I picked up in the early 2000s at the now-defunct Louisville record store Ear X-Tacy. The album cover, with endangered birds forming the shapes of the continents, is iconic.
Years before they produced “Laughing Stock” and “Spirit of Eden,” Talk Talk had a similar sound to Duran Duran. They scored a decent hit with “It’s My Life,” which No Doubt covered in 2002. Their early work holds up well, I think; their story isn’t one of going from the “low” art of their synth-pop singles to the “high” art of post-rock, since I’m not sure such distinctions really matter. They created consistently enjoyable work and it’s too bad that we never heard much from them after 1991 other than Hollis’ self-titled 1998 album, which was so meticulously recorded you can hear him moving in his chair on one song.
RIP Mark Hollis
The recent death of George H.W. Bush made me reflect on the now-oldest living president, Jimmy Carter. The 39th president is not held in high regard by almost anyone; the GOP remembers him as a weakling who ranted about “malaise” en route to getting routed by Ronald Reagan in 1980, while today’s Democrats see him as either a weirdly conservative holdover of the pre-LBJ Democratic Party or the first “neoliberal” president.
Carter is a political orphan, and almost everything about his rise to power and time in office is strange in retrospect. First, consider the electoral map of the 1976 election:
No Democrat before or since has won with this weird coalition of states. Carter traded big states in the Northeast and Midwest with Ford, winning Ohio, Pennsylvania, and New York but losing Illinois, Michigan, and New Jersey. Shut out in the West except for Hawaii, he held on by sweeping the old Solid South except for Virginia. He remains the last Democratic presidential nominee to win Texas, Alabama, Mississippi, or South Carolina.
Carter’s relatively narrow victory stemmed from the lingering stench of Watergate that followed Ford, as well as Carter’s innovative stance as an outsider, a Southern governor who could clean up Washington. Before Carter, the most recent governor to ascend to the presidency was FDR, and the three most recent presidents had been consummate insiders with decades-long careers in federal politics. After Carter, three of the next four presidents were governors who ran as outsiders. Ronald Reagan and George W. Bush benefited immensely from the evangelical vote that Carter, perhaps the last sincere Christian to hold the office, activated; Bush 43 in particular won two narrow Carter-esque victories thanks to incredible strength throughout the South.
As a president, Carter was remarkably blunt. His famous cardigan address, delivered by a fireplace, includes him smirking dismissively at people who doubted his view that America needed to get better at energy conservation. His crisis-of-confidence speech is a bit of truth-telling that no president since has even tried to replicate. People praise Donald Trump for “telling it like it is,” which he doesn’t, but Carter really did and was crucified for it. Pundits and politicians still act like they idolize Carterist bluntness, but they only like the idea of it and would never take the risks Carter himself did when discussing energy or public policy solo on national TV.
Carter entered office at the apex of the Democratic Party’s post-Watergate dominance. However, he struggled to govern, famously pissing off Ted Kennedy with his austere inauguration party and clashing with progressives who pushed legislation like Humphrey-Hawkins, a bill that would have guaranteed employment to every American adult.
His few accomplishments were nevertheless notable. He deregulated airlines, trucking, and railroads, the consequences of which we are still living with (this is what leftist critics refer to when they call Carter a neoliberal shill), since Reagan and every subsequent president took a similar approach to industry. He appointed Federal Reserve chairman Paul Volcker, who jacked up interest rates, hurting workers and winning the admiration of Ronald Reagan, who retained him in the 1980s. The New Deal era ended during the Carter presidency.
In foreign policy, he established the Carter Doctrine, which set the table for decades of US intervention in the Middle East, especially under the presidents Bush. He ramped up military spending to pressure the USSR, after attacking Ford from the right in 1976 for his “no Soviet domination of Eastern Europe” gaffe during a debate. Military spending has been spiraling upward ever since.
Carter’s bluntness, political limitations, and challenges with issues in the world at large (e.g., the energy crisis) doomed his re-election prospects. But his influence lives on, as every president since, with the slight exception of Obama, has taken a similar approach to military aggression abroad and deregulatory policy at home. The incoherence of the Trump presidency may finally signal the beginning of the end for the current era of presidential politics that began in the late 1970s, but until a progressive Democrat takes office and takes the country on a distinctly different course, the contours of the Carter era are still with us.
Years ago, I joined the conversation about whether video games constitute “art.” The late Roger Ebert spawned a thousand hot takes by refusing to classify them as such, arguing that their winnability set them apart from classical art forms that cannot be won or lost, only experienced. I wrote this on the subject almost five years ago:
“Classic [Nintendo Entertainment System, hereafter “NES”] and [Super Nintendo Entertainment System, hereafter “SNES”] games are nowadays mostly playable only via emulation. Imagine if you could only watch The Thief of Baghdad or The Birth of a Nation by “emulating” (or actually using!) an early 20th century era projector and screen. Of course, that isn’t the case – you can watch either one on any device that has Netflix on it. Similarly, imagine if the works of Shakespeare could only be read on 17th century folio paper and were essentially illegible on anything printed after that time. Such a reality would be absurd, but it’s basically the issue that plagues video games: their greatness, with precious few exceptions, isn’t transferable across eras.”
If you are not a frequent gamer, allow me to take a step back and walk through what either of us would need to do in order to play, say, Excitebike, a game that launched alongside the NES in 1985. I basically have three options, which I will present in descending order of fidelity:
- Play the game from a physical cartridge on either an original NES or one of the systems it was ported to, such as the Game Boy Advance.
- Play it from the NES Classic, an official Nintendo product launched in 2016 with 30 built-in games remastered for HDTVs.
- Emulate it using specialized software on a PC/Mac (a hassle if you aren’t technically minded) or within a web browser, both of which are legally dubious.
None of these options are ideal if you are accustomed to the seamless on-demand experience of video/audio streaming and digital books in particular. And believe it or not, Excitebike is probably a relatively easy game to dust off, since it a) was released before the era of online gaming and downloadable content and b) is maintained by Nintendo, one of the world’s most historically conscious and nostalgic companies. Many games will not hold up as well.
As I see it, there are at least three major obstacles to the preservation of video games as art:
1. Disappearance of specialized hardware
Most games are designed to exploit the particular hardware of a given system. Super Mario 64 was constructed around the Nintendo 64’s distinctive analog stick, while GoldenEye 007 forever altered video game control schemes through its use of the trigger-like Z button on the same console. The Wii is home to countless games requiring motion controls, including its pack-in, Wii Sports, which is the best-selling console game of all time. Smartphone/tablet games are no different, with controls incorporating taps, swipes, and other gestures.
What happens when all this hardware is no longer readily available? We already know the answer, given the enormous demand that has chased the limited supply of NES Classic and SNES Classic consoles that bundle their respective titles into ready-to-play hardware. People will likely not play or experience those games anymore, unless they have a really convenient option for doing so (and DIY emulation doesn’t count).
Games that are emulated or ported to other platforms lose some of their original design, in a way that a book, painting, album, or movie does not. For example, if I play Excitebike on my computer with a keyboard and infinite save states, that’s a very different experience than playing it on an original NES. In comparison, the differences between watching Citizen Kane on my phone and in an arthouse cinema seem minor.
2. Online functionality
Online gaming took center stage beginning in the late 1990s, with consoles such as the Sega Dreamcast and Microsoft Xbox incorporating internet connectivity infrastructure right out of the box (previous systems had required various aftermarket peripherals). The spread of broadband internet further fueled the rise of franchises that not only had online multiplayer functionality, but in some cases had nothing but that (the massively popular Destiny 2 is online-only, for example).
Of course, a sustainable online-only or online-mostly game requires a healthy community. Some games, such as World of Warcraft, have sustained their fanbases for years, while others have shut their doors after interest waned, rendering them impossible for posterity to experience.
Nintendo offers some prime examples of the tenuous nature of online games. Its Nintendo Wi-Fi Connection service, which powered many games on both the Wii and the DS, shut down in 2014 because it had been hosted on third-party servers that were acquired in a merger. No one can go online anymore in Advance Wars: Days of Ruin or any other title reliant on the Wi-Fi Connection platform. Similarly, the company shut down Miiverse recently, leaving the lobby of the online shooter Splatoon weirdly vacant; it had previously been populated by virtual characters who, if you approached them, presented drawings made by players and saved to Miiverse servers.
3. Software updates
This flaw is not one I considered in my 2013 post, but I now think it may be the most significant of the three. To understand why, we first have to ask: Why bother with game consoles at all?
A console is basically a shortcut. Instead of having to build your own gaming PC or purchase a super high-end mobile device and keep updating it every few years, you can purchase a standardized piece of hardware that will be good for at least 5-7 years before a successor is released. Plus, you can rest assured that any title released for the system will work on the hardware you purchased.
Consoles were once super distinct from PCs, since they had essentially no user-facing operating system. You couldn’t dig into their data management setups, change their network connections, or do anything you take for granted on other platforms, since they didn’t have any such features.
That began to change when consoles became internet-enabled and gained media playback capabilities, with the DVD-playing PlayStation 2 and Ethernet-equipped Xbox perhaps the first real inflection points. Today’s games often require enormous patches or updates to remain playable and secure, as do the system OSes they run on.
Updates are a particular weakness for phone/tablet games. Consider the iPhone: Every single year, it receives multiple new models, with fresh software APIs, updated chips, different screen resolutions/sizes, etc. Like clockwork, the presenters at the Apple keynotes talk about how these new features will make the device “console-level.” Yet iOS and Android are still more synonymous with free-to-play gambling games, which account for an enormous share of platform revenue, than with more in-depth gameplay. Why?
I think the endless upgrade cycle is partly to blame. One iOS game developer recently decided to leave the App Store altogether, saying (emphasis mine):
“This year we spent a lot of time updating our old mobile games, to make them run properly on new OS versions, new resolutions, and whatever new things that were introduced which broke our games on iPhones and iPads around the world. We’ve put months of work into this, because, well, we care that our games live on, and we want you to be able to keep playing your games. Had we known back in 2010 that we would be updating our games seven years later, we would have shook our heads in disbelief.”
There’s simply no guarantee that a game developed for any mobile platform will run even a few years later without proactive updates to save it from obsolescence. This issue doesn’t exist as much on consoles (since they are designed to be fixed systems with long lifespans), and especially not on older consoles. I can put a cartridge in a 1998 Game Boy and, barring any electrical or technical issues, be certain it will load and play as intended. I can’t say the same about an iOS game that hasn’t been updated since 2016.
The future of gaming history
The software update issue was raised by a blogger, Lukas Mathis, in a post about the wrongness of various other tech bloggers’ predictions about Nintendo. Between approximately 2011 and 2016, it was very fashionable to proclaim that Nintendo was failing and headed the way of Sega, i.e., toward being a software developer for other people’s hardware, instead of a hardware maker in its own right (Sega exited the console business in 2001, only ten years after its sweeping success with the Sega Genesis). A few choice quotes (all emphasis mine):
John Gruber in 2013, in a post comparing Nintendo to BlackBerry: “No one is arguing that 3DS sales haven’t been OK, but they’re certainly not great…Here is what I’d like to see Nintendo do. Make two great games for iOS (iPhone-only if necessary, but universal iPhone/iPad if it works with the concept). Not ports of existing 3DS or Wii games, but two brand new games designed from the ground up with iOS’s touchscreen, accelerometer, (cameras?), and lack of D-pad/action buttons in mind. (“Mario Kart Touch” would be my suggestion; I’d buy that sight unseen.) Put the same amount of effort into these games that Nintendo does for their Wii and 3DS games. When they’re ready, promote the hell out of them. Steal Steve Jobs’s angle and position them not as in any way giving up on their own platforms but as some much-needed ice water for people in hell. Sell them for $14.99 or maybe even $19.99.”
MG Siegler that same year: “I just don’t see how Nintendo stays in the hardware business. … I just wonder how long it will take the very proud Nintendo to license out their games.”
Marco Arment, responding to Siegler: “I don’t think Nintendo has a bright future. I see them staying in the shrinking hardware business until the bitter end, and then becoming roughly like Sega today: a shell of the former company, probably acquired for relatively little by someone big, endlessly whoring out their old franchises in mostly mediocre games that will leave their old fans longing for the good old days.”
There’s plenty more material like these pronouncements, all of it built on several (in my opinion flawed) assumptions about the future of gaming: first, that it will from now on be irreversibly dominated by buttonless pieces of glass (i.e., phone and tablet screens) and the race-to-the-bottom pricing they encourage; second, that gaming-specific hardware eventually won’t matter, since everything will be done on general-purpose computing devices; and third, that developers like Nintendo can build sustainable businesses selling high-quality games for $20 or less, despite the enormous resources required to make something as daring as Super Mario Odyssey.
If the assumptions are correct, there seems little prospect of even today’s most famous games being preserved as “art,” since they’ll have to be endlessly redeveloped and remonetized to be sustainable. But what if the assumptions aren’t correct? What if mobile no more cannibalizes consoles than PCs did in the 1990s?
The punchline to those quotes is that Nintendo ended up selling 70 million 3DSes (almost on par with the PlayStation 4 at the end of 2017) and saw the Switch have the best first-year sales of any home console in U.S. history. It accomplished all of that while keeping online functionality and software updates relatively minimal in its first-party titles and going all-in on the bizarre, distinctive hardware of the Switch.
My first encounter with the Switch had me going back to my phone and thinking, “this feels old.” Perhaps tapping on a phone screen isn’t the “end of history” of video gaming it has sometimes been presented as; maybe there’s a place for more sophisticated hardware after all. I hope so, since the production and preservation of such systems will be crucial if we are to ever have a real “art history” of video gaming.
[Note: I’m going through my enormous “drafts” folder and seeing if I can salvage any of the posts without changing their titles or opening lines. This is my first try.]
Every generation has its battle between, on one hand, those who pine for the “old days” and, on the other, proponents of progress who inevitably think better things are preordained. I once probably found the former camp more off-putting, due to their affection for activities – like hanging out in a Wal-Mart parking lot, the definitive form of group recreation during my high school years in Kentucky – they’ve outgrown; they make the past appear like baby clothes: impossible to fit back into, but not impossible to recycle on someone else or hold up in reverie. Maybe even with the immense powers of the empty brain, they can make bygones keep happening.
But the progress camp has made a strong run of its own. “Look at these charts showing there have been fewer wars since 1945!” Yes, that’s a form of progress, but it might also be an historic anomaly, sustained only by norms around nuclear missiles, as Dan Carlin noted in a gripping podcast episode about the history of weapons of mass destruction.
Years ago, I entitled this post “The Battle of the Books” in hopes of discussing Jonathan Swift’s work of the same name, which features a debate between the Ancients and Moderns, each represented by equally fussy books in the St. James Library; hence my own much clumsier attempt to juxtapose the “glory days” crowd with the techno-utopians. The piece focuses on how each camp thinks its particular era is the golden age of arts and letters. They’re allegorized by a spider (Moderns) and a bee (Ancients) who debate each other, prior to the actual authors of each era (everyone from Homer to Hobbes) engaging in violent combat.
While short, this satirical piece is, in my view, among the tightest and most quotable works of prose in English. It leads with a stunning self-referential opening line [all emphasis throughout is mine] – “Satire is a sort of glass wherein beholders do generally discover everybody’s face but their own” – and never relents.
The quip “anger and fury, though they add strength to the sinews of the body, yet are found to relax those of the mind” comes to mind equally during vigorous exercise or the frustrating, angry exchanges over email and other internet-connected tools that do nothing for the body while sending the mind into a tailspin.
This segment reminds me of Elizabethan language about daggers and spears, but in my opinion supersedes Shakespeare et al. in the nuance it conveys about how writing has both an empowering and destructive effect on its most talented executors: “[I]nk is the great missive weapon in all battles of the learned, which, conveyed through a sort of engine called a quill, infinite numbers of these are darted at the enemy by the valiant on each side, with equal skill and violence, as if it were an engagement of porcupines. This malignant liquor was compounded, by the engineer who invented it, of two ingredients, which are, gall and copperas; by its bitterness and venom to suit, in some degree, as well as to foment, the genius of the combatants.”
He then progresses to talk about the unbearable process of insisting your argument is better than anyone else’s, but notes that even the most definitive “trophies” of literary achievement ultimately become artifacts of controversy, potentially dissolved by later debates, like the groups I mentioned earlier who are ever looking forward: “These trophies have largely inscribed on them the merits of the cause; a full impartial account of such a Battle, and how the victory fell clearly to the party that set them up. They are known to the world under several names; as disputes, arguments, rejoinders, brief considerations, answers, replies, remarks, reflections, objections, confutations. For a very few days they are fixed up all in public places, either by themselves or their representatives, for passengers to gaze at; whence the chiefest and largest are removed to certain magazines they call libraries, there to remain in a quarter purposely assigned them, and thenceforth begin to be called books of controversy. In these books is wonderfully instilled and preserved the spirit of each warrior while he is alive; and after his death his soul transmigrates thither to inform them.”
This is exquisite commentary on the ever-living characteristics of books: “a restless spirit haunts over every book, till dust or worms have seized upon it.”
On the high ambitions but limited abilities of the Moderns; sounds like this could have been penned about proponents of perpetually underwhelming tech like virtual reality and autonomous cars: “for, being light-headed, they have, in speculation, a wonderful agility, and conceive nothing too high for them to mount, but, in reducing to practice, discover a mighty pressure about their posteriors and their heels.”
Swift also effortlessly shifts to some of the best speculative writing I’ve encountered, on par with, if not better than, what he pulled off in “Gulliver’s Travels.” Witness this passage about a spider and a bee: “The avenues to his castle were guarded with turnpikes and palisadoes, all after the modern way of fortification. After you had passed several courts you came to the centre, wherein you might behold the constable himself in his own lodgings, which had windows fronting to each avenue, and ports to sally out upon all occasions of prey or defence. In this mansion he had for some time dwelt in peace and plenty, without danger to his person by swallows from above, or to his palace by brooms from below; when it was the pleasure of fortune to conduct thither a wandering bee, to whose curiosity a broken pane in the glass had discovered itself, and in he went, where, expatiating a while, he at last happened to alight upon one of the outward walls of the spider’s citadel; which, yielding to the unequal weight, sunk down to the very foundation.”
A highly recognizable critique of filibustering senators and “contrarians” of all sorts who like nothing more than argument itself, undercutting the very “trophies” cited earlier: “At this the spider, having swelled himself into the size and posture of a disputant, began his argument in the true spirit of controversy, with resolution to be heartily scurrilous and angry, to urge on his own reasons without the least regard to the answers or objections of his opposite, and fully predetermined in his mind against all conviction.”
The spider poetically describes a bee: “[B]orn to no possession of your own, but a pair of wings and a drone-pipe. Your livelihood is a universal plunder upon nature; a freebooter over fields and gardens; and, for the sake of stealing, will rob a nettle as easily as a violet.”
More on the transience of literary achievement and fame, of trophies that can easily fade: “Erect your schemes with as much method and skill as you please; yet, if the materials be nothing but dirt, spun out of your own entrails (the guts of modern brains), the edifice will conclude at last in a cobweb; the duration of which, like that of other spiders’ webs, may be imputed to their being forgotten, or neglected, or hid in a corner.”
On what the Ancients see in the itinerant art of the bee, which behaves like a poet searching for magical inspiration but knowing that legwork (literally, in this case) is necessary: “As for us, the Ancients, we are content with the bee, to pretend to nothing of our own beyond our wings and our voice: that is to say, our flights and our language. For the rest, whatever we have got has been by infinite labour and search, and ranging through every corner of nature; the difference is, that, instead of dirt and poison, we have rather chosen to fill our hives with honey and wax; thus furnishing mankind with the two noblest of things, which are sweetness and light.”
Setting the table with cosmic implications: “Jove, in great concern, convokes a council in the Milky Way. The senate assembled, he declares the occasion of convening them; a bloody battle just impendent between two mighty armies of ancient and modern creatures, called books, wherein the celestial interest was but too deeply concerned.”
A fantastical personification of criticism as a vicious and ill-informed goddess: “Meanwhile Momus, fearing the worst, and calling to mind an ancient prophecy which bore no very good face to his children the Moderns, bent his flight to the region of a malignant deity called Criticism. She dwelt on the top of a snowy mountain in Nova Zembla; there Momus found her extended in her den, upon the spoils of numberless volumes, half devoured. At her right hand sat Ignorance, her father and husband, blind with age; at her left, Pride, her mother, dressing her up in the scraps of paper herself had torn. There was Opinion, her sister, light of foot, hoodwinked, and head-strong, yet giddy and perpetually turning. About her played her children, Noise and Impudence, Dulness and Vanity, Positiveness, Pedantry, and Ill-manners. The goddess herself had claws like a cat; her head, and ears, and voice resembled those of an ass; her teeth fallen out before, her eyes turned inward, as if she looked only upon herself; her diet was the overflowing of her own gall; her spleen was so large as to stand prominent, like a dug of the first rate; nor wanted excrescences in form of teats, at which a crew of ugly monsters were greedily sucking; and, what is wonderful to conceive, the bulk of spleen increased faster than the sucking could diminish it.”
The best critique of “grammar hounds” and anyone else more obsessed with technical features than with clear meaning: “[B]y me beaux become politicians, and schoolboys judges of philosophy; by me sophisters debate and conclude upon the depths of knowledge; and coffee-house wits, instinct by me, can correct an author’s style, and display his minutest errors, without understanding a syllable of his matter or his language; by me striplings spend their judgment, as they do their estate, before it comes into their hands. It is I who have deposed wit and knowledge from their empire over poetry, and advanced myself in their stead. And shall a few upstart Ancients dare to oppose me?”
A thrilling description of Criticism influencing the discourse, with an especially striking line about “now desert” bookshelves: “The goddess and her train, having mounted the chariot, which was drawn by tame geese, flew over infinite regions, shedding her influence in due places, till at length she arrived at her beloved island of Britain; but in hovering over its metropolis, what blessings did she not let fall upon her seminaries of Gresham and Covent-garden! And now she reached the fatal plain of St. James’s library, at what time the two armies were upon the point to engage; where, entering with all her caravan unseen, and landing upon a case of shelves, now desert, but once inhabited by a colony of virtuosos, she stayed awhile to observe the posture of both armies.”
Even amid the verbal pyrotechnics, Swift finds time to be unforgettably funny: “Then Aristotle, observing Bacon advance with a furious mien, drew his bow to the head, and let fly his arrow, which missed the valiant Modern and went whizzing over his head; but Descartes it hit; the steel point quickly found a defect in his head-piece; it pierced the leather and the pasteboard, and went in at his right eye. The torture of the pain whirled the valiant bow-man round till death, like a star of superior influence, drew him into his own vortex.”
Even better, about Virgil struggling with an ill-fitting helmet and appealing to Dryden for help: “The brave Ancient suddenly started, as one possessed with surprise and disappointment together; for the helmet was nine times too large for the head, which appeared situate far in the hinder part, even like the lady in a lobster, or like a mouse under a canopy of state, or like a shrivelled beau from within the penthouse of a modern periwig; and the voice was suited to the visage, sounding weak and remote.”
A memorable closing line to pair with the opening: “Farewell, beloved, loving pair; few equals have you left behind: and happy and immortal shall you be, if all my wit and eloquence can make you.”
Income inequality is an inescapable topic in American political discourse in 2017. It’s probably more accurate to talk about “wealth inequality,” though, since the most influential elites and corporations derive the bulk of their money from the passive appreciation of assets (like stocks and bonds), rather than from paychecks. That quibble aside, why is inequality an issue worth talking about? Let’s look back to an event whose end will reach its centennial next year – the First World War.
On the eve of World War I, the top 1 percent of British residents controlled a staggering 70 percent of the country’s wealth. Similar gaps prevailed in France and Germany. These nations were the pivotal actors of the conflict, with Russia, the U.S., Austria-Hungary, and Italy its secondary players. Inequality was an essential feature of all the pre-WWI societies in Europe and North America that had just emerged from the Gilded Age.
At the same time, many of these countries were in fact empires, overseeing vast territorial holdings spanning the globe. The U.K. and France were the preeminent colonial powers, but almost every industrialized country at the time, from the U.S. to Japan, had gotten in on the game starting in the late 1800s (indeed, the Anglo-Russian struggle for control of Central Asia was called “the Great Game”).
Inequality and imperialism were interrelated. With so much of the western world’s wealth controlled by so few, there was an oversupply of money seeking out too few investment opportunities. This surplus of investible assets was driven by weak domestic aggregate demand stemming from inequality; hence the need to continually look abroad for speculative openings offering high returns.
More specifically, colonial empires and massive militaries were the direct consequences of the disproportionate influence of a tiny, wealthy set of elites driving major policy decisions. Incidents such as the First Moroccan Crisis illustrated the high stakes of holding onto remote territory. Meanwhile, expansionism into Africa and Asia was reinforced by the growing power of corporate monopolies and cartels seeking to broaden their market penetration to a global scale.
We all know how World War I was resolved, with Germany in ruins, Russia converted to the USSR, and the U.S. assuming a newly assertive role in global politics, at least temporarily. But we don’t know how the next such crisis of inequality and imperialism – namely the one occurring right now – will end.
Since the Asian financial panic of the late 90s, the global economy has been dominated by speculative bubbles that were products of too much capital chasing too few opportunities. After Asia, there was the dotcom bust in 2001, the housing meltdown in 2008, and the current absurdities in cryptocurrency (e.g., Bitcoin) and Silicon Valley (raw water, anyone?).
Along the way, there has also been considerable consolidation in virtually every industry in the U.S. Mega-mergers of hospitals, telecoms, retailers, etc. have concentrated growing amounts of power in fewer hands. Gigantic corporations including Microsoft, Comcast, AT&T, and Amazon, far from being forces for progress and inclusion as their modern PR-tailored images might suggest, have now aligned themselves with the far-right wing of the Republican Party to ensure low corporate tax rates. This is why you can’t separate the business aspects of the GOP from its racism; business support provides the resources the party needs to exploit disadvantaged groups on other fronts.
Big business was central to the chaos that preceded WWI, primarily through its stake in colonial empires and military spending. Decades later, German companies were pivotal in convincing President Paul von Hindenburg to appoint Hitler as chancellor, despite the latter’s defeat in the 1932 presidential election and his party’s lack of a governing majority in the Reichstag. It was big business, not the masses swept up in “populism,” that enabled the most destructive warfare of all time.
I’m not saying we’re heading for another 1914-1945 cataclysm. We should be wary, though, of how inequality is surging at a time when corporations are consolidating and supporting politicians who also favor enormous military spending and possible adventurism in theaters such as Iran and North Korea.