When Donald Trump was elected president, many on both the left and the right presented him as a break with the status quo of American conservatism. Trump, they argued, represented a descent into a new, more vulgar ideology that was far removed from normal right-wing discourse and practice.
There were, and continue to be, countless paeans to how previous GOP presidents, mostly Bush 43 and Reagan, were “different” in some capacity, along with hackneyed attempts to separate “honorable” Republicans like Paul Ryan from the dark underbelly of the Republican Party that Trump helped mainstream. Bush 43 giving Michelle Obama a piece of candy during John McCain’s funeral, while an incredibly low-stakes act, perfectly captures this fiction of a discrete, distinctly pre-Trump GOP: remember when the Norms Were Good and conservatives Deserved Respect Even If You Didn’t Agree With Them?
In reality, there’s an incredible continuity in the Republican Party, not just between Trump and Bush but all the way back at least to the reactionaries of the 1960s. Trump is a totally normal Republican politician.
Start with Brett Kavanaugh. Trump elevated him to the Supreme Court, but Bush 43 made him a judge in the first place. Kavanaugh also worked in the Bush White House, corresponding with the likes of John Yoo on torture. Before that, he worked for Ken Starr investigating various Clinton scandals, alongside Rod Rosenstein and Alex Azar – both future Trump Cabinet appointees.
Starr was of course Solicitor General under Bush 41, who like Trump nominated to the Supreme Court a manifestly unfit judge credibly accused of sexual assault and then chose to stick by that nomination despite significant backlash. Like Trump, Bush 41 was accused of harassment by multiple women, yet persists for many as a paragon of a different, gentler kind of GOP president. That’s a weird formulation for the man who employed Lee Atwater, mastermind of the Trumpesque race-baiting Willie Horton ads of 1988.
Bush 41 was Reagan’s VP for 8 years. Reagan is positively remembered even by some Democratic politicians, despite numerous similarities with Trump beyond just their shared background in entertainment. Reagan referred to black men as “strapping young bucks” and perpetuated the racialized welfare queen myth. He associated himself with segregationist Strom Thurmond and launched his 1980 campaign in Philadelphia, Mississippi, the site of the KKK’s murder of three civil rights volunteers during the Freedom Summer of 1964.
Reagan was very conservative, even for the time – he challenged Gerald Ford from the right in 1976 and almost won, which is amazing in retrospect since Ford himself had incredible bona fides as a conservative. Ford helped contain the damage to the GOP from Watergate by pardoning Nixon, and he employed two pivotal figures who would go on to do immense damage in the Bush 43 administration: Dick Cheney and Donald Rumsfeld, who served as White House Chief of Staff and Secretary of Defense, respectively, in the Ford administration.
Nixon’s similarities to Trump are almost too obvious to discuss. A paranoid yet clever politician constantly in legal and ethical trouble, he precipitated one of the biggest crises in the history of American governance. Alongside Ford, he also campaigned heartily for Barry Goldwater in 1964.
The Goldwater presidential campaign is really where modern American conservatism coalesced into a coherent movement. Goldwater captured the rabid racists who were once stalwarts of the Southern Democratic Party, in the process breaking the Solid South to win overwhelming victories in Mississippi, Alabama, and most of the Deep South. The KKK loved him. He opposed the various iterations of the Civil Rights Act as oversteps of big government, yet wholeheartedly supported aid to Rhodesia (now Zimbabwe) in support of its ruling white nationalists – as if the question of whether “big government” is good is simply a matter of which race it’s hurting.
Brett Kavanaugh’s elevation to the Supreme Court is just one bridge on the continuous highway of conservatism from Goldwater to Trump. There’s no golden age of an honest, ethical, and sane Republican Party, clearly removed from Trumpism, to look back on at any point after the Eisenhower administration. “Noble” or “moderate” Republicans won’t save us, as Susan Collins’ ‘yes’ vote for Kavanaugh demonstrates.
Go back in time with me to the 1860s. The Fourteenth Amendment to the U.S. Constitution has just been drafted, containing the following text:
“All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside.”
This is one of the most important sentences in American history. It affords to anyone born on U.S. soil the full rights of U.S. citizenship – or does it?
Conservatives including Jeff Sessions, Ann Coulter, and Michael Anton (who wrote an odious op-ed for The Washington Post on this very topic) have long crusaded against this clause of the Fourteenth Amendment, despite its clear and obvious meaning and intent. The amendment was first meant to ensure that descendants of slaves were Americans, but its power extends far beyond that cause, as it simplifies the entire project of citizenship without requiring the complex and often controversial jus sanguinis (citizenship by blood) systems of many other countries. It makes assimilation easy.
So why does the hard right rail against its original intent? Because they think it’s a giveaway to “illegal immigrants” who can enter the country, have children, and be certain those children are Americans.
You’ll notice I put quotes around “illegal immigrants.” That’s because, in the 1860s, when the 14th Amendment was ratified, there was no such thing as an “illegal immigrant.” It’s a modern concept that would have been incomprehensible in 19th century America, where virtually every ancestor of anyone calling themselves an American arrived via a method that, going by latter-day legislation, we would in theory call “illegal” – but we don’t, because then everyone’s perceived legitimacy in the country would be at stake.
So an originalist reader of the 14th Amendment would clearly have to say that, nope, you can’t interpret it as something meant to exclude “illegal immigrants” and their families from the rights of citizenship, since no such distinction between legal/illegal migration existed at the time. You’ll be shocked to learn that conservative originalists – i.e., people in the legal community who purport to interpret the Constitution in the context of its original meaning at the time of enactment – don’t hold this position.
They’re not just hypocrites – they’re subscribers to an incoherent worldview. Originalism is often contrasted with “living constitutionalism,” the practice of reading the Constitution as a living document whose meaning changes with time and requires new readings aligned with the culture at large. The implication is often that while living constitutionalists (read: liberals) are “legislating from the bench” as “judicial activists,” conservative originalists are simply following the letter of the law. This is absurd, and not just because of the hypothetical 14th Amendment issue I raised. There are so many cases in which this comes through:
Perhaps the most infamous, the 2nd Amendment contains (indeed, starts with!) the phrase “A well-regulated militia, being necessary…” Gun ownership is framed right then and there in the context of military service, not an individual right to own as many assault rifles (which didn’t even exist in the 1700s) as possible. Yet the latter has become the bog-standard position of conservative “originalists.”
Or consider the Fifteenth Amendment. A refresher – it says:
“The right of citizens of the United States to vote shall not be denied or abridged by the United States or by any State on account of race, color, or previous condition of servitude.”
And yet it so often is, by voter ID laws and other bullshit that is often explicitly targeted at black voters. Original intent would preclude any such barriers, yet conservative “originalists” are invariably the ones pushing policies in this realm, from statehouses to the judiciary.
The endless conservative assaults on the Voting Rights Act demonstrate the bad faith of originalists, who are happy to ignore both the intent of the Reconstruction Amendments (13th–15th) and the will of the Congress that enacted the voting-specific legislation (as explicitly authorized by the text of those amendments), instead reading arcane theories about the “sovereign dignity” of the states into a Constitution that doesn’t contain them. Shelby County v. Holder is the relevant case here.
Don’t fall for originalist jargon from the likes of Neil Gorsuch or pending Supreme Court nominee Brett Kavanaugh. They’re conservative reactionaries opposed to equality, and that’s that.
The retirement of U.S. Supreme Court Associate Justice Anthony Kennedy under a GOP president was one of the most predictable crises of recent years, but one that nevertheless seems to have triggered, almost overnight, a sea change in how the left talks about the judiciary. Sure, there have been the occasional pleas for “moderate” nominees from President Donald Trump. Yet even milquetoast progressive legal commentators such as Ian Millhiser of ThinkProgress – whose greatest hits include calling Antonin Scalia a “great scholar” – have now called for court packing. There has been a notable shift in the Overton window regarding the Supreme Court.
“Court packing” has a negative connotation – it stirs up images of strongmen trying to work the system by rigging its membership in their favor. It shouldn’t, though; from almost day one of the United States, court packing has been a live concern for the other two branches of government, and for wholly practical reasons. Court packing is as American as apple pie, and it’s needed more than ever.
The Supreme Court: A cure worse than the disease
The case that created the Supreme Court as we know it was about court packing. Marbury v. Madison concerned the last-ditch effort of outgoing President John Adams to fill up the judiciary with Federalist judges before Thomas Jefferson took office. Jefferson’s inauguration marked the first time the Federalists would not have control of the executive branch after 12 years of George Washington plus John Adams, and they were scared – like any American conservative party relinquishing its control of government, they saw the unaccountable, life-appointed judiciary as the rearguard of their power.
The Supreme Court’s ruling in Marbury v. Madison established the precedent of judicial review, which permits the judiciary to take another look at executive and legislative actions. Nowhere in the U.S. Constitution is the Supreme Court afforded the power to strike down laws or even to determine their constitutionality. Though that power is hinted at in some of the Federalist Papers, it was primarily engineered by Chief Justice John Marshall in Marbury v. Madison to resolve one of the many intractable problems arising out of the separation of powers fundamental to the U.S. political system: How should a new administration execute the commissions of judges already nominated and approved by members of the opposing party?
Of its many flaws, the U.S. Constitution’s lack of foresight about the rise of political parties is one of the most significant. It’s amazing in retrospect that the “geniuses” behind it didn’t think that, say, having the Senate and the presidency divided between fiercely opposed factions might grind the government to a halt. Even though the Supreme Court was almost certainly not intended to be a super-legislature, it became that in part because the uniquely inefficient design of the American government often leaves no clear resolution to partisan disputes.
Following Marbury v. Madison, the Supreme Court took on an activist role it has never since let go of. It has weighed in on every issue from the Fugitive Slave Act (“it’s good, basically,” to paraphrase the court led by Jackson appointee Roger Taney) to Japanese internment (“also good,” to summarize the Harlan Stone court’s opinion in Korematsu v. United States), almost always on the wrong side morally. Why the consistently awful opinions?
Because the Supreme Court is a fundamentally conservative institution. It’s the least accountable branch of the federal government due to lifetime tenure, and it has consistently been staffed by the bourgeoisie, since many presidents have preferred highly credentialed lawyers or former politicians as nominees. That’s a recipe for dominance by conservative white men, who can issue whatever opinions they like with virtually no fear of consequences.
The Supreme Court is, as such, a horrible cure for the disease of divided government: Whenever party disputes grind the participatory political system to a halt, the court steps in to give the reactionary perspective of the upper classes. Its conservatism is a product not only of its insularity from democratic society, but also of its links to two anti-democratic institutions whose power it reinforces: the Senate and the Electoral College.
These institutions over-represent rural white populations and have helped sustain conservatism in the U.S. despite immense social change. For most of its history, the U.S. has lacked anything resembling a liberal political party, in part because of the constraints created by the Senate and Electoral College. Accordingly, elections often go to conservative politicians who nominate and approve conservative judges.
The first somewhat liberal party to emerge was the Republican Party of the 1860s, which was founded with the goal of abolishing slavery. However, it quickly ran up against the obstacle of an entrenched judiciary appointed by multiple Democratic and Whig presidents. In response, Abraham Lincoln temporarily packed the court to ensure a favorable majority at a time when more than half the country was committed to the cause of abolition via victory over the Confederacy. He took a step to ensure the judiciary was better aligned with the popular will and moment.
Following the presidency of Rutherford Hayes, the country sank back into almost complete conservatism, with a neoconfederate Democratic Party and a business-friendly Republican Party. The courts of this era paved the way for Lochnerism, the anti-regulatory, almost libertarian doctrine that prevailed until the 1930s, when President Franklin Roosevelt pushed back against judicial hostility to the New Deal. The Supreme Court finally relented after FDR drew up a plan to pack it with Democrats, who by this time were becoming much more progressive.
How the Supreme Court became one of the highest-stakes battlegrounds
Most presidents have won the popular vote at least once, and the Senate used to be slightly more equitable in the days before a handful of states like California, Texas, Florida, and New York (with a combined population larger than Germany’s) came to dominate the population distribution – though this was offset by the fact that, until the 1910s, the Senate wasn’t even directly elected. Over time, changes in political coalitions and demographics have enabled small pluralities or even minorities of the electorate to elect the politicians who in turn appoint life-tenured judges to what amounts to an unaccountable super-legislature.
That’s a disaster for democracy. The real turning point came in 1968. From the 1950s until then, the 20 consecutive years of Democratic rule under FDR and Harry Truman meant the courts were stacked with liberal appointees, who in turn oversaw an anomalous streak of progressive rulings such as Brown v. Board of Education and Miranda v. Arizona. The traditionally conservative tide of the judiciary had ebbed, and you better believe that America’s reactionaries noticed. When it came time for Lyndon Johnson to replace Earl Warren, Senate Republicans filibustered Abe Fortas’ nomination as Chief Justice, leaving both him and Warren on the court to eventually be replaced by Richard Nixon, who had won 43.4 percent of the vote and promptly went on to nominate 4 Supreme Court justices including future Chief Justice William Rehnquist.
Since 1969, there has not been a single day when a majority of the court’s justices had been appointed by Democratic presidents. The Supreme Court has remained a vanguard of conservative power even as Democrats have come to dominate presidential elections. With the ongoing polarization of the two major parties, the conditions have long been right for a court even more divorced from public opinion than it normally is, since even a narrow electoral victory – like Nixon’s in 1968, Bush’s in 2000, and Trump’s in 2016 – can now lead to a complete partisan transformation of the judiciary. The stakes of every presidential election have been dramatically raised as the Supreme Court has become less accountable, more activist (since Congress now neglects many of its traditional responsibilities in areas like immigration and trade), and – ironically – more burnished with the veneer of respectability, since many across the political spectrum see judges as uniquely credible and nonpartisan, despite all evidence to the contrary. Here’s what Thomas Jefferson had to say about judges after Marbury v. Madison [emphasis mine]:
“You seem to consider the judges as the ultimate arbiters of all constitutional questions; a very dangerous doctrine indeed, and one which would place us under the despotism of an oligarchy. Our judges are as honest as other men, and not more so. They have, with others, the same passions for party, for power, and the privilege of their corps…. Their power [is] the more dangerous as they are in office for life, and not responsible, as the other functionaries are, to the elective control. The Constitution has erected no such single tribunal, knowing that to whatever hands confided, with the corruptions of time and party, its members would become despots. It has more wisely made all the departments co-equal and co-sovereign within themselves.”
The Supreme Court has also held on to a semblance of legitimacy because its median vote for the past 50 years has been a country club-style Republican – whether Sandra Day O’Connor or Anthony Kennedy – who, while basically conservative, was willing to vote with the liberals in enough cases to keep both sides of the political spectrum somewhat content. Court-oriented liberal activism on issues such as LGBTQ+ rights has flourished in this period as Congress has stagnated and overall governmental gridlock has worsened.
With Kennedy’s retirement, that careful balance between liberals, conservatives, and a handful of nominal swing votes is gone. We’re now on the verge of a president who won a lower percentage of the popular vote than Michael Dukakis in 1988 having appointed 2 of the 9 justices, while 2 more were appointed by George W. Bush, who also lost the popular vote. Moreover, these judges are all documented ideologues from the Federalist Society, a professional group of conservative lawyers committed to something called originalism.
Originalism purports to interpret the Constitution “strictly,” which in practice means “conservatively.” Why anyone in 2018 would want to interpret literally and narrowly a document containing a clause counting slaves as three-fifths of a person is beyond me, but the right wing, as well as large chunks of the media, seems to think that this is a more legitimate approach to the law than the “living constitutionalism” of liberal judges who account for practical changes in society since the 18th century. Originalism has given us Clarence Thomas, Samuel Alito, and Neil Gorsuch, among many others.
A court filled by such individuals will be hostile to progressive legislation, often bending over backward to find tortured reasons to weaken or overturn it. For example, Chief Justice John Roberts’ opinion in Shelby County v. Holder, the case nullifying large parts of the Voting Rights Act of 1965, is basically a rewrite of the logic in the Dred Scott v. Sandford decision that was so hated even in the 1860s that it inspired multiple constitutional amendments (the 13th–15th). The Supreme Court’s origins as a racist institution are alive and well as long as the Federalist Society is dictating terms to GOP presidents.
Why court packing is the answer
Court packing by a Democratic administration in conjunction with a Democratic Congress could expand the Supreme Court to any size, canceling out the appointments by minority-rule presidents. It’s the only way to rein in an institution that has become increasingly removed from democracy and accountability. If it results in the decline of judicial review as a tool, even better – despite a handful of good activist decisions (like Obergefell v. Hodges), the Supreme Court should not be deciding which laws are and aren’t good enough, often in contravention of the public will.
A Supreme Court whose size can be changed at any time is one that is far more answerable to Congress, without any need for the impractical solution of impeachment, the nominal check on judges in the Constitution. I don’t think the Constitution is all that great as either a political or moral document, and its failings are a major reason why we’re in this situation of having to pray that octogenarian judges don’t retire and get replaced by 40something neoconfederates. Short of amending the Constitution, court packing is the best solution and should absolutely be on the table for the next Democratic government.
In the Bernard Shaw play “Caesar and Cleopatra,” Cleopatra whips up a baldness “cure” for Caesar consisting of a wild mix of ingredients including burnt mice and horse’s teeth. Shaw himself notes that he doesn’t understand the ingredients and of course it doesn’t work because nothing does.
In the decades that have passed since the play and the millennia since the historical Caesar and Cleopatra lived, there hasn’t been much progress in treating androgenic alopecia, the most common form of baldness, especially compared to advances in, say, antibiotics. The 1990s saw the mass release of Rogaine and Propecia as well as marked improvements in hair transplantation surgery, but a Shavian cure is apparently still far off. None of the current remedies provide a definitive solution; at best, they hold the progression of balding at bay for a few years.
I became very interested in androgenic alopecia in the early 2010s when I noticed some thinning of my own hair. During my research, the insufficiency of the associated treatments struck me: Both Rogaine and Propecia have to be taken indefinitely to maintain their benefits, which are often subtle to begin with. Transplantation is expensive and often must be supplemented with Propecia. Other treatments are by and large outlandish and unproven.
The fundamental problem, as I see it, is that baldness requires both a scientific and an artistic solution. It’s not enough for the underlying scientific process to be sound in disrupting the mechanisms of androgenic alopecia; the results of the treatment must also be dramatic and cosmetically acceptable.
This double requirement is why hair transplantation results vary so much between surgeons, some of whom are good artists and others not. It’s also why many treatments that seemingly work in vitro – like cloning one’s hair – don’t carry over to the real world, since it’s difficult to ensure that the right size, color, and direction can be achieved in vivo. Baldness, at its core, is an artistic concern.
Unsurprisingly, given its unique difficulties, baldness has inspired a truly weird set of treatments:
- A prescription-only pill that doubles as urinary retention medication for elderly men (Propecia).
- A blood pressure medication that grows hair for reasons that are still not fully understood (Rogaine, originally known as Loniten).
- A form of alternative medicine (low-level laser therapy).
- Re-injection of one’s own processed blood into the scalp (platelet rich plasma).
- Artistic rearrangement of follicles (transplantation).
Of these treatments, by far the most discussed and the most controversial is Propecia, also known by its chemical name, finasteride. It interrupts the conversion of testosterone to DHT, a more potent compound that attacks follicles in the genetically susceptible. That’s a pretty basic process to screw with, at least in males.
Nevertheless, reactions to finasteride are wide-ranging, with some takers reporting horrible side effects such as permanent erectile dysfunction and depression, while others praise it as the best cosmetic medication available – a “happy pill.” Personally, as someone who took it for years, I think it falls somewhere in between.
Its side effects are considerable and run the gamut from the subtle (difficulty sleeping) to the overt (erectile dysfunction). The original clinical trials for its approval reported very low rates of side effects, which have been repeatedly held up as proof that the many people complaining about its adverse effects are lying. At the same time, its benefits are slight compared to other “lifestyle” drugs such as Accutane, which while boasting an even worse side effect profile can dramatically resolve cystic acne for good; in contrast, finasteride must be taken continuously just to preserve the status quo.
The experience of taking finasteride reminded me of taking antidepressants years ago. I remember feeling lousy on both medications and attributing my feelings to them not having kicked in yet. In reality, they were the sources of my problems, including loss of sex drive and weight gain. I only learned years later that the rates of their sexual side effects in particular had been vastly underestimated; my prescribing psychiatrist refused to believe they could have these effects, but later research has drawn similarities between the long-term health issues caused by SSRIs and finasteride, both of which have complex effects on the brain.
In any case, finasteride’s side effects piled up for me over time, culminating in higher blood pressure and substantial weight gain yet again. I quit cold turkey and felt the same liberation I had back in 2006 when I ditched antidepressants and entered a much better phase in my life.
Finasteride is not an essential medication, even for its other indication for benign prostatic hyperplasia. It doesn’t save lives. Its potential for side effects, especially over the long term, and the possibly wide extent of these effects in the central nervous system and the liver give me pause. I don’t trust it anymore and so I won’t be taking it again. I wouldn’t recommend it to anyone unless he was truly desperate, as apparently I was years ago when I started.
Stopping it has freed me from fretting so much about my hair, a concern whose hold on me I didn’t even appreciate until I finally let it go. I still do a few minor things to keep it styled and looking healthy, but if it goes, so what? I feel like hair anxiety is such a 20-something thing and, moreover, such a straight thing – so many message board posts about balding are about “oh women won’t like me anymore once I’m bald.” This doesn’t apply to me, obviously, as a married gay man in his 30s. I don’t want to be chasing my youth instead of simply accepting aging and being grateful for an ongoing healthy life.
Years ago, I joined the conversation about whether video games constitute “art.” The late Roger Ebert spawned a thousand hot takes by refusing to classify them as such, arguing that their winnability set them apart from classical art forms that cannot be won or lost, only experienced. I wrote this on the subject almost five years ago:
“Classic [Nintendo Entertainment System, hereafter “NES”] and [Super Nintendo Entertainment System, hereafter “SNES”] games are nowadays mostly playable only via emulation. Imagine if you could only watch The Thief of Bagdad or The Birth of a Nation by “emulating” (or actually using!) an early 20th century era projector and screen. Of course, that isn’t the case – you can watch either one on any device that has Netflix on it. Similarly, imagine if the works of Shakespeare could only be read on 17th century folio paper and were essentially illegible on anything printed after that time. Such a reality would be absurd, but it’s basically the issue that plagues video games: their greatness, with precious few exceptions, isn’t transferable across eras.”
If you are not a frequent gamer, allow me to take a step back and walk through what anyone would need to do in order to play, say, Excitebike, a game that launched alongside the NES in 1985. There are basically three options, which I will present in descending order of fidelity:
- Play the game from a physical cartridge on either an original NES or one of the systems it was ported to, such as the Game Boy Advance.
- Play it from the NES Classic, an official Nintendo product launched in 2016 with 30 built-in games remastered for HDTVs.
- Emulate it using specialized software on a PC/Mac (a hassle if you aren’t technically minded) or within a web browser, both of which are legally dubious.
None of these options is ideal if you are accustomed to the seamless, on-demand experience of video/audio streaming and digital books in particular. And Excitebike is probably a relatively easy game to dust off, since it (a) was released before the era of online gaming and downloadable content and (b) is maintained by Nintendo, one of the world’s most historically conscious and nostalgic companies. Many games will not hold up as well.
As I see it, there are at least three major obstacles to the preservation of video games as art:
1. Disappearance of specialized hardware
Most games are designed to exploit the particular hardware of a given system. Super Mario 64 was constructed around the Nintendo 64’s distinctive analog stick, while GoldenEye 007 forever altered video game control schemes through its use of the trigger-like Z button on the same console. The Wii is home to countless games requiring motion controls, including its pack-in, Wii Sports, which is the best-selling console game of all time. Smartphone/tablet games are no different, with controls incorporating taps, swipes, and other gestures.
What happens when all this hardware is no longer readily available? We already know the answer, given the enormous demand that has chased the limited supply of NES Classic and SNES Classic consoles that bundle their respective titles into ready-to-play hardware. People will likely not play or experience those games anymore, unless they have a really convenient option for doing so (and DIY emulation doesn’t count).
Games that are emulated or ported to other platforms lose some of their original design, in a way that a book, painting, album, or movie cannot. For example, if I play Excitebike on my computer with a keyboard and infinite save states, that’s a very different experience than playing it on an original NES. In comparison, the differences between watching Citizen Kane on my phone and in an arthouse cinema seem minor.
2. Online functionality
Online gaming took center stage beginning in the late 1990s, with consoles such as the Sega Dreamcast and Microsoft Xbox incorporating internet connectivity right out of the box (previous systems had required various aftermarket peripherals). The spread of broadband internet further fueled the rise of franchises that not only had online multiplayer functionality, but in some cases had nothing but that (the massively popular Destiny 2 is online-only, for example).
Of course, a sustainable online-only or online-mostly game requires a healthy community. Some games, such as World of Warcraft, have sustained their fanbases for years, while others have shut their doors after interest waned, rendering them impossible for posterity to experience.
Nintendo offers some prime examples of the tenuous nature of online games. Its Nintendo Wi-Fi Connection service, which powered many games on both the Wii and the DS, shut down in 2014 because it had been hosted on third-party servers that were acquired in a merger. No one can go online anymore in Advance Wars: Days of Ruin or any other title reliant on the Wi-Fi Connection platform. Similarly, the company recently shut down Miiverse, leaving the lobby of the online shooter Splatoon weirdly vacant; it had previously been populated by virtual characters who, if you approached them, presented drawings made by players and saved to Miiverse servers.
3. Software updates
This flaw is not one I considered in my 2013 post, but I now think it may be the most significant of the three. To understand why, we first have to ask: Why bother with game consoles at all?
A console is basically a shortcut. Instead of having to build your own gaming PC or purchase a super high-end mobile device and keep updating it every few years, you can purchase a standardized piece of hardware that will be good for at least 5-7 years before a successor is released. Plus, you can rest assured that any title released for the system will work on the hardware you purchased.
Consoles were once super distinct from PCs, since they had essentially no user-facing operating system. You couldn’t dig into their data management setups, change their network connections, or do anything you take for granted on other platforms, since they didn’t have any such features.
That began to change when consoles became internet-enabled and gained media playback capabilities, with the DVD-playing PlayStation 2 and Ethernet-equipped Xbox perhaps the first real inflection points. Today’s games often require enormous patches or updates to remain playable and secure, as do the system OSes they run on.
Updates are a particular weakness for phone/tablet games. Consider the iPhone: Every single year, it receives multiple new models, with fresh software APIs, updated chips, different screen resolutions/sizes, etc. Like clockwork, the presenters at the Apple keynotes talk about how these new features will make the device “console-level.” Yet iOS and Android are still more synonymous with free-to-play gambling games, which account for an enormous share of platform revenue, than with more in-depth gameplay. Why?
I think the endless upgrade cycle is partly to blame. One iOS game developer decided to leave the App Store altogether recently, saying (emphasis mine):
“This year we spent a lot of time updating our old mobile games, to make them run properly on new OS versions, new resolutions, and whatever new things that were introduced which broke our games on iPhones and iPads around the world. We’ve put months of work into this, because, well, we care that our games live on, and we want you to be able to keep playing your games. Had we known back in 2010 that we would be updating our games seven years later, we would have shook our heads in disbelief.”
There’s simply no guarantee that a game developed for any mobile platform will run even a few years later without proactive updates to save it from obsolescence. This issue doesn’t exist as much on consoles (since they are designed to be fixed systems with long lifespans), and especially not on older consoles. I can put a cartridge in a 1998 Game Boy and, barring any electrical or technical issues, be certain it will load and play as intended. I can’t say the same about an iOS game that hasn’t been updated since 2016.
The future of gaming history
The software update issue was raised by a blogger, Lukas Mathis, in a post about the wrongness of various other tech bloggers’ predictions about Nintendo. Between approximately 2011 and 2016, it was very fashionable to proclaim that Nintendo was failing and headed the way of Sega, i.e., toward being a software developer for other people’s hardware, instead of a hardware maker in its own right (Sega exited the console business in 2001, only ten years after its sweeping success with the Sega Genesis). A few choice quotes (all emphasis mine):
John Gruber in 2013, in a post comparing Nintendo to BlackBerry: “No one is arguing that 3DS sales haven’t been OK, but they’re certainly not great…Here is what I’d like to see Nintendo do. Make two great games for iOS (iPhone-only if necessary, but universal iPhone/iPad if it works with the concept). Not ports of existing 3DS or Wii games, but two brand new games designed from the ground up with iOS’s touchscreen, accelerometer, (cameras?), and lack of D-pad/action buttons in mind. (“Mario Kart Touch” would be my suggestion; I’d buy that sight unseen.) Put the same amount of effort into these games that Nintendo does for their Wii and 3DS games. When they’re ready, promote the hell out of them. Steal Steve Jobs’s angle and position them not as in any way giving up on their own platforms but as some much-needed ice water for people in hell. Sell them for $14.99 or maybe even $19.99.”
MG Siegler that same year: “I just don’t see how Nintendo stays in the hardware business. … I just wonder how long it will take the very proud Nintendo to license out their games.”
Marco Arment, responding to Siegler: “I don’t think Nintendo has a bright future. I see them staying in the shrinking hardware business until the bitter end, and then becoming roughly like Sega today: a shell of the former company, probably acquired for relatively little by someone big, endlessly whoring out their old franchises in mostly mediocre games that will leave their old fans longing for the good old days.”
There’s endlessly more material like these pronouncements, all of it built on several (in my opinion flawed) assumptions about the future of gaming: first, that it will from now on be irreversibly dominated by buttonless pieces of glass (i.e., phone and tablet screens) and the race-to-the-bottom pricing they encourage; second, that gaming-specific hardware eventually won’t matter, since everything will be done on general-purpose computing devices; and third, that developers like Nintendo can build sustainable businesses selling high-quality games for $20 or less, despite the enormous resources required to make something as daring as Super Mario Odyssey.
If the assumptions are correct, there seems little prospect of even today’s most famous games being preserved as “art,” since they’ll have to be endlessly redeveloped and remonetized to be sustainable. But what if the assumptions aren’t correct? What if mobile cannibalizes consoles no more than PCs did in the 1990s?
The punchline to those quotes is that Nintendo ended up selling 70 million 3DSes (almost on par with the PlayStation 4 at the end of 2017) and saw the Switch post the best first-year sales of any home console in U.S. history. It accomplished all of that while keeping online functionality and software updates relatively minimal in its first-party titles and going all-in on the bizarre, distinctive hardware of the Switch.
It’s hard to describe what the Switch does if you don’t own one. It’s essentially a console that works like any other, hooked up to a TV, but that can also be picked up and taken with you without any degradation in picture or play quality. It has a touchscreen tablet that can be combined with two hardware controllers featuring numerous buttons and joysticks (they slot onto the sides of the tablet), or simply used on its own as a Hulu Plus media player.
My first encounter with the Switch had me going back to my phone and thinking of the latter, “this feels old.” Perhaps tapping on a phone screen isn’t the “end of history” of video gaming it has sometimes been presented as; maybe there’s a place for more sophisticated hardware after all. I hope so, since the production and preservation of such systems will be crucial if we are to ever have a real “art history” of video gaming.