What the $@*! am I doing with social media?

I recently took an impromptu hiatus from Twitter. My account still posted links to the posts that went up here and I periodically dropped in, looked at a few things, retweeted something I liked, and then disappeared again.

This hiatus went on for about a month and a half until I started dipping my toes back into the Twitter stream about a week ago. During that time, the only social media I checked with any regularity was Instagram.

It is hard to pinpoint a single reason why I took this hiatus. This was around the time that Elon Musk made waves by claiming that he wanted to buy Twitter, but, in retrospect, I think something like this had been coming for a while. As I wore down last semester, I found myself spending progressively more time just idly staring as the world seemed to float by on Twitter. Around the same time, the Musk news broke and there were several rounds of outrage and anger that resulted in a lot of people I follow directly yelling or indirectly sniping at each other, all of which was just too much for me to engage with. So I stopped.

Stepping away from Twitter like this was at once a relief and disorienting. For a few years now I have gotten a lot of my news from Twitter, which collates articles from far more sources than I would otherwise seek out. At its best, the site functions like an RSS feed curated and commented upon by people I know or would like to know. Not checking Twitter, therefore, felt like reducing my awareness of what is happening in the world from a torrent to a trickle.

Of course, that was also why it was a relief. For a few weeks I just let my primary attention be on whatever was going on in the world around me.

However, this hiatus also left me reflecting on how I use social media.

These sites allow people to present a curated version of themselves to the world. Some people, I find, do that very well. There are all sorts of people who use Twitter to great effect to share information and articulate points based on their particular areas of expertise–be it academia, politics, journalism, sports, or comedy. While I have certainly done this from time to time, I am generally reticent to assert my expertise in a space where I always feel that there are people who are more qualified on most of what I would want to say, so I usually don’t put myself in this lane. In an earlier phase of my Twitter evolution I used it as an aggregator for interesting articles I would read, but I gave that up both because a lot of the quick share links didn’t work well and because I felt that I wasn’t adding anything by doing this. In recent years I have also noticed that I largely stay away from commenting about things I am watching or (heaven forfend) sports because those things are not sufficiently “intellectual” and “academic.” After all, Twitter is a space that blurs the lines between the personal and the professional and I’m ostensibly on the job market. Should I not curate my persona accordingly?

This leaves me with an account where I do a lot of retweeting and a decent amount of what might be termed water-cooler talk with people in the replies, but comparatively little tweeting of my own.

This is not the case with other sites. On Instagram, the only other site that I use regularly, I post pictures of cats, baking experiments, books I’m reading, flowers, and travel (which happens much less frequently than I would like), while I use Instagram stories for memes, jokes, and ephemeral commentary about everything from how starting to run again feels like a psyop against my own body (tricking it into realizing that it can run that distance or speed) to whatever political travesty is unfolding to minor gripes and insecurities about writing. Here, the ephemerality of stories, combined with the much smaller audience (my Twitter account has maybe 6x as many followers, far fewer of whom I know in person), frees me to be more polemical and sarcastic.

Every so often I think about bringing my social media presences into more alignment, which mostly means being more random and less deliberate with what I tweet. What holds me back is the sense that I ought to be curating a persona. Tweeting about all of those other things might be more authentically me, but is it good for my brand? To which the obvious answer is that I’m a person, not a brand—and, ironically, that doing more to cultivate my persona as a baker might actually be good for me down the road.

But for all of this hand-wringing about personal brands, I don’t actually know what mine is. I hope that it includes at least ancient history, books, writing, pedagogy, and bread, but is that a coherent brand? Does it need to be? Do people follow me for a particular type of my posts?

There is a reason I don’t have aspirations to pivot my career to social media management. I even have some choice words for this idea in an upcoming review of The Immortal King Rao.

I want to continue spending less time on social media in aggregate because it is not great for my anxiety and has a way of filling time that I could spend reading, but I am also toying with ways that I might be able to be a little more present on these sites, whether by employing an app that automatically deletes my old Tweets or by managing to convince myself that it is acceptable for academics to acknowledge their “uncouth” interests without losing face. If anyone has suggestions on these issues, I’m open to ideas.

In all likelihood, I will continue to trundle along much as I have, with perhaps a quicker trigger on the mute button to preserve my state of mind. But, then again, there are so many things about the world, both good and ill, that I want to talk about that the answer might be just to do it.

Anyway, have a cat picture.

Some Thoughts on Kennedy v. Bremerton

The Supreme Court’s conservative majority is right now in the midst of flexing its power. Today’s release of the 6–3 decision in the Kennedy v. Bremerton School District case struck a nerve with me, even though it is hardly the most destructive in this sequence of rulings—Dobbs v. Jackson Women’s Health Organization, Vega v. Tekoh, and the likely outcome in West Virginia v. EPA are orders of magnitude worse.

In Kennedy v. Bremerton, a high school football coach lost his job for holding post-game prayers at the 50-yard line. What began as a quiet, private prayer gradually became something where he was joined by his players, which prompted the district to step in. Eventually, the school placed the coach on administrative leave and declined to renew his contract for the following year. The coach sued the school district, claiming that they violated his right to religious expression by punishing him for saying these prayers.

I am neither a lawyer nor an expert court watcher, but I wanted to write this post as both a teacher and a former high school athlete.

The coach is of course allowed to say a private prayer, and in this case I am willing to believe the coach that the only two students who, he says, ever expressed discomfort with the prayers were not punished for having done so.

(The number of students who were uncomfortable even voicing their concern is the larger problem, but it is hard to prove in the absence of evidence.)

And yet, the details of this case blurred the lines between the behavior of the coach as coach and his behavior as a private citizen. The defense argued, reasonably, that his conspicuous prayers at the 50-yard line, surrounded by players, took place in a space where he was regarded first and foremost as “coach.” Further, he alleges that these were voluntary prayers that he did nothing to lead, but there is pictorial evidence in which he appears to be doing more than engaging in a private prayer while most of the students were otherwise occupied (as claimed in the case).

I dislike how the coach performatively challenged the school’s instructions to refrain from these activities, but my problem with this ruling is less about specific allegations and protestations about what this coach did and did not do and more about the broad implications of the ruling.

I played baseball and basketball through high school and, at no point that I can recall, did my coaches offer a prayer. It is possible that I simply tuned some things out, but I do remember awkwardly jumping up and down and barking like a dog before home basketball games. These circles, at least at my high school, were comical imitations of macho pump-up videos organized by players rather than prayers, but I can certainly attest to the peer pressure to at least make a show of going along when activities that have nothing to do with playing the sport become compulsory parts of belonging to the team.

Most people did not grow up in small-town Vermont, as I did. When I happened to be in Texas on July 4 a few years ago, I sat through a Christian prayer that led into the fireworks display accompanied by patriotic music. (I will admit to laughing a few minutes into the songs when I heard the opening bars of “God Blessed Texas.”) And there are a lot of people who feel more pressure from the ambient Christianity around them, whether because it is more aggressively oppressive where they live, or because their non-Christian religion is a more central part of their identity, or because they are a more identifiably minoritized person.

That is, there are a lot of people with stories about how activities like an optional prayer in team or classroom settings alienate anyone who refuses to participate, and potentially single them out for proselytizing, retaliation, or harassment. Whether or not the coach directly participated in those activities, his actions created an environment that enabled them.

The majority opinion in this case, written by Neil Gorsuch, emphasizes that the school infringed upon the coach’s private religious belief in its demands, suspension, and decision not to renew his contract.

(In terms of the outcome, rather than the substance, of the decision, I am particularly struck by the last point—non-renewal might have the same effect as a firing, but the mechanics are not quite the same.)

Gorsuch wrote the opinion to be religiously neutral. (He also seems to misrepresent basic facts about the case, but I digress.) Ostensibly, a Jewish or Muslim coach would have the same freedom to offer a prayer, but the situations are not comparable. The practice in question is explicitly Christian. Even if every religion prayed in the same way—and they do not—it is hard to imagine large numbers of players joining their coach in these moments in this wildly unlikely hypothetical situation, while it is comparatively easy to imagine their parents asking that such a coach be removed.

But this is also the problem.

Basically every study shows that roughly 70% of people in the United States are some flavor of Christian, with Protestant denominations making up the overwhelming majority of those. The numbers of religiously unaffiliated are on the rise, but some number of those remain broadly Protestant, just without being affiliated with a particular church. Under these circumstances, I think it is all the more important to ensure that people in positions of authority in public institutions—whether coaches or teachers or principals—are not implicitly creating a situation where students feel pressured to either join a religious activity or be singled out by choosing not to join. To do otherwise tacitly puts the state in a position where it is endorsing the dominant religion, whether or not it deliberately chooses to do so. I fear that is the point of this ruling.

As Sonia Sotomayor points out in her dissent, such entanglements are hardly a win for religious freedom:

[This ruling] elevates one individual’s interest in personal religious exercise, in the exact time and place of that individual’s choosing, over society’s interest in protecting the separation between church and state, eroding the protections for religious liberty for all. Today’s decision is particularly misguided because it elevates the religious rights of a school official, who voluntarily accepted public employment and the limits that public employment entails, over those of his students, who are required to attend school and who this Court has long recognized are particularly vulnerable and deserving of protection. In doing so, the Court sets us further down a perilous path in forcing States to entangle themselves with religion, with all of our rights hanging in the balance. As much as the Court protests otherwise, today’s decision is no victory for religious liberty.

N.B. The discussion here is usually pretty light, but I’ve disabled comments on this post anyway because I don’t have the energy to field comments on this topic right now.

The Dinner

One of my favorite things to do when I meet people from foreign countries is to ask them what they think the best novel is from their country. This works almost as well to start a conversation as asking them about their country’s food and is an easy way for me to add interesting volumes to my reading list. A few years ago at a virtual gathering during an online conference I happened to be chatting with someone from the Netherlands who mentioned Herman Koch’s The Dinner as not necessarily the best novel, but as one that was particularly well-received.

A few centuries from now, when historians want to know what kind of crazies people were at the start of the twenty-first century, all they’ll have to do is look at the computer files of the so-called “top” restaurants.

The Dinner is a tidy novel that ostensibly takes place over the course of a single evening, the titular dinner at a fancy restaurant. Serge Lohman, the frontrunner to be the next Prime Minister, arranged this dinner so that he and his wife Babette can discuss some family business with his younger brother Paul and his wife Claire.

Paul narrates the story and is fond of recounting the truism from Anna Karenina that “Happy families are all alike; every unhappy family is unhappy in its own way.”

Koch’s achievement in The Dinner is found in interrogating the blurred line between those two categories.

Paul can barely stand his brother, whom he characterizes as a fraudulent boor. Serge, he thinks, represents much of what is wrong with society. He lacks imagination about food while also being a wine snob who puts on airs about being an everyman. Similarly, he makes a big deal about how he adopted a son from Burkina Faso, but is entirely oblivious to how his behavior oppresses the citizens of the small French town where he owns a vacation home.

Like all younger brothers, he likes to make his older brother squirm. (Not spoken as an older brother, or anything.)

When the story opens, Paul seems to have a happy family. He and his wife Claire are a loving couple—even if they like to egg on Serge from time to time—and if their son Michel is having a hard go of it lately, well, he’s a teenager. It isn’t as though he’s into drugs. Paul has some sharp, jaded observations about the restaurant and his brother, but he does not, for the most part, vocalize them. Further, he seems genuinely concerned when Babette arrives at the restaurant appearing to have been crying in the car, and he is frustrated with his brother’s superior attitude toward the restaurant staff. In short, he seems like a nice enough person.

Slowly, the reader is disabused of these initial impressions.

It turns out that this family has a nasty secret. Some months ago, video emerged of a brutal attack on a homeless person sleeping at an ATM. Two teenagers walking into the ATM first threw objects at the woman, then a can of gasoline that erupted into flame and killed her. Nobody was apprehended for the crime, but Paul recognized the two boys: his son Michel and his nephew Rick.

As it happens, this is the family business that Serge wants to discuss—after all, he has a political career to consider. Paul’s instinct is to protect his son, and the only question left is how far he will have to go.

(There is more to the plot, but I’m ending the synopsis here so as to not give away some of the twists in this nasty family drama.)

The strength of the novel is found in the gradual reveal of Paul’s personality and how that shapes the reader’s understanding of the Lohman family. Koch introduces Paul as the mild brother of a politician of some renown and slowly peels back that exterior to reveal a monster with vicious ideas and a history of assault. His actions speak for themselves, even as he maintains his own moral superiority.

When faced with lower intelligences, the most effective strategy in my opinion is to tell a barefaced lie: with a lie, you give the pinheads a chance to retreat without losing face.

The Dinner can be read in some ways as a metaphor about getting to know someone. Everyone is the protagonist of their own story and many are convinced of their own rectitude. When we meet new people, we only know the face they present to the world and only later learn what type of person we are interacting with. Most of us don’t have nearly such odious skeletons in our closet, but neither are we literary creations.

I ultimately found The Dinner a little bit on the nose in how it revels in this family drama, but it is a tightly-crafted and compelling story that reads very quickly—even if I emerged from it wanting to wash my hands of the entire Lohman clan.

ΔΔΔ

I recently finished Christine Smallwood’s The Life of the Mind, which seemed to draw parallels between a miscarriage and being an adjunct professor. While the novel had some uncomfortable observations about being an adjunct, I found the story weighted more toward the miscarriage side. Still, the implications of the comparison are uncomfortable. I also finished Tom Standage’s A History of the World in Six Glasses, which I ultimately found disappointing. It was cute and had some nice anecdotes, but I kept hoping for a stronger argument and kept bumping against implications about, for instance, Western Civilization. By contrast, the first volume of the Saga graphic novel was truly great.

May Reading List and an update on my 2022 reading goal

Back in January I set out a goal to read one article every working day that was not explicitly linked to my research. The idea was that my academic reading had become too narrowly focused on books and thus that I was missing out on some of the richness of the field.

One article shouldn’t be too onerous, I thought. And yet, I found even one article increasingly unmanageable as the semester wore on, particularly when many of the articles that looked interesting (how I tended to choose what to read) were forty or more pages long—or, in some cases, required ILL requests to access them.

I had hoped that my energy for this project would return with the end of the semester, but the reality is that the start of my summer has been characterized by an all-consuming combination of busyness and torpor brought on by the exhaustion of the semester. The five articles I read in May (listed below) turned out to be the last gasps of my semester routine. While I have made a good start on other reading goals, I have yet to read a single article in June.

In the spirit of doing less, along with a number of more pressing tasks on my to-do list, I am putting this project on hold for the remainder of this summer and will revisit it in the new semester. In the meantime, I’ll keep tracking what I read and consider anything from this summer a bonus.

The May List

  • Scott Lawin Arcenas. “The Silence of Thucydides.” TAPA 150 (2020): 299–332.
  • Mira Green. “Butcher Blocks, Vegetable Stands, and Home-Cooked Food: Resisting Gender and Class Constructions in the Roman World.” Arethusa 52, no. 2 (2020): 115–32.
  • Alexandra Bartzoka. “The Vocabulary and Moments of Change: Thucydides and Isocrates on the Rise and Fall of Athens and Sparta.” Pnyx 1, no. 1 (2022): 1–26.
  • David Morassi. “War Mandates in the Peloponnesian War: The Agency of Athenian Strategoi.” GRBS 62, no. 1 (2022): 1–17.
  • Morgan E. Palmer. “Time and Eternity: The Vestal Virgins and the Crisis of the Third Century.” TAPA 150 (2020): 473–97.

Learning to Run Again

This morning I woke up before my alarm. I grabbed my phone to turn that alarm off and checked a few things before getting out of bed. Then I puttered around the house, reading a novel and stretching by turns for a little more than an hour, just long enough to steep and drink a big mug of tea.

Then I laced up my running shoes and set out.

My current bout of running came on about a month and a half ago. I have never been as serious or successful a runner as my father and brothers who for a number of years now have run marathons together, but this is not my first time running. In high school, I would go for runs with my father and ran a few local 5k races. Early in graduate school I tried running again. It was during this period that I reached my longest distances, running about five miles at least once a week and topping out at about eight miles before running into a leg injury. I tried a “run the year” challenge a few years ago and contributed 173 miles to my team’s total, including a few miles when I couldn’t sleep early in the morning while on a job interview. Then injuries. I tried again after the pandemic closed the gym where I exercised. My last attempt, shortly after moving last summer (and, in retrospect, after holding my foot on the accelerator of a moving truck for many hours), ended abruptly with sharp pain in my lower calf less than a quarter mile into a run.

I am a slow runner, particularly these days. I am also not running very far—just a little under two miles today. But this is okay. My focus right now is on form: on my gait, and on trying to keep it in line with how I imagine I run barefoot, since I have suffered far more injuries while running in shoes than I ever did playing ultimate barefoot, which I did into my 30s. Correlation need not be causation, but so far, so good. I am running slow and careful, and celebrating each run for ending uninjured rather than for reaching a particular distance or speed. Those will come, but only if I can stay healthy.

I like the idea of running more than I actually like running. Rather, I would like to be someone who likes running, who achieves that runner’s high, who runs an annual marathon. But I spend my runs thinking about how everything hurts and, recently, fretting about whether this footfall will be the one when something gives out and I have to start over. I can also only compete against myself while running, and pushing myself this way is exactly what I’m trying not to do.

By contrast, I used to play basketball for hours every week. My slowness didn’t matter as much on a confined playing surface where I could change speeds and understand the space. And since I didn’t like to lose, even in a silly pick-up game, I could just lose myself in the game and not think about what hurt.

And yet, running is what I have right now, so running is what I’m doing alongside a daily yoga routine.

My return to running also prompted me to finally pull Christopher McDougall’s Born to Run off my to-read shelf. McDougall describes himself as a frequently-injured runner, so I thought it might unlock the secret to running pain-free. In a way, it might have.

The centerpiece of Born to Run is a 2006 race in Copper Canyon in the Sierra Madre Mountains between a motley crew of American ultramarathon runners, including Scott Jurek, one of the best in the world at the time, and some of the best Rarámuri (Tarahumara) arranged by a mysterious figure called Caballo Blanco (Micah True).

(The race went on to become an annual event, though its founder died in 2012.)

It is an incredible story. Rarámuri runners had made their appearance in ultramarathon circles at the Leadville 100, a high-altitude ultramarathon in Colorado, in 1993 and 1994. A guide and race director named Rick Fisher rolled up to the race with a team of Rarámuri for whom he was the self-appointed team manager. The Rarámuri runners won both years, setting a new course record in the second race, before deciding that putting up with Fisher’s actions wasn’t worth their participation.

(An article from 1996 in Ultrarunning about a race in Copper Canyon in which True also participated acknowledges Fisher’s “antics,” but suggests that they didn’t end his relationship with the tribe.)

However, this story is the hook. Born to Run is an extended argument for a minimalist running style that exploded in popularity following its publication. McDougall’s thesis is that modern running shoes, and the industry predicated on selling those shoes, lead us to run in ways that cause injuries. The argument is somewhat anecdotal, relying on personal experience and stories of incredible endurance from athletes before the advent of running shoes.

The Rarámuri, whose name means “The Running People,” are exhibit A. The Rarámuri are a tribe that lives in isolated villages deep in the Sierra Madre Occidental, in the Mexican state of Chihuahua. The terrain makes long-distance travel a challenge, so the Rarámuri run. But they also run for sport in a ceremonial ball game called rarajipara, in which teams work to kick a ball an agreed-upon distance, chasing it down after each kick. All the while, runners wear only traditional sandals called huaraches.

My own experience with running makes me sympathetic to McDougall’s argument, and I am seriously considering getting a pair of zero-drop shoes and transitioning in this direction for my footwear. However, the more I read about running injuries, the more it seems that the answers might be more idiosyncratic. That is, there is a lot of conflicting evidence. While some studies suggest physiological advantages to barefoot running, others point out that not all barefoot runners run with the same gait. A number of studies suggest that barefoot running has shifted the types of injuries (aided perhaps by people transitioning too quickly) rather than reducing them. I think that barefoot running could be good for me, but all of this makes me think that I shouldn’t ditch the running shoes for every run just yet.

While I was reading Born to Run, a friend suggested that I read Haruki Murakami’s What I Talk About When I Talk About Running, which connects my current focus on running with my ongoing obsession with writing.

In addition to being a novelist, Murakami is a marathoner and triathlete who describes how his goal is to run one marathon a year. This memoir is a collection of essays on the theme of running and training, and, unlike Born to Run, is not meant to be an argument for a particular type of training.

I think that one more condition for being a gentleman would be keeping quiet about what you do to stay healthy.

Nevertheless, I found What I Talk About When I Talk About Running to be particularly inspiring. Murakami is a more successful runner than I ever expect to be, even though I’m only three years older now than he was when he started running. And yet, I found something admirable about his approach. Running, like writing, is just something Murakami does, and he doesn’t think about a whole lot when he is on the road. His goal in running is to run to the end of the course. That’s it. He gets frustrated when he can’t run as fast as he used to, but he is not running to beat the other people, and uses the experience to turn inward.

And you start to recognize (or be resigned to the fact) that since your faults and deficiencies are well nigh infinite, you’d best figure out your good points and learn to get by with what you have.

But it should perhaps not come as a surprise that I highlighted more passages about writing than I did about running, though Murakami makes a case that there is broad overlap between a running temperament and a writing one. Both activities require long periods of isolation, and in both, success is not synonymous with “winning.” Doing them is more important than being the best at them.

I don’t think we should judge the value of our lives by how efficient they are.

A useful reminder.

ΔΔΔ

I have had a hard time writing about books recently. Before these two books, I got bogged down in Olga Tokarczuk’s The Books of Jacob, which I am still trying to process, and then read Ondjaki’s The Transparent City, which is a very sad story about an impoverished community in Luanda, Angola. I would like to write about these, but I’m not sure that I have anything coherent to say and June has turned much busier than I had hoped—last week I was at AP Rating in Kansas City, then I wrote a conference paper that I delivered yesterday, and now I’m staring down a book deadline and other writing obligations. By the time I have time, I might be too far removed to come back to those books. I am now reading Christine Smallwood’s The Life of the Mind, which is a novel about adjunct labor and miscarriage in a way that highlights the lack of control in both situations.

Some thoughts on small-screen Star Wars

Star Wars is a story that I simply cannot quit, my thoughts on The Rise of Skywalker notwithstanding.

Perhaps this should be expected. I might have seen the original trilogy once in the past decade and a half, but I watched Return of the Jedi so frequently as a teenager that I can recount verbatim entire scenes from the movie. I had more issues with the prequel trilogy, but that didn’t get in the way of hours of late-night debate about the films when I was in college and I devoured dozens of the now-heretical novelizations.

I was cautiously excited to see the return of Star Wars to the big screen, but, although I acknowledge the myriad ways in which they are superior movies to the original trilogy, the new films ultimately didn’t land for me. The newest trilogy amounted to a super-cut of the original: an inescapable loop of scenes and beats, just with a superficially new set of locations and a somewhat more garbled narrative. This loop kept the story from being pushed in new and interesting directions. I accepted this as a feature of The Force Awakens, but then it happened again in The Last Jedi, and I simply skipped The Rise of Skywalker.

And yet, I have found myself pulled back into the latest batch of small-screen Star Wars stories. At the time of writing this, I have seen both seasons of The Mandalorian, The Book of Boba Fett, and the first four episodes of Obi-Wan Kenobi.

These shows seem more designed for viewers like me, at least on the surface. These are smaller stories by design. I really enjoyed the Space-Western aesthetic of Mandalorian, and the “lone wolf and cub” story arc of season one was appealing even before that cub turned out to be the adorable Grogu. I’d give the season a B/B+. The second season and Boba Fett both had their moments, but I found the stories muddled and uneven.

Which brings me to Obi-Wan. Like these other projects, there are things I like about the series. As much as I was drawn to the Space Western parts of Star Wars, I will admit a little thrill at getting to see the Space Samurai in action again. I also think that the arc that holds the most promise is the internal one of Ben Kenobi himself. We have only ever seen him competent—first as a hotshot padawan, then as a capable general, and finally as a wizened old sage who masterfully uses the Force and still goes toe-to-toe with Vader. In this series, Ewan McGregor is playing a man lost. He is a hermit not unlike the one we meet in the original movie, but without any of his surety. He has buried the lightsabers and, seemingly, renounced using the Force, such that, four episodes into a six-episode arc, he is still barely willing to use the simplest little tricks that he used when we first met him. Both the narrative internal to the series and the larger character arc demand that he recover his mojo before the end of the series, but I quite like the way that the show juxtaposes an isolated and emotionally fragile Jedi with the inchoate but growing resistance to the empire.

But while there are individual aspects of Obi-Wan that I like, I am finding myself questioning what purpose it serves other than as fodder for an insatiable content machine.

In a recent article in WIRED, Graeme McMillan asserted that the fundamental problem with these shows is that they are burdened by the weight of the Star Wars backstory. That is, each story is seemingly approved based on how well it ties back to the Ur-text, which, in turn, prevents it from flourishing on its own. We know that Han Solo saved Chewbacca’s life, won the Millennium Falcon from Lando Calrissian, and did the Kessel Run, so we get Solo. We know the rebels stole the Death Star plans, so Rogue One. What happened to Boba Fett after the Sarlacc? There’s a show for that. Ever wonder what Ben was up to while hanging out near Luke on Tatooine? Get ready for Obi-Wan Kenobi.

As McMillan puts it:

By this point, what truly worked about the original Star Wars movies—the awe of invention and discovery, and the momentum of the propulsive storytelling that left details and common sense behind in the rush to get to the next emotional beat—has been lost almost entirely, replaced by a compulsive need to fulfill nostalgia and comfortably mine existing intellectual property. Whereas those first three movies were the Big Bang that started everything and built a galaxy far, far away, what we’re witnessing now is an implosion of fractal storytelling, with each spin-off focusing on a smaller part of the story leading to a new spin-off focusing on an ever smaller part of that smaller part.

I broadly agree with McMillan’s argument, but I also think that the root problem is more than just the unwillingness of adults to suspend disbelief, though that might have influenced the short-lived midi-chlorian fiasco of the prequel trilogy.

What McMillan attributes to “the awe of invention and discovery” and “propulsive storytelling that left details and common sense behind,” I would describe as the legendary nature of the story. Lucas took deep inspiration for the original trilogy from the archetypes found in Joseph Campbell’s The Hero With a Thousand Faces, and the trappings of myth and legend go beyond Luke’s heroic journey. I particularly see this in how the original trilogy situates itself within a larger universe with nods and hand waves. We don’t need to see those people and events to know that they exist. They just are. What does it mean that:

General Kenobi. Years ago you served my father in the Clone Wars. Now he begs you to help him in his struggle against the Empire. I regret that I am unable to present my father’s request to you in person, but my ship has fallen under attack, and I’m afraid my mission to bring you to Alderaan has failed.

Doesn’t matter. Waves hand. Move along.

Here’s the problem: legends aren’t well-served by filling in the cracks.

It is one thing to approach a legend from a fresh perspective: the Arthur story from the perspective of Merlin or Morgan, or the Theseus story from the perspective of Asterion (the Minotaur). This has been the stock-in-trade of mythology since antiquity. Legends are fundamentally iterative. But approaching legends this way respects the stories as legends. It doesn’t matter whether the character is familiar when each new story contributes to a polyphonic chorus that defies the logic necessary for a “canonical” story.

By contrast, the current wave of Star Wars projects (and even the prequel trilogy, to an extent) strikes me as fundamentally expository. These can be brilliantly shot and well-acted pieces (and they often are!), but they are filling in the cracks of the legend and creating new discontinuities in the process. When Vader and Kenobi square off on the Death Star, Vader says, “When I left you, I was but the learner; now I am the master.” At the time, and through the prequels, this seemed to indicate that they had not met since the events of Revenge of the Sith, but now we learn that they fought at least once in the intervening years. This series can only turn out one way if that line is still going to work, and it spawns a series of follow-up questions that strain the suspension of disbelief in the original. Similarly, one might ask whether someone is going to completely wipe the memory of young Leia for her to appeal to Kenobi on the basis of her father rather than, you know, reminding him that he saved her life once and now she needs his help again.

I am skeptical that either big- or small-screen Star Wars will be able to escape this problem. Few of the new characters have been particularly memorable, and most of those that were owe their origins to material outside these projects. As McMillan notes, the result has been an increasing insularity within the narrative world of Star Wars, which relies on familiar names to draw viewers and generally fails to create new characters who can expand and complicate the universe.

All of this stands in contrast to the approach taken in the books set in the untamed wilds of the period after the original trilogy when there was no plan for movies to carry the canonical stories forward. Some of these books are pretty good, some are quite bad, but they collectively built out a rich universe that carried forward the stories of characters from the movies (e.g. Wedge Antilles) while inventing new favorites among both the protagonists (e.g. Corran Horn and the Skywalker children) and the antagonists (e.g. Admirals Thrawn and Daala).

They didn’t worry about filling in the cracks of the legends, but accepted the films as gospel while looking forward to what came next. The result is a series of more compelling questions: How does the Rebel Alliance capture Coruscant (the capital) when the emperor is dead but his military apparatus is still in place? What would it be like for an alien or a woman to rise to the rank of admiral in the notoriously patriarchal and xenophobic imperial navy? What happens when you introduce good guys who, for one reason or another, dislike Luke Skywalker and Han Solo?

I can understand why a studio might reject this approach out of hand, of course. For instance, the novels remain deeply reliant on the original characters, and there are only so many times that an actor can play the same role. James Bond and comic-book characters like Batman, Superman, and Spider-Man have survived reboots with different actors, but that has also led to some fatigue with the proliferation of dead parents in an alleyway behind the theater. A closer analogue to Star Wars is its corporate sibling, the Marvel Cinematic Universe, which has made no attempt to recast Robert Downey Jr.’s Tony Stark and is thus itself at a crossroads. Star Wars can hardly replace the much-missed Carrie Fisher, leaving the studio to rely on de-aging Mark Hamill and producing CGI renderings of Peter Cushing and Fisher herself. But this also leaves Star Wars a fragile shell perpetually at risk of collapsing in on itself. To echo Princess Leia in the film that started it all: sometimes, the more you tighten your grip, the more your objective slips through your fingers.