Blog

Production and Consumption

I have a friend from graduate school who lived in terror of one of our professors. I’m only exaggerating a little bit for effect. This professor had a reputation for being particular about grammar and style, and he regularly made graduate students go through each other’s reviews with, as he might say, a fine-tooth red pen. When you didn’t catch enough mistakes in each other’s work, it was an indication that you weren’t reading carefully enough. Sitting through these exercises could be deeply uncomfortable, but the pressure also forced you to become a better writer.

My friend dreaded these sessions, so you can imagine his terror when it came time to submit his thesis. He spent hour after hour combing through his work to root out every grammatical and stylistic misstep he could think of, fretting about what this professor might say. After my friend passed his oral defense, that professor came up to him to point out an error on the cover page.

He had misspelled his name.

Not to minimize the stress my friend felt leading up to that moment, but typos like these are functionally inconsequential. Even in published work, typographical errors say more about the process of production than they do about the author, and I am generally loath to bring them up in book reviews unless they appear in egregious numbers or substantially affect the experience of reading the piece. Obviously, the goal is to have an error-free manuscript, but to typo is to be human.

I have also been thinking about these anxieties again with respect to a writing funk I have been in these past few weeks.

What happened, basically, is that as soon as I returned my copy-edited book manuscript I started to stumble across references to recent scholarship that I ought to have included. These are obviously more serious concerns than typos, but none of these pieces would fundamentally change the argument I make in the book so much as they would add a bit more nuance to roughly five paragraphs and/or footnotes in a manuscript that eclipsed 100,000 words. And yet, coming across these citations triggered all of my anxieties about where I received my degree and about working as an extremely contingent scholar for the last few years. As much as I stand by my work, I have recently been more concerned about how it’ll be received than excited that my first book has a preliminary release date.

(My partner has informed me that I’m not allowed to fret about how the book will be received until after it is released, at which time if the anxiety returns she will direct me to sleep on the porch.)

What I am wrestling with is the difference between consuming things and producing things. Consuming even the densest scholarship is relatively easy, given adequate time and determination. By contrast, producing things is hard. A short article could have taken the author months of reading or excavation, weeks of writing and rewriting, and several rounds of feedback from fellow scholars, early readers, and referees. In other words, something that took half an hour to read very likely took weeks, and could literally have taken years, to produce. Writing a book, I have found, only magnifies the asymmetry between these two processes.

This is neither a novel observation nor even the first time I have reflected on it. However, the stakes feel higher this time, both because carrying an extended argument across a book-length project requires wrangling many more threads than does making an argument in an article and simply because this is my first book project.

My book will not be perfect. Then again, neither are any of the books I have reviewed, and I have never reviewed a book I truly disliked—while some other books that I think are awful have received broadly positive reviews. All of this is to say that fixating on those handful of pages where I might have done a little more is distracting me from recalling the things that I think I did very well and the places where I think I am making important contributions.

But this anxiety has also had the insidious effect of pulling me away from doing other writing, even in this space. This is a problem because I have a variety of projects I need to finish, but, really, I’d just like to be able to focus on the process again. Perhaps reminding myself of the difference between producing and consuming will do the trick.

What is Making Me Happy: Marcus, from The Bear

Following the model of NPR’s Pop Culture Happy Hour and, to a lesser extent, the Make Me Smart daily podcast, I want to remind myself that there are things that bring me joy. These posts are meant to be quick hits that identify and/or recommend things—usually artistic or cultural, sometimes culinary—that are making me happy in a given week. I am making this quick format an intermittent feature.

This week: Marcus, from The Bear.

Marcus: My first job was McDonald’s. You don’t get to be creative, you just work with robots and everything is automatic and fast and easy. I won’t make a mistake again.
Carmy: Yeah, you will. But not ’cause you’re you, but ’cause shit happens.

The Bear 1.5, “Sheridan”

Watching The Bear causes me quite a lot of stress. The show stars Jeremy Allen White as Carmen “Carmy” Berzatto, a rising star of the culinary world who recently returned to Chicago to take over The Beef, an Italian beef sandwich shop, after its owner, his brother, committed suicide. Carmy is working overtime just to keep the place afloat while trying to elevate the cuisine, navigating resistance to change among the existing staff (especially Richie, played by Ebon Moss-Bachrach), tempering the ambition of his new sous chef Sydney (Ayo Edebiri), and, of course, dealing with the loss of his brother.

Something is always going wrong in the restaurant, whether interpersonal tensions at the worst possible moment, technical failures, or a failed health inspection. All of this crests in the seventh episode, “Review,” where for twenty excruciating minutes you are taken into the absolute chaos of the restaurant. My stress watching it is a testament to the show’s attention to detail, which brought on flashbacks to my experience managing a restaurant, which I did for a year after college. There are parts of the work that I enjoyed—I really like routines, for instance—but it can be absolute chaos.

The Bear packs an enormous amount into its eight episodes, most of which are less than half an hour long. There is no wasted space. Every moment seems to serve both as a character beat and as either a callback to an earlier scene or a setup for something that will happen in a later episode, while also packing in a surprising amount of comedy (a particular shout-out to Edwin Lee Gibson as Ebraheim).

This economy also allows at least five different characters to carry out their own little arcs. Carmy’s attempt to unlearn the toxic lessons drilled into him by abusive chefs and embrace his family trauma is obvious, as are Richie’s gradual setting aside of his bluster to acknowledge the depression of divorce and the loss of his best friend, and Sydney’s obvious skill and ambition, which push her to repeatedly overreach. But the writers also gave complete arcs to more peripheral characters like Tina (Liza Colón-Zayas), who comes to appreciate what Carmy is trying to do and realize that Sydney is not out to get her.

Of course, my favorite of these stories belongs to Marcus, played by Lionel Boyce.

When we meet Marcus, he is responsible for making the rolls for The Beef, and is the first of the existing staff to take to Carmy’s vision for the restaurant. With a little bit of inspiration from Carmy’s cooking materials and some encouragement, Marcus teaches himself to bake cakes that they add to the menu. Then he wants to make doughnuts. Things go wrong, at times because that is the nature of the show (and life), but he just keeps going.

What I love about this arc is the reverence it receives from both Lionel Boyce and the show’s creators. Marcus is given an infectious enthusiasm for baking, almost to the point of obsession. At one point he asks for sous vide bags for a fermentation experiment even though he has no idea what he is doing. At the same time, while The Beef as a whole is absolute chaos, the shots of Marcus baking are done in almost absolute silence, leaving him on an island of calm as he goes through the steps and making it that much more jarring when that calm is disrupted.

This entire season of The Bear is great and the show lands whether or not it continues for a second season, but Lionel Boyce’s performance as Marcus is particularly making me happy this week.

The Immortal King Rao

Social media is a topic ripe for storytelling, and anyone who has spent more than a few minutes on a site like Twitter can understand why these stories often contain at least an undercurrent of dystopia even when that is not precisely the genre the author is working in. I have generally enjoyed the novels I have read in this space, including The Start-up Wife and Fake Accounts, both of which came out last year, but I found neither one of them as strong as The Immortal King Rao, perhaps because Vauhini Vara steers harder into our impending globally-warmed, algorithmic dystopia.

The three timelines in The Immortal King Rao are each narrated by Athena Rao, King’s daughter who has an illicit piece of technology he developed that allows her access to his memories. King Rao died three days ago. Athena is being interrogated.

The first story is that of King Rao as a child in Kothapalli, India, which we are told is the Telugu equivalent of “Newtown.” In 1951, King was born into a Dalit family that became marginally prosperous when they acquired The Grove from a Brahmin family no longer interested in living in this small town. This opportunity allowed Rao’s industrious grandfather to acquire land on which the extended family could operate a coconut growing and processing operation.

This is not to say that the world of The Grove is good. King is the child of a sexual assault, with his mother, Radha, marrying into the Rao family after his father, Pedda, assaulted her, and he is functionally raised by his aunt, Sita, who marries his father after the death of his mother. Likewise, the extended family is frequently dysfunctional, filled with bullies, gamblers, and layabouts, and the choices of the younger generations nearly drive the family to ruin. But it is also here that Rao first develops his understanding of social networks and interpersonal responsibility.

The second story chronologically follows Rao from his arrival in the United States (in Seattle) as an impoverished graduate student through the rise of Coconut, the company he starts with Margaret, the white daughter of his supervisor. Of the three, this is the arc I found least satisfying, in large part because many of its beats simply fictionalize the growth of major tech companies like Apple and fold the rise of multiple companies into this one. This is arguably a necessary feature of a story that links this Dalit family in India to the dystopian future—after all, the best dystopias are built on the bones of reality—and Vara uses this story to explore the relationship between King and Margaret, but I also found it distinctly limiting, to say nothing of a little bit hand-wavy in getting to a point where the entire world is beholden to the single tech giant Coconut and its “impartial” algorithm.

The third story is that of Athena herself, King’s progeny and greatest experiment. After his fall from grace at Coconut in c. 2040, and after the death of Margaret (by then his ex-wife), King deactivated his Social, sailed to Blake Island, and set up a little isolated homestead. It was here that Athena, King and Margaret’s daughter by a surrogate mother living on nearby Bainbridge Island, grew up among the orchards of tropical fruits that King imported.

It is in this storyline that Vara imagines a dystopian vision of the future.

And then Hothouse Earth arrived. The wildfires that began in spring and lasted all summer; the droughts that were such old news that they no longer showed up in headlines; each new pandemic beginning just after the previous one was under control.

King’s grand triumph was the creation of a unitary world government enabled by the global reliance on Coconut technology. King creates a new Constitution that is, functionally, techno-socialism: all citizens become Shareholders who collectively own all corporations, and all major decisions—from criminal justice to the global curriculum—are determined by the Master Algorithm. Instead of money, individual worth is measured by a person’s “Social Capital,” as determined by Algo based on one’s intelligence, beauty, and productive value. In short, everyone is an influencer, and since a portion of their Social Capital is “extracted” monthly in lieu of taxation, there is an incentive to continue to engage with the platform.

Of course, algorithms are only as good as their inputs.

The truth was that a person’s Social Capital depended almost entirely on the privilege they were born with, not any effort of their own.

The prior richness of the rich and the poorness of the poor had been grandfathered into the Shareholding system.

Algo didn’t eliminate the existing ills of society; it merely put them behind a veneer of impartiality. If you disagree with this system, your only choice is to opt out by becoming an “Ex” on one of a few designated “Blanklands” that are off the Social. There you can scratch out a living through farming and illicit trade in drugs, sex, and surrogate pregnancy, and the Shareholders don’t have to deal with your opposition to progress or listen to your doom-filled prognostications about the future.

If the rise of Coconut was the weakest part of The Immortal King Rao for me, the strongest was the moment when teenaged Athena decides to abandon her father in favor of life among the Exes, scratching out a living on Bainbridge Island. In addition to introducing a new cast of characters like Elemen, one of the original Exes, this section reveals to Athena the nature of the world that her father created. This world might have been designed to bring people closer together in an efficient manner, but it ended up breeding disillusion and complacency while the world burned. Opting out might have been the right move, but it condemned the rest of society in the process.

There were parts of The Immortal King Rao that required suspension of disbelief, and much of both the optimism about technology and its consequences felt distinctly American, even though a third of the novel is set in India. And yet, I find that the most chilling dystopias are the ones that cut closest to the truth. The idea of a technocratic single world state might be implausible, but a world ostensibly guided by “impartial” algorithms that aren’t impartial, where every job is rebranded with corporate babble (“history teachers” are now “Progress Leaders”), where everyone’s worth is measured in social media clout, and where the next great advance merely replicates the existing social order very much is not.

ΔΔΔ

I recently finished Yōko Ogawa’s The Memory Police and am now reading James Baldwin’s Go Tell It on the Mountain.

What the $@*! am I doing with social media?

I recently took an impromptu hiatus from Twitter. My account still posted links to the posts that went up here and I periodically dropped in, looked at a few things, retweeted something I liked, and then disappeared again.

This hiatus went on for about a month and a half until I started dipping my toes back into the Twitter stream about a week ago. During that time, the only social media I checked with any regularity was Instagram.

It is hard to pinpoint a single reason why I took this hiatus. This was around the time that Elon Musk made waves by claiming that he wanted to buy Twitter, but, in retrospect, I think something like this had been coming for a while. As I wore down last semester, I found myself spending progressively more time just idly staring as the world seemed to float by on Twitter. Around the same time, the Musk news broke and there were several rounds of outrage and anger that resulted in a lot of people I follow directly yelling or indirectly sniping at each other, all of which was just too much for me to engage with. So I stopped.

Stepping away from Twitter like this was both a relief and disorienting. For a few years now I have gotten a lot of my news from Twitter, which collates articles from far more sources than I would otherwise seek out. At its best, the site functions like an RSS feed curated and commented upon by people I know or would like to know. Not checking Twitter, therefore, felt like reducing my awareness of what is happening in the world from a torrent to a trickle.

Of course, that was also why it was a relief. For a few weeks I just let my primary attention be on whatever was going on in the world around me.

However, this hiatus also left me reflecting on how I use social media.

These sites allow people to present a curated version of themselves to the world. Some people, I find, do that very well. There are all sorts of people who use Twitter to great effect to share information and articulate points based on their particular areas of expertise—be it academia, politics, journalism, sports, or comedy. While I have certainly done this from time to time, I am generally reluctant to assert my expertise in a space where I always feel that there are people more qualified to speak on most of what I would want to say, so I usually don’t put myself in this lane. In an earlier phase of my Twitter evolution I used it as an aggregator for interesting articles I would read, but I gave that up both because a lot of the quick-share links didn’t work well and because I felt that I wasn’t adding anything by doing this. In recent years I have also noticed that I largely stay away from commenting on things I am watching or (heaven forfend) sports because those things are not sufficiently “intellectual” and “academic.” After all, Twitter is a space that blurs the lines between the personal and the professional, and I’m ostensibly on the job market. Should I not curate my persona accordingly?

This leaves me with an account where I do a lot of retweeting, a decent amount of what might be termed water-cooler talk with people in the replies, but comparatively little tweeting of my own.

This is not the case with other sites. On Instagram, the only other site I use regularly, I post pictures of cats, baking experiments, books I’m reading, flowers, and travel (which happens much less frequently than I would like), while I use Instagram stories for memes, jokes, and ephemeral commentary on everything from how starting to run again feels like a psyop against my own body (tricking it into realizing that it can run that distance or speed) to whatever political travesty is currently unfolding to minor gripes and insecurities about writing. Here, I find the ephemerality of stories, combined with the much smaller audience (my Twitter account has maybe 6x the followers, many fewer of whom I know in person), liberating enough to be more polemical and sarcastic.

Every so often I think about bringing my social media presences into more alignment, which mostly means being more random and less deliberate with what I tweet. What holds me back is the sense that I ought to be curating a persona. Tweeting about all of those other things might be more authentically me, but is it good for my brand? To which the obvious answer is that I’m a person, not a brand—and, ironically, that doing more to cultivate my persona as a baker might actually be good for me down the road.

But for all of this hand-wringing about personal brands, I don’t actually know what mine is. I hope that it includes at least ancient history, books, writing, pedagogy, and bread, but is that a coherent brand? Does it need to be? Do people follow me for a particular type of my posts?

There is a reason I don’t have aspirations to pivot my career to social media management. I even have some choice words for this idea in an upcoming review of The Immortal King Rao.

I want to continue spending less time on social media in aggregate because it is not great for my anxiety and has a way of filling time that I could spend reading, but I am also toying with ways that I might be able to be a little more present on these sites, whether by employing an app that automatically deletes my old Tweets or by managing to convince myself that it is acceptable for academics to acknowledge their “uncouth” interests without losing face. If anyone has suggestions on these issues, I’m open to ideas.

In all likelihood, I will continue to trundle along much as I have, with perhaps a quicker trigger on the mute button to preserve my state of mind. But, then again, there are so many things about the world, both good and ill, that I want to talk about that the answer might be just to do it.

Anyway, have a cat picture.

Some Thoughts on Kennedy v. Bremerton

The conservative majority on the Supreme Court of the United States is in the midst of flexing its power. Today’s release of the 6–3 decision in Kennedy v. Bremerton School District struck a nerve with me, even though it is hardly the most destructive in this sequence of rulings—Dobbs v. Jackson Women’s Health Organization, Vega v. Tekoh, and the likely outcome in West Virginia v. EPA are orders of magnitude worse.

In Kennedy v. Bremerton, a high school football coach lost his job for holding post-game prayers at the 50-yard line. What began as a quiet, private prayer gradually grew into a gathering joined by his players, which prompted the district to step in. Eventually, the school placed the coach on administrative leave and declined to renew his contract for the following year. The coach sued the school district, claiming that it violated his right to religious expression by punishing him for saying these prayers.

I am neither a lawyer nor an expert court watcher, but I wanted to write this post as both a teacher and a former high school athlete.

The coach is of course allowed to say a private prayer, and in this case I am willing to believe the coach that the only two students who, he says, ever expressed discomfort with the prayers were not punished for having done so.

(The number of students who were uncomfortable even voicing their concerns is the larger problem, but it is hard to prove in the absence of evidence.)

And yet, the details of this case blurred the lines between the coach’s behavior as a coach and his behavior as a private citizen. The school district argued, reasonably, that his conspicuous prayers on the 50-yard line of the field, surrounded by players, took place in a space where he was regarded first and foremost as “coach.” For his part, the coach alleged that these were voluntary prayers that he did nothing to lead, but there is photographic evidence in which he appears to be doing more than engaging in a private prayer while most of the students were otherwise occupied (as claimed in the case).

I dislike how the coach performatively challenged the school’s instructions to refrain from these activities, but my problem with this ruling is less about specific allegations and protestations about what this coach did and did not do and more about the broad implications of the ruling.

I played baseball and basketball through high school and at no point that I can recall did my coaches offer a prayer. It is possible that I simply tuned some things out, but I do remember awkwardly jumping up and down and barking like a dog before home basketball games. These circles, at least at my high school, were comical imitations of macho pump-up videos organized by players rather than prayers, but I can certainly attest to the peer pressure to at least make a show of going along when activities that have nothing to do with playing the sport become compulsory parts of being on the team.

Most people did not grow up in small-town Vermont the way I did—a few years ago I happened to be in Texas on July 4 and sat through a Christian prayer that led into the fireworks display accompanied by patriotic music; I will admit to laughing a few minutes into the songs when I heard the opening bars of “God Blessed Texas”—and there are a lot of people who feel more pressure from the ambient Christianity around them, whether because it is more aggressively oppressive where they live, or because their non-Christian religion is a more central part of their identity, or because they are a more identifiably minoritized person.

That is, there are a lot of people with stories about how activities like an optional prayer in team or classroom settings alienate anyone who refuses to participate, potentially singling them out for proselytizing, retaliation, or harassment. Whether or not the coach directly participated in those activities, his actions created an environment that enabled them.

The majority opinion in this case, written by Neil Gorsuch, emphasizes that the school infringed upon the coach’s private religious belief in its demands, suspension, and decision not to renew his contract.

(In terms of the outcome, rather than the substance, of the decision, I am particularly struck by the last point—non-renewal might have the same effect as a firing, but the mechanics are not quite the same.)

Gorsuch wrote the opinion to be religiously neutral. (He also seems to misrepresent basic facts about the case, but I digress.) Ostensibly, a Jewish or Muslim coach would have the same freedom to offer a prayer, but the situations are not comparable. The practice in question is explicitly Christian. Even if every religion prayed in the same way—and they do not—it is hard to imagine large numbers of players joining their coach in these moments in this wildly unlikely hypothetical situation, while it is comparatively easy to imagine their parents asking that such a coach be removed.

But this is also the problem.

Basically every study shows that roughly 70% of people in the United States are some flavor of Christian, with Protestant denominations making up the overwhelming majority of those. The number of religiously unaffiliated people is on the rise, but some of them remain broadly Protestant, just without being affiliated with a particular church. Under these circumstances, I think it is all the more important to ensure that people in positions of authority in public institutions—whether coaches or teachers or principals—are not implicitly creating a situation where students feel pressured to either join a religious activity or be singled out for choosing not to join. To do otherwise tacitly puts the state in the position of endorsing the dominant religion, whether or not it deliberately chooses to do so. I fear that is the point of this ruling.

As Sonia Sotomayor points out in her dissent, such entanglements are hardly a win for religious freedom:

[This ruling] elevates one individual’s interest in personal religious exercise, in the exact time and place of that individual’s choosing, over society’s interest in protecting the separation between church and state, eroding the protections for religious liberty for all. Today’s decision is particularly misguided because it elevates the religious rights of a school official, who voluntarily accepted public employment and the limits that public employment entails, over those of his students, who are required to attend school and who this Court has long recognized are particularly vulnerable and deserving of protection. In doing so, the Court sets us further down a perilous path in forcing States to entangle themselves with religion, with all of our rights hanging in the balance. As much as the Court protests otherwise, today’s decision is no victory for religious liberty.

N.B. The discussion here is usually pretty light, but I’ve disabled comments on this post anyway because I don’t have the energy to field comments on this topic right now.

The Dinner

One of my favorite things to do when I meet people from foreign countries is to ask them what they think the best novel is from their country. This works almost as well to start a conversation as asking them about their country’s food and is an easy way for me to add interesting volumes to my reading list. A few years ago at a virtual gathering during an online conference I happened to be chatting with someone from the Netherlands who mentioned Herman Koch’s The Dinner as not necessarily the best novel, but as one that was particularly well-received.

A few centuries from now, when historians want to know what kind of crazies people were at the start of the twenty-first century, all they’ll have to do is look at the computer files of the so-called “top” restaurants.

The Dinner is a tidy novel that ostensibly takes place over the course of a single evening, the titular dinner at a fancy restaurant. Serge Lohman, the frontrunner to be the next Prime Minister, arranged this dinner so that he and his wife Babette could discuss some family business with his younger brother Paul and his wife Claire.

Paul narrates the story and is fond of recounting the truism from Anna Karenina that “Happy families are all alike; every unhappy family is unhappy in its own way.”

Koch’s achievement in The Dinner is found in interrogating the blurred line between those two categories.

Paul can barely stand his brother, whom he characterizes as a fraudulent boor. Serge, he thinks, represents much of what is wrong with society. He lacks imagination about food while also being a wine snob who puts on airs about being an everyman. Similarly, he makes a big deal about how he adopted a son from Burkina Faso, but is entirely oblivious to how his behavior oppresses the citizens of the small French town where he owns a vacation home.

Like all younger brothers, he likes to make his older brother squirm. (Not spoken as an older brother, or anything.)

When the story opens, Paul seems to have a happy family. He and his wife Claire are a loving couple—even if they like to egg on Serge from time to time—and if their son Michel is having a hard go of it lately, well, he’s a teenager. It isn’t as though he’s into drugs. Paul has some sharp, jaded observations about the restaurant and his brother, but he does not, for the most part, vocalize them. Further, he seems genuinely concerned when Babette arrives at the restaurant having apparently been crying in the car, and he is frustrated with his brother’s superior attitude toward the restaurant staff. In short, he seems like a nice enough guy.

Slowly, the reader is disabused of these initial impressions.

It turns out that this family has a nasty secret. Some months ago, video emerged of a brutal attack on a homeless person sleeping at an ATM. Two teenagers walking into the ATM vestibule first threw objects at the woman, followed by a can of gasoline that erupted into flame and killed her. Nobody was apprehended for the crime, but Paul recognized the two boys: his son Michel and his nephew Rick.

As it happens, this is the family business that Serge wants to discuss—after all, he has a political career to consider. Paul’s instinct is to protect his son, and the only question left is how far he will have to go.

(There is more to the plot, but I’m ending the synopsis here so as to not give away some of the twists in this nasty family drama.)

The strength of the novel is found in the gradual reveal of Paul’s personality and how that shapes the reader’s understanding of the Lohman family. Koch introduces Paul as the mild brother of a politician of some renown and slowly peels back that exterior to reveal a monster with vicious ideas and a history of assault. His actions speak for themselves, even as he maintains his own moral superiority.

When faced with lower intelligences, the most effective strategy in my opinion is to tell a barefaced lie: with a lie, you give the pinheads a chance to retreat without losing face.

The Dinner can be read in some ways as a metaphor about getting to know someone. Everyone is the protagonist of their own story and many are convinced of their own rectitude. When we meet new people, we only know the face they present to the world and only later learn what type of person we are interacting with. Most of us don’t have nearly such odious skeletons in our closet, but neither are we literary creations.

I ultimately found The Dinner a little bit on the nose in how it revels in this family drama, but it is a tightly-crafted and compelling story that reads very quickly—even if I emerged from it wanting to wash my hands of the entire Lohman clan.

ΔΔΔ

I recently finished Christine Smallwood’s The Life of the Mind, which seemed to draw parallels between a miscarriage and being an adjunct professor. While the novel had some uncomfortable observations about being an adjunct, I found the story weighted more toward the miscarriage side. Still, the implications of the comparison are uncomfortable. I also finished Tom Standage’s A History of the World in Six Glasses, which I ultimately found disappointing. It was cute and had some nice anecdotes, but I kept hoping for a stronger argument and kept bumping against implications about, for instance, Western Civilization. By contrast, the first volume of the Saga graphic novel was truly great.

May Reading List and an update on my 2022 reading goal

Back in January I set out a goal to read one article every working day that was not explicitly linked to my research. The idea was that my academic reading had become too narrowly focused on books and thus that I was missing out on some of the richness of the field.

One article shouldn’t be too onerous, I thought. And yet, I found even one article increasingly unmanageable as the semester wore on, particularly when many of the articles that looked interesting (how I tended to choose what to read) were forty or more pages long—or, in some cases, required ILL requests to access them.

I had hoped that my energy for this project would return with the end of the semester, but the reality is that the start of my summer has been characterized by an all-consuming combination of busyness and torpor brought on by the exhaustion of the semester. The five articles I read in May (listed below) turned out to be the last gasps of my semester routine. While I have made a good start on other reading goals, I have yet to read a single article in June.

In the spirit of doing less, and with a number of more pressing tasks on my to-do list, I am putting this project on hold for the remainder of the summer and will revisit it in the new semester. In the meantime, I’ll keep tracking what I read and consider anything from this summer a bonus.

The May List

  • Scott Lawin Arcenas. “The Silence of Thucydides.” TAPA 150 (2020): 299–332.
  • Mira Green. “Butcher Blocks, Vegetable Stands, and Home-Cooked Food: Resisting Gender and Class Constructions in the Roman World.” Arethusa 52, no. 2 (2020): 115–32.
  • Alexandra Bartzoka. “The Vocabulary and Moments of Change: Thucydides and Isocrates on the Rise and Fall of Athens and Sparta.” Pnyx 1, no. 1 (2022): 1–26.
  • David Morassi. “War Mandates in the Peloponnesian War: The Agency of Athenian Strategoi.” GRBS 62, no. 1 (2022): 1–17.
  • Morgan E. Palmer. “Time and Eternity: The Vestal Virgins and the Crisis of the Third Century.” TAPA 150 (2020): 473–97.

Learning to Run Again

This morning I woke up before my alarm. I grabbed my phone to turn that alarm off and checked a few things before getting out of bed. Then I puttered around the house, reading a novel and stretching by turns for a little more than an hour, just long enough to steep and drink a big mug of tea.

Then I laced up my running shoes and set out.

My current bout of running came on about a month and a half ago. I have never been as serious or successful a runner as my father and brothers who for a number of years now have run marathons together, but this is not my first time running. In high school, I would go for runs with my father and ran a few local 5k races. Early in graduate school I tried running again. It was during this period that I reached my longest distances, running about five miles at least once a week and topping out at about eight miles before running into a leg injury. I tried a “run the year” challenge a few years ago and contributed 173 miles to my team’s total, including a few miles when I couldn’t sleep early in the morning while on a job interview. Then injuries. I tried again after the pandemic closed the gym where I exercised. My last attempt, shortly after moving last summer (and, in retrospect, after holding my foot on the accelerator of a moving truck for many hours), ended abruptly with sharp pain in my lower calf less than a quarter mile into a run.

I am a slow runner, particularly these days. I am also not running very far—just a little under two miles today. But this is okay. My focus right now is on form. On my gait, and on trying to keep it in line with how I imagine I run barefoot, since I have suffered far more injuries while running in shoes than I ever did playing ultimate barefoot, which I did into my 30s. Correlation need not be causation, but so far, so good. I am running slow and careful, and celebrating each run for ending uninjured rather than for reaching a particular distance or speed. Those will come, but only if I can stay healthy.

I like the idea of running more than I actually like running. Rather, I would like to be someone who likes running, who achieves that runner’s high, who runs an annual marathon. But I spend my runs thinking about how everything hurts and, recently, fretting about whether this footfall will be the one when something gives out and I have to start over. I can also only compete against myself while running, and pushing myself this way is exactly what I’m trying not to do.

By contrast, I used to play basketball for hours every week. My slowness didn’t matter as much on a confined playing surface where I could change speeds and understand the space. And since I didn’t like to lose, even in a silly pick-up game, I could just lose myself in the game and not think about what hurt.

And yet, running is what I have right now, so running is what I’m doing alongside a daily yoga routine.

My return to running also prompted me to finally pull Christopher McDougall’s Born to Run off my to-read shelf. McDougall describes himself as a frequently-injured runner, so I thought it might unlock the secret to running pain-free. In a way, it might have.

The centerpiece of Born to Run is a 2006 race in Copper Canyon in the Sierra Madre mountains between a motley crew of American ultramarathon runners, including Scott Jurek, one of the best in the world at the time, and some of the best Rarámuri (Tarahumara) runners, arranged by a mysterious figure called Caballo Blanco (Micah True).

(The race went on to become an annual event, though its founder died in 2012.)

It is an incredible story. Rarámuri runners had appeared in ultramarathon circles at the Leadville 100, a high-altitude ultramarathon in Colorado, in 1993 and 1994. A guide and race director named Rick Fisher rolled up to the race with a team of Rarámuri for whom he was the self-appointed team manager. The Rarámuri runners won both years, setting a new course record in the second race, before deciding that putting up with Fisher’s actions wasn’t worth their participation.

(An article from 1996 in Ultrarunning about a race in Copper Canyon in which True also participated acknowledges Fisher’s “antics,” but suggests that they didn’t end his relationship with the tribe.)

However, this story is just the hook. Born to Run is an extended argument for a minimalist running style that exploded in popularity following its publication. McDougall’s thesis is that modern running shoes, and the industry predicated on selling those shoes, cause us to run in ways that lead to injury. This argument is somewhat anecdotal, relying on personal experience and stories of incredible endurance from athletes before the advent of running shoes.

The Rarámuri, whose name means “The Running People,” are exhibit A. The Rarámuri are a tribe that lives in isolated villages deep in the Sierra Madre Occidental, in the Mexican state of Chihuahua. The terrain makes long-distance travel a challenge, so the Rarámuri run. But they also run for sport and ceremony in a ball game called rarajipara, in which teams work to kick a ball an agreed-upon distance, chasing it down after each kick. All the while, the runners wear only traditional sandals called huaraches.

My own experience with running makes me sympathetic to McDougall’s argument, and I am seriously considering getting a pair of zero-drop shoes and transitioning in this direction for my footwear. However, the more I read about running injuries, the more it seems that the answers might be more idiosyncratic. That is, there is a lot of conflicting evidence. While some studies suggest physiological advantages to barefoot running, others point out that not all barefoot runners run with the same gait. A number of studies suggest that barefoot running has shifted the types of injuries (aided perhaps by people transitioning too quickly) rather than reducing them. I think that barefoot running could be good for me, but all of this makes me think that I shouldn’t ditch the running shoes for every run just yet.

While I was reading Born to Run, a friend suggested that I read Haruki Murakami’s What I Talk About When I Talk About Running, which connects my current focus on running with my ongoing obsession with writing.

In addition to being a novelist, Murakami is a marathoner and triathlete who describes how his goal is to run one marathon a year. This memoir is a collection of essays on the theme of running and training, and, unlike Born to Run, is not meant to be an argument for a particular type of training.

I think that one more condition for being a gentleman would be keeping quiet about what you do to stay healthy.

Nevertheless, I found What I Talk about When I Talk About Running to be particularly inspiring. Murakami is a more successful runner than I ever expect to be, even though I’m only three years older now than he was when he started running. And yet, I found something admirable about his approach. Running, like writing, is just something Murakami does, and he doesn’t think about a whole lot when he is on the road. His goal in running is to run to the end of the course. That’s it. He gets frustrated when he can’t run as fast as he used to, but he is not running to beat the other people, and uses the experience to turn inward.

And you start to recognize (or be resigned to the fact) that since your faults and deficiencies are well nigh infinite, you’d best figure out your good points and learn to get by with what you have.

But it should perhaps not come as a surprise that I highlighted more passages about writing than I did about running, though Murakami makes a case that there is broad overlap between a running temperament and a writing one. Both activities require long periods of isolation, and in both, success is not synonymous with “winning.” Doing them is more important than being the best at them.

I don’t think we should judge the value of our lives by how efficient they are.

A useful reminder.

ΔΔΔ

I have had a hard time writing about books recently. Before these two books, I got bogged down in Olga Tokarczuk’s The Books of Jacob, which I am still trying to process, and then read Ondjaki’s The Transparent City, which is a very sad story about an impoverished community in Luanda, Angola. I would like to write about these, but I’m not sure that I have anything coherent to say and June has turned much busier than I had hoped—last week I was at AP Rating in Kansas City, then I wrote a conference paper that I delivered yesterday, and now I’m staring down a book deadline and other writing obligations. By the time I have time, I might be too far removed to come back to those books. I am now reading Christine Smallwood’s The Life of the Mind, which is a novel about adjunct labor and miscarriage in a way that highlights the lack of control in both situations.

Some thoughts on small-screen Star Wars

Star Wars is a story that I simply cannot quit, my thoughts on The Rise of Skywalker notwithstanding.

Perhaps this should be expected. I might have seen the original trilogy once in the past decade and a half, but I watched Return of the Jedi so frequently as a teenager that I can recount verbatim entire scenes from the movie. I had more issues with the prequel trilogy, but that didn’t get in the way of hours of late-night debate about the films when I was in college, and I devoured dozens of the now-heretical novels.

I was cautiously excited to see the return of Star Wars to the big screen, but, although I acknowledge a myriad of ways in which they are superior movies to the original trilogy, the new films ultimately didn’t land for me. The newest trilogy ended up as a super-cut of the original, an inescapable loop of familiar scenes and beats with a superficially new set of locations and a somewhat more garbled narrative. Basically, this loop prevented the story from being pushed in new and interesting directions. I accepted this as a feature of The Force Awakens, but then it happened again in The Last Jedi and I simply skipped The Rise of Skywalker.

And yet, I have found myself pulled back into the latest batch of small-screen Star Wars stories. At the time of writing this, I have seen both seasons of The Mandalorian, The Book of Boba Fett, and the first four episodes of Obi-Wan Kenobi.

These shows seem more designed for viewers like me, at least on the surface. These are smaller stories by design. I really enjoyed the Space Western aesthetic of The Mandalorian, and the “lone wolf and cub” story arc of season one was appealing even before that cub turned out to be the adorable Grogu. I’d give the season a B/B+. The second season and The Book of Boba Fett both had their moments, but I found the stories muddled and uneven.

Which brings me to Obi-Wan. Like these other projects, there are things I like about the series. As much as I was drawn to the Space Western parts of Star Wars, I will admit a little thrill at getting to see the Space Samurai in action again. I also think that the arc that holds the most promise is the internal one of Ben Kenobi himself. We have only ever seen him competent—first as a hotshot padawan, then as a capable general, and finally as a wizened old sage who masterfully uses the Force and still goes toe-to-toe with Vader. In this series, Ewan McGregor is playing a man lost. He is a hermit not unlike the one we meet in the original movie, but without any of his surety. He has buried the lightsabers and, seemingly, renounced using the Force, such that, four episodes into a six-episode arc, he is still barely willing to use the simplest little tricks that he used when we first met him. Both the narrative internal to the series and the larger character arc demand that he recover his mojo before the end of the series, but I quite like the way that the show juxtaposes an isolated and emotionally fragile Jedi with the inchoate but growing resistance to the Empire.

But while there are individual aspects of Obi-Wan that I like, I am finding myself questioning what purpose it serves other than as fodder for an insatiable content machine.

In a recent article in WIRED, Graeme McMillan asserted that the fundamental problem with these shows is that they are burdened by the weight of the Star Wars backstory. That is, each story is seemingly approved based on how well it ties back to the Ur-text, which, in turn, prevents them from flourishing on their own. We know that Han Solo saved Chewbacca’s life, won the Millennium Falcon from Lando Calrissian, and did the Kessel Run, so we get Solo. We know the rebels stole the Death Star plans, so Rogue One. What happened to Boba Fett after the Sarlacc? There’s a show for that. Ever wonder what Ben was up to while hanging out near Luke on Tatooine? Get ready for Obi-Wan Kenobi.

As McMillan puts it:

By this point, what truly worked about the original Star Wars movies—the awe of invention and discovery, and the momentum of the propulsive storytelling that left details and common sense behind in the rush to get to the next emotional beat—has been lost almost entirely, replaced by a compulsive need to fulfill nostalgia and comfortably mine existing intellectual property. Whereas those first three movies were the Big Bang that started everything and built a galaxy far, far away, what we’re witnessing now is an implosion of fractal storytelling, with each spin-off focusing on a smaller part of the story leading to a new spin-off focusing on an ever smaller part of that smaller part.

I broadly agree with McMillan’s argument, but I also think that the root problem is more than just the unwillingness of adults to suspend disbelief—though that might have influenced the short-lived midichlorian fiasco in the prequel trilogy.

What McMillan attributes to “the awe of invention and discovery” and “propulsive storytelling that left details and common sense behind,” I would describe as the legendary nature of the story. Lucas took deep inspiration for the original trilogy from the archetypes found in Joseph Campbell’s The Hero With a Thousand Faces, and the trappings of myth and legend go beyond Luke’s heroic journey. I particularly see this in how the original trilogy situates itself within a larger universe with nods and hand waves. We don’t need to see these things to know that they exist. They just are. What does it mean that:

General Kenobi. Years ago you served my father in the Clone Wars. Now he begs you to help him in his struggle against the Empire. I regret that I am unable to present my father’s request to you in person, but my ship has fallen under attack, and I’m afraid my mission to bring you to Alderaan has failed.

Doesn’t matter. Waves hand. Move along.

Here’s the problem: legends aren’t well-served by filling in the cracks.

It is one thing to approach a legend from a fresh perspective—the Arthur story from the perspective of Merlin or Morgan or the Theseus story from the perspective of Asterion (the Minotaur). This has been the stock in trade of mythology since antiquity. Legends are fundamentally iterative. But approaching legends this way respects the stories as legends. It doesn’t matter whether the character is familiar when each new story contributes to a polyphonous chorus that defies the logic necessary for a “canonical” story.

By contrast, the current wave of Star Wars projects (and even the prequel trilogy, to an extent) strike me as fundamentally expository. They can be beautifully shot and well-acted (and they often are!), but they are filling in the cracks of the legend and creating new discontinuities in the process. When Vader and Kenobi square off on the Death Star, Vader says, “When I left you, I was but the learner; now I am the master.” At the time and through the prequels, this seemed to indicate that they hadn’t met since the events of Revenge of the Sith, but now they fight at least once in the intervening years. This series can only turn out one way if that line is still going to work, but it also spawns a series of follow-up questions that strain the suspension of disbelief in the original. Similarly, one might ask whether someone is going to completely wipe the memory of young Leia for her to appeal to Kenobi on the basis of her father rather than, you know, reminding him that he saved her life once and now she needs his help again.

I am skeptical that either the big or small screen Star Wars will be able to escape this problem. Few of the new characters have been particularly memorable, and most of those that were owed their origins outside of these projects. As McMillan notes, the result has been increasing insularity within the narrative world of Star Wars that relies on familiar names to draw viewers and generally fails to create new characters that can expand and complicate the universe.

All of this stands in contrast to the approach taken in the books set in the untamed wilds of the period after the original trilogy when there was no plan for movies to carry the canonical stories forward. Some of these books are pretty good, some are quite bad, but they collectively built out a rich universe that carried forward the stories of characters from the movies (e.g. Wedge Antilles) while inventing new favorites among both the protagonists (e.g. Corran Horn and the Skywalker children) and the antagonists (e.g. Admirals Thrawn and Daala).

They didn’t worry about filling in the cracks of the legends, but accepted the films as gospel while looking forward to what came next. The result is a series of more compelling questions: how does the Rebel Alliance capture Coruscant (the capital) when the Emperor is dead but his military apparatus is still in place? What would it be like for an alien or a woman to rise to the rank of admiral in the notoriously patriarchal and xenophobic imperial navy? What happens when you introduce good guys who for one reason or another dislike Luke Skywalker and Han Solo?

I can understand why a studio might reject this approach out of hand, of course. For instance, the novels remain deeply reliant on the original characters, and there are only so many times that an actor can play the same role. James Bond and comic book characters like Batman, Superman, and Spider-Man have survived reboots with different actors, but that has also led to some fatigue with the proliferation of dead parents in an alleyway behind the theater. A closer analogue to Star Wars is its corporate sibling, the Marvel Cinematic Universe, which has not made any attempt to recast Robert Downey Jr.’s Tony Stark and thus is itself at a crossroads. Star Wars can hardly replace the much-missed Carrie Fisher, leaving the studio to rely on de-aging Mark Hamill and producing CGI renderings of Peter Cushing and Carrie Fisher. But this also leaves Star Wars a fragile shell perpetually at risk of collapsing in on itself. To echo Princess Leia in the film that started it all: sometimes the more you tighten your grip, the more your objective slips through your fingers.

The End of Burnout

Many authors tell people who already feel worn out and ineffectual that they can change their situation if they just try hard enough. What’s more, by making it individuals’ responsibility to deal with their own burnout, the advice leaves untouched the inhumane ethical and economic system that causes burnout in the first place. Our thinking is stuck because we don’t recognize how deeply burnout is embedded in our cultural values. Or else we’re afraid to admit it. Insofar as the system that works people to the point of burnout is profitable, the people who profit from it have little incentive to alter it. In an individualistic culture where work is a moral duty, it’s up to you to ensure you’re in good working order. And many workers who boast of their hustle embrace that duty, no matter the damage it does. In a perverse way, many of us love burnout culture. Deep down, we want to burn out.

I resemble this statement, and I don’t like it.

By the definitions established in Jonathan Malesic’s recent book The End of Burnout, I have never burned out—at least not completely. I have never reached a point of absolute despair that rendered me incapable of going on, which, along with utter exhaustion and reduced performance, marks burnout. The other two, however…

I wouldn’t say that I worked hard in high school, at least on the whole. There were projects that I worked at, and if something interested me I would work hard, but not so much overall. Midway through my undergraduate career something snapped. Seemingly overnight I became a dedicated, if not efficient, student. I divided everything in my world into “productive” activities and unproductive ones and aspired to spend my waking time being as productive as possible. Schoolwork obviously counted as productive, but so too did exercise and investing time in my relationships. Spending time not doing things was deemed unproductive.

At first this was innocuous enough. I was young, and productive time included fun things, right? My numerous and varied interests led me to do all sorts of things, and I was determined to do them all. By the time the second semester of senior year rolled around this was almost a mania: I was working, running a club, taking a full course load, working on two research projects, and auditing extra classes that just looked interesting to me, as well as exercising and generally spending time on the aforementioned relationships.

At a time when the stereotypical college student develops a case of senioritis, going through the motions while looking forward to what comes next, I somehow managed to define sleep as “not productive.”

Seriously.

I cringe thinking about it now, but I went through most of a semester averaging about three hours of sleep a night. I don’t think I ever pulled an all-nighter, but most nights I only got an hour or two: going to bed around midnight, getting up at 1:30 to grab coffee and food before the late-night place closed, working until the gym opened, exercising, showering, going to class, and then either doing homework or heading to my shift at work. I would get eight hours or so on Fridays after work and whatever recreational activities I had planned. Several people, I later learned, had conversations about when I was going to collapse, though never within my earshot. It was bad. Trust me when I say that you shouldn’t do this.

According to the journal I kept at the time, in an April entry titled “I guess I did need to sleep,” I slept for 13 hours straight.

I have never done something this self-destructive since, but there have been numerous times that I have edged in that direction.

  • The year after college I ended up working as many as 90 hours a week, often going weeks at a time without a day off, until I just couldn’t physically keep it up; at one point I slept for more than 12 hours and forced myself to take days off, even though the nature of the job made that difficult.
  • I worked almost 30 hours a week on top of my school responsibilities (a “full” course load and grading for a class) while completing my MA.
  • I nearly snapped while completing the work for one of the toughest seminars I took in grad school during the same week that I was taking my comprehensive exams.
  • Another semester, while cobbling together jobs as an adjunct, I took on so much work (six classes, one of which was nearly twice as much work as I thought when I accepted it) that I had to stop writing entirely just to stay on top of the teaching.
  • The semester after that I developed (probably anxiety-induced) GERD and broke out in hives.
  • I frequently have to remind myself that taking one day off a week is okay, let alone two. At least I usually sleep 7–8 hours a night these days.

Lest it sound like I’m bragging: these are not badges of honor. They are symptoms of the perverse relationship with work that Malesic describes, wedded to ambition and an anxiety that oscillates between imposter syndrome and a deep-seated fear that I’ll once again become someone who does nothing if I let up even a little. The worst part: my behavior took place within systems that celebrate discipline, but it was almost entirely self-inflicted.

However, I have never burned out like Jonathan Malesic.

Malesic had achieved his dream of becoming a tenured professor of religion and living the life filled with inspirational conversations with young people that he imagined his own college professors had lived. But that life wasn’t as great as he imagined. His students were apathetic, the papers uninspired and, at times, plagiarized. There were meetings and committees, and his wife lived in a different state. In short, the job didn’t live up to his expectations, which, in turn, caused his life to fall apart. His job performance lagged. He snapped at students. He drank too much and found himself incapable of getting out of bed. And so, eventually, he quit.

The End of Burnout is an exploration of the forces that caused his disillusionment with his job and of possible ways to escape it. Put simply, Malesic’s thesis is that two features of the modern workplace cause “burnout.”

  1. People derive personal meaning and worth from their jobs.
  2. There is a gulf between the expectations and reality of those jobs.

That is, there is a broad expectation in the United States that your job determines your worth to society. This is obviously not true, but it is signaled in any number of ways, from making health insurance a benefit of employment, to looking down on “low status” jobs like food service, to the constant expectation that you ought to be seeking promotion or treating yourself like an entrepreneur. But if your worth is wrapped up in your job, then you might enter a profession with a set of expectations that are out of sync with its actual conditions—doctors who want to heal people and end up typing at a computer all day, or a professor who got into teaching because of Dead Poets Society and ends up teaching bored, hungover students in general education classes. On top of it all, the responsibility for “solving” the issue is passed on to the worker: you’re just not hustling hard enough. Have you tried self-care?

The End of Burnout is a thought-provoking book. Malesic examines the deep historical roots of phenomena that might today be called burnout, discusses the pathology of an ambiguous condition whose label is likely overused, often applied to acute exhaustion rather than true burnout, and explores how social pressures (e.g. the moral discourse that equates work with worth) exacerbate the phenomenon, before turning to alternate models of work and human dignity.

I picked up The End of Burnout for a few reasons.

Most obvious, perhaps, is my toxic relationship with work, as outlined above, to the point where I thought that I had burned out on multiple occasions. Based on the descriptions Malesic provides, I was usually acutely exhausted rather than truly burned out, with the result that, at least so far, I have always been able to bounce back with a few weeks or months of rest.

(The one exception might be the restaurant work straight out of college, but even that did not stop me from working in another franchise in the same chain for two more years while attending school.)

Cumulative exhaustion can lead to burnout, but I came away unconvinced that I have ever really been walking down that path. I have been frustrated, of course, and I can tell that I am creeping toward exhaustion when I start excessively doom-scrolling on Twitter, but I did not relate to the sheer disillusionment Malesic describes. When I have considered other employment options over the past few years, it has always been because of a dearth of jobs, not disillusionment with the work itself.

The main difference, at least to this point, is that I have never viewed this job through rose-colored glasses. Writing about history is something I see as a vocation, but I have approached the teaching and associated work as a job, albeit one that aligns with those other aspects of my life and thus is more enjoyable than some of the others I have had.

At the same time, I have noticed a shift in my relationship to hustle culture now that I am in my mid-30s. I still work hard and have certain ambitions, but increasingly they center on finding ways to spend my time reading and writing about things I find interesting and important, and on having employment with enough security, money, and free time to do that.

Likewise, the idea of treating oneself as an entrepreneur, which Malesic identifies as an element connecting worth to employment, has always left a sour taste in my mouth. When people tell me that I could (or should) open a bakery, I usually shrug and make some polite noises. I have managed a restaurant in my life and have very little interest in doing so again. I bake because I like the process and enjoy cooking for people I like, not because I want to turn it into a business with all of the marketing, bookkeeping, and regulations that would entail.

(I have also considered trying to turn my writing into a subscription business, but I find that incompatible with the writing I do here. If I made a change, it would involve some sort of additional writing with a regular and established schedule—say, a monthly academic book review for a general readership with a small subscription fee designed to cover the cost of the book and hosting. A thought for another day.)

However, I also picked up The End of Burnout because I am worried about the effect that this culture has on my students. Nearly every semester I have one or more students who report losing motivation to do their work. This past semester one student explained it as a matter of existential dread about what he was going to do with his degree, but it could just as easily be anxiety or concern over climate change or the contemporary political culture or school shootings.

I have long suspected what Malesic argues: that burnout is systemic. In a college context, this is why I get frustrated every time a conversation about mental health on campus takes place without addressing those systemic factors. Focusing on the best practices and workload for an individual class is (relatively) easy, but it is much harder to account for how the courses a professor teaches or a student takes interact with one another. I am absolutely complicit in this problem. One of my goals for the next academic year is to reexamine my courses, because the reality is that the most perfect slate of learning assessments is meaningless if the students end up burned out. I can’t fix these issues on my own, but Malesic’s book brought into greater focus why I need to be part of the solution, for my own sake and my students’. I never want one of my students to make the mistakes I did at their age (which probably explains why the most common piece of advice I give is “get some sleep”), and I can’t help them if I am also in crisis.

The back half of The End of Burnout turns to possible solutions. Perhaps unsurprisingly, given his background as a professor of religion, this discussion frequently focuses on groups with a Christian bent. He spends a chapter, for instance, talking about how various Benedictine communities apply the Rule of St. Benedict to tame the “demon” of work. Some groups strictly follow the Rule, limiting work to three hours so that they can dedicate the rest of their lives to what really matters, prayer. Other groups, like several in Minnesota, were less rigid, but nevertheless used similar principles to divorce work from worth and to allow one’s service to the larger community to change over time.

The other chapter in this section was more varied, and included a useful discussion drawn from disability activists, but it also featured a prominent profile of CitySquare, a faith-based Dallas non-profit with uniquely humane policies around work expectations and support for its staff. These examples sat awkwardly with my agnostic worldview, as someone who believes that we should be able to create a better society without religion, and particularly without Christianity. However, Malesic’s underlying point is not that we all ought to follow the Rule of St. Benedict. Rather, he makes a case that each profile, in its own way, can help us imagine a culture where the value of a person is not derived from their paycheck (or grade).

To overcome burnout, we have to get rid of the [destructive ideal of working to the point of martyrdom] and create a new shared vision of how work fits into a life well lived. That vision will replace the work ethic’s old, discredited promise. It will make dignity universal, not contingent on paid labor. It will put compassion for self and others ahead of productivity. And it will affirm that we find our highest purpose in leisure, not work.

Malesic’s vision here is decidedly utopian and hardly new, and his warnings about the consequences of workplace automation are a modern echo of 19th-century choruses. But the ideals he presents are worth aspiring to nonetheless. As long as we work within a depersonalizing, extractive system that treats people as interchangeable expenses against the company’s bottom line, that system will not only continue to grind people down and spit them out but also contribute to nasty practices elsewhere in society, like treating food service workers with contempt. Severing the connection between personal worth and paid work won’t solve every problem, but it is a good place to start.