Weekly Varia no. 17, 03/11/23

This was a big week for me because my first book was officially released. I will have an update on what comes next for my writing soon enough, but, first, I have to get through this semester. This week marked the end of the first half of the spring semester. Flowers are starting to pop up around Kirksville, but I mostly didn’t get to enjoy them because I was busy trying to finish a round of grading so that I had one less thing to do over the next week. I didn’t quite meet my goals because my week filled up with meeting after meeting as everyone tried to squeeze in one more thing before break. Still, I got close enough that I should be able to take a much-needed few days off over the next week.

This week’s varia:

  • Pasts Imperfect this week came early to align with Purim. The lead story is Jordan Rosenbaum unpacking the history of Hamantaschen, concluding that the traditional cookie is indeed symbolic, but that it derives from a different part of the Esther story and represents neither Haman nor a hat.
  • Javal Coleman writes in the SCS Blog about being the only Black person in a Classics Department. This is a great piece about belonging, and about the modern propensity to define Black people as outsiders in contrast to the ancient tendency toward inclusion. I read it when it first came out two weeks ago and meant to include it in a previous wrap-up but failed to do so.
  • Matt Gabriele brings an old blog post to Modern Medieval, in which he critiques the idea of a meaningful distinction between “public” and “academic” scholarship in terms of what we are actually doing (rather than genre conventions and tone). He notes that the post dates to 2015, but it is again timely in light of a recent New Yorker story dredging up last year’s controversy about “public history,” in which the former president of the American Historical Association, James Sweet, aired his grievances against trained historians who engage the public online. The piece is not worth linking to, but, like his jeremiads last year in his presidential column in Perspectives, Sweet’s willingness to air his grievances against younger, tenuously-employed generations is a dispiriting omen for the future of the profession, given that a) he is hardly the only senior scholar to feel this way, and b) far from confronting the fact that the field is under attack—an attack that forecloses an academic home for the very people he lamented were simply Tweeting away—his complaints give more fuel to the people doing the attacking.
  • Bill Caraher weighs in on ChatGPT. I appreciate his willingness to express what he does not know, and see some sense in his suggestion that ChatGPT and similar products might be able to replace remediation for students who understand the material in every way except the writing. I’m not sure I agree in whole, but he’s right that there is a cost for both the student and the teacher when you need to take time doing what is effectively remedial work, and I have often found that campus writing centers are only so helpful when students need this sort of foundational help. He followed it up with a thoughtful post on paywalls, publishing, and AI aggregation.
  • Paul Thomas also discusses ChatGPT, but through the lens of citation: it (along with the new I.B. guidelines) has added another layer to the cognitive load that comes with citation. His position is rooted in the chaos of trying to teach and unteach nitpicky citation styles (rather than hyperlinks, which would only work for some fields, even at a future date), which distract students from the process and meaning of citation in the name of accurate formatting. I’m certainly sympathetic to that frustration.
  • A new study claims that there was no exacerbation of mental health crises during the pandemic, a conclusion reached by excluding from the study lower-income countries, younger age groups, and anyone already prone to mental illness. This might be correct within the bounds of the study, but only by generalizing so much that it masks a more accurate representation of what happened. It also might speak to the human capacity for resilience and forgetting. For my part, I’m still waiting for the period of lockdown boredom I was promised.
  • Elon Musk is reportedly planning his own town in Texas. I don’t like giving the man air time, but something about the Wall Street Journal headline (I can’t read the whole piece because I’m not a subscriber) touched a nerve. Company towns are not utopias, and we should be very wary of the latest return to a Gilded Age labor environment, alongside…
  • Arkansas became the latest state to facilitate child labor.
  • From NPR, a story about a Medicaid requirement that if a person receiving treatment under the program dies, the state government is supposed to recoup the amount spent from the estate. Some states do this in a pro-forma way and collect almost nothing or set relatively high income thresholds, while states like Iowa contract the task out and aggressively recoup the costs—including by seizing the home. Even with carve-outs for spouses and disabled children that can defer collection, this seems to be an exercise of cruelty in the name of fiscal responsibility.
  • More and more companies are admitting that the recent “emergencies” served as excuses to increase prices beyond what rising costs strictly required, and prices in these situations tend not to come back down.
  • Silicon Valley Bank, a bank that services many tech startups, collapsed after a panic this week. SVB pursued “venture debt,” lending money to those startups, but the companies were spending much more money than anticipated. Not for nothing, this collapse also comes just a few years after another round of banking deregulation.
  • The BBC has decided not to air an episode of the latest David Attenborough program because it includes themes of environmental destruction and the broadcaster fears right-wing backlash. Not only is this a travesty, but Attenborough’s work has featured these issues for years, so it isn’t as though this is a new development.
  • RIP Tevye the Milkman.
  • Toblerone is going to have to drop the Matterhorn from some of its packaging because the company is moving part of its production to Slovakia, thus violating Swiss rules on “Swissness.” This AP piece has a neat trivia point, too: the name is a neologism that blends the founder’s name (Theodor Tobler) with the Italian word for nougat (torrone).

Album of the Week: Moscow Philharmonic, Russian Easter Festival Op. 36 (Tchaikovsky Symphony no. 5 and Rimsky-Korsakov Russian Easter Festival Overture, a.k.a. grading music)

Currently Reading: Dan Saladino, Eating to Extinction; Salman Rushdie, Midnight’s Children (it was a long week of grading)

Hamantaschen two ways: cherry and poppyseed.
Libby in full fighting form (on her back, yelling)

Weekly Varia no. 15, 02/25/23

I looked at my course evaluations again this week. Week six of the semester is a strange time to check evaluations, but I had to compile summary evaluations as part of my annual review. Now, the utility of evaluations is deeply mixed in that they often reflect a combination of what grade the students believe they should have earned and how much work they believe they should have had to put in to earn whatever grade they did receive. I also find that any course policy that deviates from whatever normative practice the students are familiar with is liable to be met with polarizing opinions, which results in some combination of angry and enthusiastic comments.

My favorite ever comment was from a student who said that they should give me a raise.

Polarizing is how I’d characterize the response to Specifications Grading. A lot of students reported that it was challenging, but in a way that was both fair and conducive to doing their best work in learning the material, which is exactly the intent. Others found it grossly unfair, either because they had to put in more work to earn the high grade they wanted or because it “prevented” them from receiving their high grade (presumably because they didn’t want to complete optional revisions).

This has led me to mull over whether Specifications Grading is the best match for any class with papers. I am committed to the system at least for this semester, and it undoubtedly results in the students honing their skills. But it also requires me to give copious feedback if I want the students to be able to meet the higher standards in their revisions, and this is hard to do at scale. However, I also don’t want to walk back either the expectations for what students should be able to achieve by the end of the course or the flexibility that students unused to my teaching style sometimes find disorienting (yes, the extension is free, there is no trick involved). At the same time, even while acknowledging that no one professor can resolve the deep structural issues that lie behind the student mental health crisis, I hate to feel like I’m contributing to making the problem worse.

Then again, I had a handful of comments that explicitly mentioned how I made things better in this respect, so I must have done something right.

This week’s varia:

  • Javal Coleman writes in the SCS Blog about being the only Black person in a Classics Department and how that isolation makes one question their belonging.
  • This week in Pasts Imperfect, Matthew Canepa writes about the god Mithra, the subject of an upcoming conference, along with the usual roundup of projects (including a very exciting mapping project on Cahokia). The conference looks excellent, particularly in its focus on undoing the damage done to our understanding of the god by the obsession with identifying a “pure” tradition or the conviction that religions are unchanging. This was also the focus of Canepa’s excellent monograph, The Iranian Expanse (2018).
  • Arie Amaya-Akkermans writes a letter about the devastation at Antakya. He reports a particularly powerful opinion that the Turkish government will likely rebuild some of the antiquities to demonstrate its diversity and sophistication even while allowing the people to suffer.
  • Another earthquake struck Hatay province, already devastated by the earthquake that killed tens of thousands several weeks ago. I have no words.
  • A school resource officer found a loaded gun in a fourth-grader’s backpack after it was reported by other students, to whom the student was showing the gun.
  • Florida is considering a “Classical” Christian alternative to the SAT, in the latest of DeSantis’ aggressive attacks on education. My worry about this sort of thing isn’t so much that it will work—as long as parents are looking to send their kids to top schools elsewhere in the country, they’ll continue to take whatever tests those schools require, and whether the tests are worthwhile is a separate question—but that the actions of DeSantis and the people around him are rapidly pushing the Overton Window about education in a way that empowers people not just in Florida, but around the country, to indoctrinate and bully students.
  • Roald Dahl’s publisher is aiming to release revised editions of classic books that sand away the rough, insensitive edges of the man’s writing. The move is an entirely absurd reaction to the so-called culture wars, in my view, and disingenuous. Give context to the text as it was written if you want to account for changes in culture, but bowdlerizing everything to obscure an author’s politics while making a cash grab with a sanitized version for use in schools does a service to exactly nobody.
  • The office of Equity, Diversity, and Inclusion at Vanderbilt’s Peabody College used ChatGPT to produce an email sent to students about the Michigan State shooting last week. The message was predictably cold and lacking in any specific guidance on resources to help the students who, unsurprisingly, are not amused. I’m not at all surprised that university departments are using AI this way, given the widespread misconception that AI text generation can replace actual writing even as many of these same schools are considering draconian consequences for students who submit AI-generated work. We’re way past irony on this topic already.
  • The science fiction magazine Clarkesworld has been overwhelmed by AI-generated short story submissions—all unpublishably bad. The magazine’s editor, Neil Clarke, speculates on the reasons for this trend in a blog post; an update notes that he has suspended submissions while working on a solution, since the first three weeks of February saw nearly five times the submissions of January, which itself was twice the volume of December and had been the highest on record to that point. John Scalzi also points out that SFF magazines are vulnerable because they still pay authors.
  • NPR is the latest journalism platform to announce layoffs, noting a 20 million dollar drop-off in sponsorship revenue and pessimistic outlooks for a bounce-back in funding levels. I have my issues with some of how NPR chooses to cover politics in particular, but it is an absolutely essential part of the journalism apparatus given its mandate to cover events in every state. The erosion of journalism in this country is a disturbing (and accelerating) trend that is already showing consequences in the likes of George Santos.
  • The New Yorker profiles Itamar Ben-Gvir, the poster child for Israel’s recent swerve to the hard, hard right and an activist for Jewish extremism. Worrying stuff.
  • After the bizarre saga that is Twitter Blue, Zuckerberg has decided to one-up Elon Musk with a paid subscription plan for Meta platforms for the low, low price of $11.99 a month. Unless you are using Facebook on an iPhone, in which case it’ll be $14.99. This is under the guise of ID-verification systems to help people build their brands. The move makes me glad that I deleted my Facebook account more than a decade ago. I still use Instagram, probably more than I should, and would miss some interactions if I were forced away, but let’s be real: the Instagram timeline is practically useless already. I assume this decision counts on Facebook being indispensable for millions of people and a go-to platform for many types of interactions—something that has annoyed me on more than one occasion. At least Meta is actually going to verify identifications.
  • The “He Gets Us” series of commercials touting Jesus’ humble humanity is bankrolled by a right-wing evangelical organization with donors including the owner of Hobby Lobby. Unsurprising, but it wipes away the patina of respectability and invites questions about motive.

Album of the Week: Kacey Musgraves, Same Trailer Different Park (2013)

Currently Reading: Dan Saladino, Eating to Extinction (2021)

ChatGPT, Again

I had no intention of returning to the ChatGPT discourse. I said my piece back in December, and the conversation seemed to take a histrionic turn around the start of the semester, when the course planning period coincided with a wave of op-eds that treated AI writing as an existential crisis.

Moral panics turn tedious in a hurry and I bore easily.

Happily, I think that the discussion is starting to move in a more compelling and, I hope, nuanced direction. In his column at Inside Higher Ed last week, for instance, Matt Reed suggested that there are two issues at play with AI-generated writing. The one, academic dishonesty and how to catch it, receives the bulk of the attention. The other, disinformation and inaccuracies, has to this point received much less. In other words, the practical considerations about the norms, expectations, and enforcement of academic transactions are taking precedence over the underlying principles. This order of priorities makes sense, of course, as anyone who has worked within the institutions of higher education can tell you, but I also think that it misses how inextricably intertwined these two issues are.

Simply put, I am convinced that ChatGPT specifically, and AI more generally, is a digital and information literacy issue.

Now, I should acknowledge that the stakes involved are more profound outside of the semi-controlled academic context, and at least potentially herald fundamental disruption to existing economic models. Google, for instance, is reportedly treating the chatbot like an existential threat to its hegemony over access to information online. Likewise, AI-generated art is just the latest technology that will allow companies to cut labor costs—why pay artists to create cover art for a book when you can have an intern churn out AI-generated images until you find one you like? As much as I maintain that AI is a tool and the person producing the art is an artist, companies are not likely to compensate the artist as such under these scenarios. But while both of these are ethical issues related to my point about digital literacy, neither is wholly new.

When it comes to writing, AI is a tool, and tools are only as good as their users. A spell-Czech [sic] doesn’t provide any value if one doesn’t have the vocabulary to recognize when it misleads, just as gratuitous use of a thesaurus can lead the writer astray. Predictive text is fine for email, but I find it distracting in other contexts because the program prompts me down particular lines of composition. And, as I put it in the last post on this topic, citation generators will lead you astray if you are unwilling or unable to format the text that they generate.

In this sense, the danger with AI is that people are going to treat a tool for something as a replacement for that thing. But this does not constitute either an existential crisis or a fundamental disruption, despite groups of people treating it as one or the other.

There are myriad reasons that a student might submit an AI-generated essay. Most of these overlap with the reasons a student might purchase an essay or otherwise cheat on assignments, and they need to be addressed as such. However, AI should give educators greater pause because, compared to the other forms of dishonesty, it might give some students the impression that they don’t need to learn the skill in the first place. Chatbots can give the appearance of engaging with a historical figure, but they do not actually let you converse with that person any more than the Metaverse can allow you to watch Mark Antony debate in Rome in 32 BCE. That superficial engagement risks drawing people away from the actual substance that would allow the participant to see how the AI turns unredeemed racists into apologists for their heinous beliefs, or to recognize that seeing Antony debate in Rome in 32 BCE would be quite a feat because he was in Egypt at the time, gearing up for war with Octavian.

On a whim, I decided to ask ChatGPT why students should avoid using the AI to write papers. This was what it produced:

I followed that prompt with a question about whether AI could help students with their writing:

I received a slightly more enthusiastic response when I directly inverted the original prompt, but it still framed AI as a tool that can make writing easier or more efficient. At my most cantankerous, I dislike several of these uses—text summarization assumes a single viable reading, which simply isn’t true and which is also my problem with services like Blinkist, and I think that text generation will create pathways that guide how the person writes and thinks about a topic—but I could make similar arguments about writing being shaped by whatever we’re reading, or about simple reliance on the first definition of a word found in a dictionary. As I said in my original post, if someone were to use AI as a tool and produce a quality paper, either without any further intervention or by editing and polishing the text until it met the standards, that paper would meet my criteria for what I want my students to achieve in the class. This process would not be my preference, but the student would have guided the program through numerous rounds of revision, much as they would draft and re-draft any paper that they wrote themselves—so much so that it would be easier to just write the paper, in fact. I doubt that a truly revolutionary thesis could be developed that way, but the student would have demonstrated their mastery of the course material and a sensitive enough understanding of writing practices to know that the paper met the standards on my rubric—grammar might be easy to accomplish, but the other categories not so much.

In fact, the arrival of AI makes it all the more important for students to learn skills like reading, writing, and, especially in my discipline, historical literacy. To do this, though, I think it is a mistake to issue blanket prohibitions or build assessment as though it does not exist. Rather, I want students to understand both why AI is not a great choice and what its limitations are, which requires steering into AI, at least a little bit.

This semester I am planning two types of activities, both of which are similar to the suggestions made in an opinion piece published today in Inside Higher Ed.

I scheduled a week for my first-year seminar to address their first big writing assignment. The students have no reading this week, during which they will be working on the drafts of their first paper, due on Friday. In the two class periods earlier in the week, I am going to have them complete an exercise using ChatGPT in their groups for the semester. On Monday, the students will work with ChatGPT to produce papers about the readings that we have covered to this point in the class, sharing the results of the exercise with me. Then they will be charged with offering a critical evaluation of the generated text; on Wednesday we will share and discuss the critiques as a class, which will segue into a discussion of what makes writing “good.”

Students in my upper-division courses will do a similar exercise. As their first essays approach, I am going to provide students with essays produced by ChatGPT using the same prompts, along with my essay rubric. Their task will be to “mark” the ChatGPT essays.

The goal is the same in both cases: to remind students that AI has severe limitations and cannot replace their unique thoughts. Further, I aim to engage the students as both writers and editors, since I see the latter skill as an essential part of the writing process.

I don’t want to suggest prescriptive advice here, given that my class sizes and teaching mandates allow me to pursue some of these options. But the ChatGPT discourse has made me even more convinced that it is necessary to teach basic, foundational, transferrable skills that will empower students to engage responsibly with the world in which they live.

Weekly Varia no. 11, 01/28/23

This was one of those weeks when it felt as though I got nothing done. Everything takes too much time, and then I am pulled in too many directions at once. This is the story of most semesters, if I’m being honest. So I didn’t manage to finish either my academic book for the week or any of the four draft posts in various stages of completion for this site, and I am trying to resist adding anything else to my plate. At this point I would like to focus on making more time for the things that I’m already doing. After all, as Oliver Burkeman argued in Four Thousand Weeks and the late Randy Pausch talks about in his time management lecture, our time is finite, so we should pay more attention to how we spend it. Squeezing out every last ounce of efficiency or sacrificing sleep (as I have done in the past) on the altar of rat-race culture is unsustainable, and it means enjoying life less in the meantime.

Admittedly, I am very bad at this. I have too many interests and a bad habit of saying yes to things before considering how much time they will take, but I now recognize this as an issue. I have more thoughts on these issues and their intersection with academic hobbies and living to work, but I’ll save them for a subsequent post. For now, just a range of links from the week.

This week’s varia:

Album of the Week: Amanda Shires, My Piece of Land

Currently Reading: Brandon Sanderson, Tress of the Emerald Sea; Rabun Taylor, Roman Builders

What is “the college essay,” or ChatGPT in my classroom

Confession: I don’t know what is meant by “the college essay.”

This phrase has become shorthand over the past few weeks for a type of student writing in the discussion, which I touched on in a Weekly Varia, about the relationship between college classes and AI programs like ChatGPT, which launched in November. These programs produce a block of unique text that imitates the type of writing requested in a prompt. In outline, this input/output process mimics what students do in response to prompts from their professors.
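
To make that concrete: the toy sketch below is nothing like ChatGPT under the hood (which uses a massive neural network rather than simple word counts), but it captures the basic move of a language model, namely producing fluent-looking text by assembling statistically plausible next words with no check on whether the result is true. Everything in it (the tiny corpus, the function names) is illustrative only.

```python
import random
from collections import defaultdict

# Toy bigram "language model": record which words follow which in a corpus,
# then generate text by repeatedly sampling a plausible next word.
corpus = (
    "the fall of the roman republic was caused by internal conflict "
    "economic troubles and military defeats the republic collapsed "
    "because ambitious generals commanded armies loyal to themselves"
).split()

# Count the words observed after each word.
followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

def generate(seed: str, length: int = 15) -> str:
    """Chain statistically plausible next words; nothing checks the facts."""
    words = [seed]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:  # dead end: no observed follower
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Scale that idea up by many orders of magnitude and the output becomes fluent across whole essays, but the underlying indifference to truth remains.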

The launch of ChatGPT has led to an outpouring of commentary. Stephen Marche declared in The Atlantic that the college essay is dead and that humanists who fail to adjust to this technology will be committing soft suicide, which followed on from a post earlier this year by Mike Sharples declaring that this algorithm had produced a “graduate level” essay. I have also seen anecdotal accounts of professors who have caught students using ChatGPT to produce papers, along with concern about processing this as an honor code violation, both because the technology is not addressed explicitly in school regulations and because the professors lacked concrete evidence that it was used. (OpenAI is aware of these concerns, and one of their projects is to watermark generated text.) Some professors have suggested that this tool will give them no choice but to return to in-class, written tests, a format rife with inequities.

But amid these rounds of worry, I found myself returning to my initial confusion about the nature of “the college essay.” My confusion, I have decided, stems from the phrase being an amorphous, if not totally empty, signifier that generally refers to whatever type of writing a professor thinks his or her students should be able to produce. If Mike Sharples’ hyperbolic determination that the sample produced in his article is a “graduate level” essay is any guide, these standards can vary quite wildly.

For what it is worth, ChatGPT is pretty sure that the phrase refers to an admissions personal statement.

When I finished my PhD back in 2017, I decided that I would never assign an in-class test unless there was absolutely no other recourse (i.e. if someone above me demanded that I do so). Years of grading timed blue-book exams had convinced me that these exams were a mismatch for what history courses were claiming to teach, while a combination of weekly quizzes that the students could retake as many times as they wanted (if I’m asking the question, I think it is worth knowing) and take-home exams would align better with what I was looking to assess. This also matched my pedagogical commitment to writing across the curriculum. The quizzes provided accountability for the readings and attention to the course lectures, as well as one or more short-answer questions that tasked the students with, basically, writing a thesis, while the exams had the students write two essays, one from each of two sets of questions, which they were then allowed to revise. Together, these two types of assignments allowed the students to demonstrate both their mastery of the basic facts and details of the course material and the higher-order skill of synthesizing material into an argument.

My systems have changed in several significant ways since then, but the purpose of my assignments has not.

First, I have been moving away from quizzes. This change has been a concession to technology as much as anything. Since starting this system on Canvas, I have moved to a job that uses Blackboard, where I have not been able to find an easy system for grading short-answer questions. I still find these quizzes a valuable component of my general education courses, where they can consist entirely of true/false, multiple choice, fill-in-the-blank, and other types of questions that are automatically graded. In upper-level courses, where I found the short-answer questions to be the most valuable part of the assignment, I am simply phasing them out.

Second, whether as a supplement to or in lieu of the quizzes, I have started assigning a weekly course journal. In this assignment, the students are tasked with choosing from a standard set of prompts (e.g. “what was the most interesting thing you learned this week?,” “what was something from the course material that you didn’t understand this week? Work through the issue and see if you can understand it,” “what was something that you learned this week that changes something you previously wrote for this course?”) and then writing roughly a paragraph. I started assigning these journals in spring 2022, and they quickly became my favorite thing to grade because they are a low-stakes writing assignment that gives me clear insight into what the students have learned from my class. Where the students are confused, I can also offer gentle guidance.

Third, I have stopped giving take-home exams. I realized at some point that, while take-home exams were better than in-class exams, my students were still producing exam-ish essay answers, and I was contributing to this problem in two ways. First, two essays were quite a lot of writing to complete well in the one week that I allotted for the exam. Second, by calling it an exam, I encouraged most students to treat it as only a marginal step away from the in-class exam, where one is assessed on having the recall and in-the-moment agility to produce reasonable essays in a short period of time.

What if, I thought, I simply removed the exam title and spread the essays out over multiple paper assignments?

The papers I now assign actually use some of the same prompts that I used to assign on exams, big questions in the field of the sort that you might see on a comprehensive exam, but I now focus on giving the students tools to analyze the readings and organize their thoughts into good essays. Writing, in other words, has become an explicit part of the assignment, and every paper is accompanied by a meta-cognitive reflection about the process.

Given this context, I was more sanguine about ChatGPT than most of the commentary I had seen, but, naturally, I was curious. After all, Sharples had declared that a piece of writing it produced was graduate level and Stephen Marche had assessed it lower, but still assigned it a B+. I would have marked the essay in question lower based on the writing (maybe a generous B-), and failed it for having invented a citation (especially for a graduate class!), but I would be on firmer footing for history papers of the sort that I grade, so I decided to run an experiment.

The first prompt I assigned is one that will, very likely, appear in some form or another in one of my classes next semester: “assess the causes underlying the collapse of the Roman Republic and identify the most important factor.” I am quite confident in assigning the AI a failing grade.
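
For anyone who wants to rerun this kind of experiment, the sketch below shows one way to script it against OpenAI’s API rather than the web interface I used. It is a minimal sketch under stated assumptions: the official openai Python package (v1.x) is installed, an OPENAI_API_KEY environment variable is set, and the model name is illustrative; none of this reflects how I actually generated the essays.

```python
# Minimal sketch of scripting the essay experiment via OpenAI's API.
# Assumptions: the official `openai` package (v1.x) is installed and
# OPENAI_API_KEY is set in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Assess the causes underlying the collapse of the Roman Republic "
    "and identify the most important factor."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # then grade it against the rubric by hand
```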

There were multiple issues with ChatGPT’s submission, but I did not expect its most obvious fault. The following text appeared near the end of the essay.

Vercingetorix’ victory was, I’m sure, quite a surprise for both him and Julius Caesar. If I had to guess, the AI conflated the fall of the Roman Republic with the fall of the Roman Empire, taking the talking points for the Empire and applying them to names from the time of the Republic. After all, ChatGPT produces text by assembling words without understanding the meaning behind them. Then again, this conflation appears in any number of think-pieces about the United States as Rome, too.

But beyond this particular howler, the produced text has several critical issues.

For one, “Internal conflict, economic troubles, and military defeats” are exceptionally broad categories, each of which could set a direction for the paper, but together they become so generic as to obscure any attempt at a thesis. “It was complex” is a general truism about the past, not a satisfactory argument.

For another, the essay lacks adequate citations. In the first attempt, the AI produced only two “citations,” both listed at the end of the paper. As I tell my students, listing sources at the end isn’t the same thing as citing where you are getting the information. Upon some revision, the AI did manage to provide some in-text citations, but not nearly enough and not from anything I would have assigned for the class.

A second test, using a prompt I did assign based on Rudyard Kipling’s The White Man’s Burden, produced similarly egregious results. The essay had an uninspired but mostly adequate thesis, at least as a starting point, but then proceeded to use three secondary sources, none of which existed in the form in which they were cited. Unless the substantial C.V. of the well-published scholar Sarah C. Chambers is missing a publication on a topic outside her central areas of research, she hasn’t argued what the paper claims she did.

A third test, about Hellenistic Judea, cited an irrelevant section of 1 Maccabees and a chapter in the Cambridge History of Judaism, albeit one about Qumran, neither from the right volume nor with the right information for the citation. You get the idea.

None of these papers would have received a passing grade from me based on citations alone even before I switched to a specifications grading model. And that is before considering that the AI does even worse with metacognition, for obvious reasons.

In fact, if a student were to provide a quality essay produced by ChatGPT that was accurate, had a good thesis, and was properly cited, and then explained the process by which they produced the essay in their metacognitive component, I would give that student an A in a normal scheme or the highest marks in my specs system. Not only would such a task be quite hard given the current state of AI, but it would also require the student to know my course material well enough to identify any potential inaccuracies and have the attention to detail to make sure that the citations were correct, to say nothing of demonstrating the engagement through their reflection. I don’t mind students using tools except when those tools become crutches that get in the way of learning.

In a similar vein, I have no problem with students using citation generators, except that most don’t realize that you shouldn’t put blind faith in the generator. You have to know both the citation style and the type of source you are citing well enough to edit whatever it gives you, which itself demonstrates your knowledge.

More inventive teachers than I have been suggesting creative approaches to integrating ChatGPT into the classroom, whether as a producer of counterpoints or by giving students opportunities to critique its output, not unlike the exercise I did above. I have also seen the suggestion that it could be valuable for synthesizing complex ideas into a digestible format, though I think this use loses something by treating a complex text as though it has only one possible meaning. It also produces a reasonable facsimile of discussion questions, though it struggles to answer them in a meaningful way.

I might dabble with some of these ideas, but I also find myself inclined to take my classes back to the basics. Not a return to timed, in-class tests, but doubling down on simple, basic practices: opening students up to big, open-ended questions, carefully reading sources (especially primary sources) and talking about what they have to say, and articulating an interpretation of the past based on those sources—all the while being up front with the students about the purpose behind these assignments.

My lack of concern about ChatGPT at this point might reflect how far my assessment has strayed from the norm. I suspect that when people refer to “the college essay,” they’re thinking of the one-off, minimally-sourced essay that rewards superficial proficiency of the sort that I grew frustrated with. The type of assignment that favors expedience over process. In this sense, I find myself aligned with commentators who suggest that this disruption should be treated as an opportunity rather than an existential threat. To echo the title of a recent post at John Warner’s Substack, “ChatGPT can’t kill anything worth preserving.”