I had no intention of returning to the ChatGPT discourse. I said my piece back in December, and the conversation seemed to take a histrionic turn around the start of the semester, when the course planning period coincided with a wave of op-eds that treated AI writing as an existential crisis.
Moral panics turn tedious in a hurry, and I bore easily.
Happily, I think that the discussion is starting to move in a more compelling and, I hope, nuanced direction. In his column at Inside Higher Ed last week, for instance, Matt Reed suggested that there are two issues at play with AI-generated writing. One, academic dishonesty and how to catch it, receives the bulk of the attention. The other, disinformation and inaccuracy, has to this point received much less. In other words, practical considerations about the norms, expectations, and enforcement of academic transactions are taking precedence over the underlying principles. This ordering of priorities makes sense, of course, as anyone who has worked within the institutions of higher education can tell you, but I also think that it misses how inextricably intertwined these two issues are.
Simply put, I am convinced that ChatGPT specifically, and AI more generally, is a digital and information literacy issue.
Now, I should acknowledge that the stakes involved are more profound outside of the semi-controlled academic context, and at least potentially herald fundamental disruption to existing economic models. Google, for instance, is reportedly treating the chatbot like an existential threat to its hegemony over access to information online. Likewise, AI-generated art is just the latest technology that will allow companies to cut labor costs—why pay artists to create cover art for a book when you can have an intern churn out AI-generated images until you find one you like? As much as I maintain that AI is a tool and the person producing the art is an artist, companies are not likely to compensate the artist as such under these scenarios. But while both of these are ethical issues related to my point about digital literacy, neither is wholly new.
When it comes to writing, AI is a tool, and tools are only as good as their users. A spell-Czech [sic] doesn’t provide any value if one doesn’t have the vocabulary to recognize when it misleads, just as gratuitous use of a thesaurus can lead the writer astray. Predictive text is fine for email, but I find it distracting in other contexts because the program prompts me down particular lines of composition. And, as I put it in the last post on this topic, citation generators will lead you astray if you are unwilling or unable to format the text that they generate.
In this sense, the danger with AI is that people are going to treat a tool for something as a replacement for that thing. But this does not constitute either an existential crisis or a fundamental disruption, despite groups of people treating it as one or the other.
There are myriad reasons that a student might submit an AI-generated essay. Most of these overlap with the reasons a student might purchase an essay or otherwise cheat on assignments, and they need to be addressed as such. However, AI should give educators greater pause because, compared to the other forms of dishonesty, it might give some students the impression that they don’t need to learn the skill in the first place. Chatbots can give the appearance of engaging with a historical figure, but they do not actually let you converse with that person any more than the Metaverse can allow you to watch Mark Antony debate in Rome in 32 BCE. But that superficial engagement risks drawing people away from the actual substance that would allow the participant to see how the AI turns unredeemed racists into apologists for their heinous beliefs, or to recognize that seeing Antony debate in Rome in 32 BCE would be quite a feat because he was in Egypt gearing up for war with Octavian at that time.
On a whim, I decided to ask ChatGPT why students should avoid using the AI to write papers. This was what it produced:
I followed that prompt with a question about whether AI could help students with their writing:
I received a slightly more enthusiastic response when I directly inverted the original prompt, but still one that framed AI as a tool that can make writing easier or more efficient. At my most cantankerous, I dislike several of these uses—text summarization assumes a single viable reading, which simply isn’t true and is also my problem with services like Blinkist, and I think that text generation will create pathways that guide how the person writes and thinks about a topic—but I could make similar arguments about writing being shaped by whatever we’re reading, or about simple reliance on the first definition of a word found in a dictionary. As I said in my original post, if someone were to use AI as a tool and produce a quality paper, either without any further intervention or by editing and polishing the text until it met the standards, that paper would meet my criteria for what I want my students to achieve in the class. This process would not be my preference, but the student would have guided the program through numerous rounds of revision, much as they would draft and re-draft any paper that they wrote themselves. So much so, in fact, that it would be easier to just write the paper. I doubt that a truly revolutionary thesis could be developed that way, but the student would have demonstrated mastery of the course material and a sensitive enough understanding of writing practices to know that the paper met the standards on my rubric—grammar might be easier to accomplish, but the other categories not so much.
In fact, the arrival of AI makes it all the more important for students to learn skills like reading, writing, and, especially in my discipline, historical literacy. To do this, though, I think it is a mistake to issue blanket prohibitions or build assessment as though it does not exist. Rather, I want students to understand both why AI is not a great choice and what its limitations are, which requires steering into AI, at least a little bit.
This semester I am planning two types of activities, both of which are similar to the suggestions made in an opinion piece published today in Inside Higher Ed.
I scheduled a week for my first-year seminar to address their first big writing assignment. The students have no reading that week; instead, they will be working on drafts of their first paper, which is due on Friday. In the two class periods earlier in the week, I am going to have them complete an exercise using ChatGPT in their semester-long groups. On Monday, the students will work with ChatGPT to produce papers about the readings that we have covered to that point in the class, sharing the results of the exercise with me. Then they will be charged with offering a critical evaluation of the generated text. On Wednesday, we will spend time sharing and discussing the critiques with the class, which will segue into a discussion of what makes writing “good.”
Students in my upper-division courses will do a similar exercise. As their first essays approach, I am going to provide students with essays produced by ChatGPT using the same prompts, along with my essay rubric. Their task will be to “mark” the ChatGPT essays.
The goal is the same in both cases: to remind students that AI has severe limitations and cannot replace their unique thoughts. Further, I aim to engage the students as both writers and editors, since I see the latter skill as an essential part of the writing process.
I don’t want to suggest that any of this is prescriptive advice, given that my class sizes and teaching mandates allow me to pursue some of these options. But the ChatGPT discourse has made me even more convinced that it is necessary to teach basic, foundational, transferable skills that will empower students to engage responsibly with the world in which they live.