Making the best use of ChatGPT and other AI writing tools
ChatGPT and its “artificial intelligence” relatives are here to stay – in our lives, in our students’ lives, and in our classrooms. Will AI tools advance our teaching, or complicate it? Both, most likely. As instructors, how should we use ChatGPT, and can we harness it in service to effective and ethical teaching?
ChatGPT and its ilk are both entirely new and same-old, same-old. As instructors, we’ve never before dealt with a tool that can produce human-sounding text, in an instant, in response to almost any question a student might ask. Perhaps it’s no surprise that some folks are suggesting that the written assignment is dead. But the way we teach has been disrupted by new tools for centuries: the printing press, the textbook, the slide rule, the calculator. I’m old enough to remember the angst over the ubiquity of cheap handheld calculators – but mathematics education survived. We’ll survive ChatGPT too; and if we think carefully, we can do more than survive: we can teach better.
Should AI tools be incorporated into teaching?
We could, of course, simply ban students’ use of AI tools. This may be tempting, but a ban would come with substantial costs. From the student side, an AI ban would mean that students don’t learn how to use these (increasingly ubiquitous) tools well. From the instructor side, every ban comes with a need for policing. Will you redesign assignments so AI use isn’t possible? How? Or will you threaten consequences for students who violate the ban? If the latter, how will you detect violations (existing “AI detectors” work very poorly), and how will you punish them? Is this really how you want to spend your time?
The path forward, I think, is to incorporate AI tools into our teaching – with care to do so effectively and ethically. This needs to start with both instructors and students understanding how these tools work, and thus what they can and can’t do. I wrote about this in August, in the context of using ChatGPT in your own writing. To summarize: ChatGPT and its relatives are “large language models” – statistical models that work rather like powered-up versions of the predictive text feature on your mobile phone. They’re very good at producing text that sounds human-written, because they’ve digested a huge corpus of writing and have modelled which words and phrases tend to follow which other ones. But (so far) they don’t know whether the things they say are true, and so frequently “hallucinate” (or more plainly, just make things up – facts, citations, you name it).
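If the “powered-up predictive text” framing seems abstract, the core idea can be illustrated with a toy sketch. This is emphatically not how ChatGPT is built (real models use neural networks trained on vastly more text, predicting over sub-word tokens), but a simple word-pair counter captures the spirit of “predict what tends to come next”:

```python
# A toy next-word predictor: count which words follow which in a tiny
# corpus, then suggest the most frequent follower. This is an
# illustrative simplification, not ChatGPT's actual architecture.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# For each word, tally how often each other word follows it.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("sat"))  # prints "on" – "sat" is always followed by "on" here
```

Note what this toy model shares with its giant relatives: it produces plausible continuations without any notion of whether they’re true. Scale the corpus and the statistics up enormously and you get fluent prose – but the same indifference to truth, which is exactly why the output “hallucinates”.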
Understanding responsibilities when using AI tools
Teaching effectively with AI has to start, then, with making sure students understand their responsibility with its output. An AI tool’s output must be seen as a set of writing suggestions, for the user (in this case, the student) to consider – and the user bears responsibility for the decision to accept a suggestion. This is true for suggestions from classmates, roommates, and clever aunts; it’s equally true for AI. A useful exercise, then, is to lean into this: assign students to ask ChatGPT something, and to fact-check it. Ask them to identify one piece of content that’s correct in ChatGPT’s output, and indicate how they confirmed its correctness; and one piece of content that’s incorrect or misleading, and how they determined that. This kind of assignment can alert them to the dangers of AI tools, equip them to use them well, and get them to practice their critical thinking skills – a win-win-win.
We can generalise that suggested assignment. Coupling AI use with student reflection should be a powerful approach for teaching writing, subject-matter content, research skills, and critical thinking. I can offer some examples. A student who’s just learning disciplinary writing norms might give ChatGPT an informally written draft and ask it for a version in the style of papers in their field – and the student can be asked to annotate the product, explaining why ChatGPT might have made the changes it did. (A note of caution, and opportunity, here: AI tools are very good at reproducing writing patterns typical of the academic literature, and many of those patterns are regrettable. Asking students to reflect on that may be a way to restrain their instinct to produce dense, academic-sounding prose.) Alternatively, a student might be asked to revise a ChatGPT-generated passage (you can provide either the passage or a prompt), annotating their revisions to explain why they adjusted style or amended content.
The common thread through these suggestions is asking students to reflect on what the AI is doing, and on what they’re doing in response, and why. You can encourage this reflection by taking advantage of another feature of AI tools: their speed. ChatGPT can produce two or three alternative passages nearly as fast as it can produce one. Asking students to compare alternatives reinforces the idea that AI tools are best seen as idea generators and sources of suggestions, and teaches the value of careful thought about what makes one suggestion better than another. That’s true, of course, for anything students come across, not just for ChatGPT output.
Ethical use of ChatGPT
If you teach with AI tools, you’ll want to think a bit about the ethics of their use; after all, any tool comes with ethical considerations. Users should be made aware that because AI tools draw on published text that includes gender and other biases, those biases are likely to be perpetuated or even amplified in their output. Students might be guided to look for and comment on these biases. Students might be asked to discuss, in an assignment or in the classroom, what kinds of tool use should be disclosed to an instructor, and what kinds needn’t be – spellcheckers? ChatGPT? Chegg? Finally, AI tools have environmental and social costs. Those include the carbon footprint of the computation they require (although the intensity of this cost is debated); the fact that large language models are trained on corpora of work whose authors aren’t compensated or acknowledged; and the psychological toll on people hired to identify hate speech and other toxic content in those corpora. Instructors teaching with AI tools might pay explicit attention to these issues, and consider the possibility of alternative assignments for students who prefer not to use AI tools themselves.
Does all this sound like a challenge for you, the instructor? That’s because it is. As instructors, we have always had to grapple with new tools and new knowledge. AI tools aren’t different in this regard, but they’re going to be important and they’re fascinating to work with. Let’s accept the challenge.