
Teaching With and Around ChatGPT

By Brian Redekopp
Cohort 2022-2023

Introduction

OpenAI’s release of ChatGPT in November 2022 has thrown us, particularly those of us teaching in the social sciences and humanities, into something of a crisis: How can we continue to help students learn through writing in a world where AI can do their writing assignments for them?

Over the past few months I’ve had the privilege of leading ChatGPT workshops with Robert Stephens, Daniel Goldsmith, and Jeffrey Gandell, where the committed faculty here at Dawson have been discussing how to deal with this massive challenge.

Inspired by these conversations, here are some ideas and resources I’ve developed both for coping with and for taking advantage of AI text-generation technology in my courses.

Understanding What LLMs Do

As large language models (LLMs) like ChatGPT rapidly become more and more sophisticated, it’s crucial that students develop a critical understanding of how to use them ethically and effectively. This in turn requires a basic grasp of how they work.

Recently Sarah Allen referred me to an excellent New Yorker article entitled “What Kind of Mind Does ChatGPT Have?” It provides an accessible yet technically informative explanation of LLMs. Here is the text of the full article; in the comments are ideas for class activities to help teach the key concepts.
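
To make the key concept hands-on, the bare-bones idea of next-word prediction can even be demonstrated in a few lines of code. The sketch below (in Python, using a toy word-pair model invented purely for illustration, nothing like the neural networks behind ChatGPT) counts which words follow which in a tiny “training” text and then generates new text by repeatedly predicting a likely next word.

```python
# A toy illustration of next-word prediction: count which word follows which
# in a small "training" text, then generate by repeatedly sampling a likely
# next word. Real LLMs use neural networks trained on vast corpora, but the
# underlying task -- predict the next token from what came before -- is the same.
import random
from collections import defaultdict, Counter

training_text = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat chased the dog around the mat"
)

# Build a table: for each word, count the words that follow it.
follow_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def generate(start_word, length=8):
    """Generate text by sampling each next word in proportion to how often
    it followed the current word in the training text."""
    output = [start_word]
    for _ in range(length):
        candidates = follow_counts[output[-1]]
        if not candidates:
            break
        next_word = random.choices(
            list(candidates.keys()), weights=list(candidates.values())
        )[0]
        output.append(next_word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug and the dog"
```

Students can paste in a paragraph of their own writing as the training text and watch how quickly the output slides into plausible-sounding nonsense, which sets up a discussion of why scale and neural networks make such a difference.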

Conversing with ChatGPT gives the almost irresistible impression that one is dealing with some sort of consciousness, or at least with something that understands what it is saying. But is this really so? According to philosopher John Searle’s famous “Chinese Room” argument, no AI that generates language in the manner of an LLM can properly be said to understand anything. Searle’s argument can serve as a thought-provoking way to introduce or complement class discussions on how ChatGPT works and the ethical and social issues it raises. Here is the argument excerpted from Searle’s original 1980 article, and here is a great short video by the mathematics popularizer Marcus du Sautoy that brings the argument to life. For a deeper dive into the philosophical debate around the argument, this article from the Stanford Encyclopedia of Philosophy provides a nice overview.

Finally, to help students develop a more general level of technological literacy, here are slides for two classes on algorithms. (The first class covers what algorithms are and what they can do; the second class, which is a bit more mathematical, is on their limits.) The slides work in conjunction with du Sautoy’s highly engaging 2017 documentary The Secret Rules of Modern Living: Algorithms.

Enhancing a Course with ChatGPT

There’s no doubt that with its impressive capacity to generate plausible, human-sounding text on virtually any topic in virtually any style, ChatGPT poses a major threat to academic integrity. Yet what makes it such a threat is also what makes it such an exciting pedagogical tool.

One way to use ChatGPT as a study tool is to have it generate answers to test questions and have students critique these answers. In Winter 2023 I tried this in Principles of Math and Logic, a course in Liberal Arts. Students spent a class grading ChatGPT-generated answers to exam review questions (I generated the answers in advance to avoid technical glitches and to ensure they were all grading the same answers). In the next class we discussed their work and I showed them my own evaluation of ChatGPT’s answers (it failed the logic test). Because it prompts discussion of why ChatGPT does certain things well and others badly, this sort of exercise can also reinforce students’ understanding of how LLMs work.
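
For anyone who would rather script that preparation step than copy and paste answers out of the chat window, here is a rough sketch of how it might be done with OpenAI’s Python client. To be clear, this is just one possible setup, not what I actually did: the questions, model name, and file names are placeholders, and it assumes an API key is available in your environment.

```python
# A sketch of batch-generating ChatGPT answers ahead of class so that every
# student grades the same output. Uses the OpenAI Python client (v1.x) and
# assumes the OPENAI_API_KEY environment variable is set. The questions and
# model name below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

review_questions = [
    "Translate 'All philosophers are mortal' into predicate logic.",
    "Is the argument 'If P then Q; Q; therefore P' valid? Explain.",
]

for i, question in enumerate(review_questions, start=1):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    answer = response.choices[0].message.content
    # Save each answer to its own file for printing or posting to the LMS.
    with open(f"answer_{i}.txt", "w") as f:
        f.write(f"Q: {question}\n\nChatGPT's answer:\n{answer}\n")
```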

Another use of ChatGPT with great pedagogical potential is as a tool for role-play. One can prompt the bot to take on the persona of an author, character, historical figure, etc. and then converse with it in order to explore material in a fun, personal, and interactive way. However, given that an LLM performs only as well as its training data permits, ChatGPT may well play its role superficially, or worse, inaccurately. So activities or assignments using this sort of role-play should always include a critical component.
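
For those who want to set up such a role-play outside the regular chat interface, one approach is to pin the persona in a system message and resend the whole conversation on every turn so the character stays consistent. Again, the sketch below is only an illustration using OpenAI’s Python client; the persona, prompts, and model name are placeholders.

```python
# A sketch of persona role-play via the API: a system message fixes the
# character, and the full dialogue is passed back with each new turn.
# Uses the OpenAI Python client (v1.x); assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

conversation = [
    {
        "role": "system",
        "content": (
            "You are René Descartes. Answer in the first person, drawing on "
            "the Meditations, and stay in character even when challenged."
        ),
    }
]

def ask(user_turn: str) -> str:
    """Append the user's turn, get the persona's reply, and keep both
    in the running conversation so later turns have full context."""
    conversation.append({"role": "user", "content": user_turn})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=conversation
    )
    reply = response.choices[0].message.content
    conversation.append({"role": "assistant", "content": reply})
    return reply

print(ask("How can you be certain that you exist?"))
```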

Another way to role-play with ChatGPT is to have students take on a persona themselves. To make such an exercise pedagogically focused, one can first discuss with students the positions, arguments, personality, typical rhetorical moves, etc. of the persona they will take on; an assignment of this form could be graded according to how well the student captures these in a dialogue they create with ChatGPT.

To test the potential of this sort of role-play, I tried impersonating Socrates in a dialogue with ChatGPT about intelligence. One thing led to another, resulting in an elaborate parody of a Platonic dialogue in which ChatGPT displays remarkable philosophical acumen, a sense of humour, and a creepy transhumanist vision of the future. Making the dialogue was a uniquely challenging and enriching creative experience, and it left me feeling much more positive about the impact AI can have on education.

Designing Writing Assignments that Circumvent ChatGPT

While ChatGPT can enhance a course in exciting ways, the problem remains of what to do about the writing we want students to do on their own. There is no easy answer, but part of a solution is to design assignments that decrease both the motivation and the opportunity to resort to AI. Given the serious shortcomings of AI-text detectors like Turnitin’s (see below), it seems to me that writing assignments need to be completed in the classroom, at least in their core aspects. This poses difficult logistical challenges, but it also offers an opportunity to coach students through the process of writing, which is far more important pedagogically than the product they would otherwise (somehow) produce at home on their own.

For my Introduction to Philosophy course last semester I replaced the out-of-class final paper with a two-part final exam. The first part was a short-answer test designed to consolidate understanding of the material. In the next class, students began preparing for the second part, which consisted of writing a philosophical meditation in the style of Descartes. After I explained the format and grading criteria, students spent the rest of the class period writing a practice meditation. (To my pleasant surprise they wrote quite furiously even though this practice version was not for marks; just knowing that it was practice for the real deal was sufficient motivation.) In the following class I returned their meditations with comments and we discussed how to improve. Finally, in the last class of the semester they wrote the graded meditation; this had the same format and grading criteria as the practice version, but in response to a different question, not provided in advance.

My biggest takeaway from this experiment was the importance of practice (the “formative assessment”), not only pedagogically but also as a way of removing the incentive to resort to ChatGPT. Perhaps the integrity of out-of-class writing assignments can be preserved by making them low-stakes practice exercises for in-class evaluations.

Can We Reliably Detect AI-Generated Writing?

Our lives as educators in the age of AI would be so much easier if only we had a tool that could reliably identify AI-generated writing. But current detectors, including Turnitin’s, are easily evaded through paraphrasing, are biased against non-native English writers, and inevitably run the risk of false positives. For more details, here are some slides that provide an overview of the current state of AI-detection technology.

Rising to the challenges posed by ChatGPT requires reflecting on some fundamental pedagogical questions. What does it mean to write? What do we expect students to learn through writing, and is writing the best way to learn it? How can we design courses and assignments to help motivate students to write and think for themselves? If we’re willing to embrace these questions and take advantage of the exciting pedagogical potential of tools like ChatGPT, AI really can be a force for good in education.