Everyone is talking about students’ obligations in light of the availability of Generative A.I. (Gen A.I.) services (such as ChatGPT, Bard, etc.) that can write essays, exam answers, and so on.
But what about professors’ obligations?
People don’t seem to be talking about that very much, or at least not as much. That might be out of an overabundance of respect for Academic Freedom (which in most places allows professors a pretty free hand in how they run their classrooms). Or it might be out of a simple recognition that professors are a hard bunch to “manage” — it’s just hard to tell them what to do, generally. Or it might just be that Gen A.I. poses such obvious challenges in terms of student behaviour that we miss entirely (or, OK, in fairness, merely understate) the extent to which the wide availability of Gen A.I. could — and perhaps should — change how professors do their jobs.
So, what are a professor’s obligations with regard to Gen A.I. and students?
Obligation #1 for professors, I think, is to learn about the technology. Read about it. Try it out. Of course, the idea of even trying it (as obvious as that may seem) does have opponents. The resistance isn’t grounded in technophobia, but in a recognition that when you use a Gen A.I. tool, you are feeding it data — you are helping train it. And if you’re worried about Gen A.I., and think it generally morally problematic, you may not want to “contribute” to building it, even fractionally. It’s a reasonable concern, but one I think is outweighed pretty substantially by other considerations. In particular, I don’t think instructors can effectively design “ChatGPT-proof” assignments if they haven’t seen, first-hand, what ChatGPT and other Gen A.I. tools can do.
Beyond educating themselves about Gen A.I., what about educating their students about it? Six months ago, I heard colleagues saying, effectively, “Let’s not mention it, and hope students don’t already know about it.” That might have been plausible back then, but it’s not now. So, yes, profs should indeed talk to their students, both about what the relevant standards of academic integrity are, and about how students can safely use Gen A.I. tools. On that, see my postings: Part 1 and Part 2. Of course, some instructors may disagree with MY take on the issue. That’s fine. But that just bolsters my point: profs should talk to their own students about their own expectations.
Professors also have an obligation to assign evaluations that aren’t just ChatGPT bait: that is, they shouldn’t be assigning essay topics that basically beg for A.I.-assisted plagiarism (as some do). In this regard, professors need to be thoughtful (and to educate themselves about the capacities of Gen A.I. — see my first point above). Gone are the days when a professor could assign anything like “write a 1,000-word essay on any topic related to the course.” Even “write a 1,000-word essay on X” (where “X” is any reasonably broad topic) is a mistake. ChatGPT (the only Gen A.I. I’ve played with) will make short work of those. And students will know this, and some will be tempted to take shortcuts. Professors shouldn’t tempt their students. Broad essay questions like that were always pretty lazy, in my view, but now they’re utterly unacceptable.
Oh, and at-home tests are to be avoided at all costs, except in the case of sophisticated instructors who have put serious, serious thought into either building ChatGPT-proof questions or building permitted use of Gen A.I. into the ground rules for the test.
More generally, professors need (and I’m far from alone in saying this) to take a serious look at the grading schemes in their Course Outlines, to make sure that the evaluative instruments (tests, essays, etc.) they assign are really measuring what they intend to measure, and really assuring the learning outcomes they are supposed to. And they should do this keeping in mind the background assumption that some percentage of their students will use Gen A.I. if given the chance.
With regard to designing better evaluations and building better grading schemes, it’s also worth considering whether professors have an obligation to go beyond the do-it-yourself method, and to engage with relevant resources at their schools. Some schools have “Learning & Teaching” offices, for example. Others have centralized Academic Integrity offices. Either or both might be a source of insight into how to ChatGPT-proof assignments and grading schemes. Of course, such resources aren’t available at some schools, and some faculty members genuinely won’t need them.
A final note: some people have floated the idea that instructors have, in effect, no obligations with regard to student use of generative A.I. One professor I know of recently tweeted something to the effect that we should all just ignore ChatGPT (etc.) because, after all, our mission is to teach, and those students who use Gen A.I. are simply penalizing themselves by refusing to learn. I get the temptation, but ultimately I think this is untenable. A professor’s job is not just to teach, but also to assess students’ success at learning. Allowing rampant, undirected use of Gen A.I. makes a mockery of grading systems. Of course, some people are OK with that. Most aren’t.
The above is just a start. I’d value input from colleagues (other professors) and from students. Feel free to leave feedback or suggest additional or different obligations, in the Comments section below.