Opinion
What if generative AI is not here to stay?
This is an opinion piece. The text expresses the author’s views.
Education should be based on the assumption of an open future, not a determined one where one technology or another is inevitable.
We’re told AI is here to stay. Johan Berg Pettersen writes that we should “accept language models” and that “everyone who writes should use AI”. Silvija Seres thinks we are entering a new paradigm: Norwegian universities that have started to adopt AI as learning tools are “showing the way”. Michail Giannakos asks us to acknowledge that “AI tools are already tightly integrated” in education and are changing how we teach and assess students.
We need to remain open to multiple futures. It’s understandable why AI feels inevitable, but history is littered with ‘inevitable’ technologies that weren’t. Generative AI has been compared to the Concorde. Supersonic commercial flight seemed here to stay, until it became clear it wouldn’t work at scale: too costly, too risky, too environmentally harmful.
There are many other cases of technological retreat to choose from if one wants illuminating parallels with generative AI. Should we believe that AI is unlike those, that it will be immune to the economic and political uncertainties that currently surround it?
I’m not suggesting that we should positively expect that AI will disappear soon or bet that it will. But we should prepare for an open future where either could happen. Rushing to transform how we teach and assess students — beyond the bare minimum required to discourage large-scale cheating without turning into the police — could be a mistake.
We’re already conceding a lot to generative AI, and we’re being very tolerant of its flaws. Are AIs inaccurate? Do they hallucinate? A common response is: “So do we!” Tu quoque, sapiens. This is a fallacy: the fact that people also make mistakes does not make mistakes by AI more acceptable. AI doesn’t need to be perfect, but current generative AI is scaling and automating error in ways we don’t fully control or understand. There is no guarantee that larger models become more reliable, quite the opposite. But as they grow larger, they do appear more capable, and not only at writing superficially good prose.
We’re tempted to think: if AI has gotten better than most of us at X (writing, coding, etc.), there’s no point in us doing X without AI, or in having students do X in class or in exams without AI. If we want to do things without AI, we need more classroom discussions, oral exams, and school exams. If AI is here to stay, then one way forward is to go backward: as the technology advances, we retreat. I’m not convinced this is the best way to go. I’m all for in-person learning and assessment, but let’s not readopt them in panic, for the wrong reasons, and at the expense of other forms of learning and assessment.
This retreat is where we’re conceding too much to AI. Assignments and exams that don’t require students to generate text may prevent cheating with AI, but they will also deprive students who would not use AI of the opportunity to write. This is too high a price to pay, and not just for us in the humanities: replace ‘write an essay’ with ‘write code’, ‘analyze data’, or ‘prove a theorem’. If we let essays go because of AI, what’s next in line?
Why would students who have ethical or other objections to AI — some come to our classes eager to discuss all this — enrol in programs that won’t give them a choice to learn to do things without AI? Aren’t we creating the conditions for young AI skeptics to self-select out of an academic system that needs them as much as it needs keen techies?
But given that generative AI is already here, whether or not it’s here to stay, what do we do? It would seem reasonable to follow a few general principles when possible:
- We should not stop doing something only because AI can do it better than most of us.
- Everything we do with AI (that is humanly feasible) we should be able to do without it.
- We should develop our ability to verify and quality-check the outputs of generative AI.
One objection here could be that these principles will prevent us from using AI to its full potential, to do precisely those things that we could not do without it. But this confuses augmentation with dependency: we use GPS to navigate unfamiliar places, but we still teach children to read maps. If generative AI’s permanence is uncertain, maintaining our capacity to do what AI does is prudent risk management, not technological conservatism.
The costs of being wrong about generative AI’s supposedly inevitable future far exceed the costs of maintaining ‘redundant’ cognitive capabilities. And if we’re supposed to train students to verify and quality-check the outputs of generative AI, in many cases that will not be too different from teaching them to solve those same problems first-hand.
It doesn’t seem smart to prepare only as if generative AI is here to stay, and not also as if it could disappear. We should continue to teach fundamental skills, even as AI appears to do well at writing, coding, data analysis, etc., and we should also assess students on those skills: that cannot always be done orally, in school exams, or in shorter home exams under the benevolent eye of a mild digital surveillance system.
Perhaps training and using frontier models will become too costly. Perhaps the larger and better models will be put to more remunerative use than academic writing, and we’ll be left only with smaller and relatively uninteresting systems. Or regulations will limit generative AI use in many contexts. Or newer technologies will emerge that make generative AI redundant. Students will need to be prepared for those futures as well.
We’re now retreating to positions where that kind of robustness is undermined. This is problematic whether generative AI is here to stay or not, but it will have the most serious consequences if it’s not.