For decades, technologists imagined teaching machines. Instead, although edtech is indeed permeating classrooms, human teachers have remained in demand. And then came generative AI…
At Rest of World, Laura Rodríguez Salamanca explores AI's impact on classrooms in rural Colombia since Meta added AI bots to WhatsApp, Instagram, and Facebook, making copying and pasting answers frictionless. The result: first, a big leap in the quality of homework, then kids failing exams.
From a tiny set of conversations, it seems little different in the UK. Underlying all this is one of those existential questions: what is education for? For many of today's kids, it's just a series of hoops to jump through rather than something to love for itself. The result, says a teacher friend, is enormous pressure on kids from all sides.
“Kids are breaking under the pressure,” she says, adding that they are burdened with far more work than in previous generations. “There’s much less time for discussion or being a human. It’s all about learning to write an essay for maximum marks.” Small wonder if they are attracted to shortcuts.
A university lecturer tells me that at his institution there’s a general argument that AI is part of the world and students should know how to use it productively, but little guidance on acceptable use. Recently, he tried letting students use AI as a critical thinking exercise, focusing on a historical event whose cause is not definitively known. The results were disappointing, as he found it hard to get the students past what the AI said. One student did read a paper the chatbot recommended, but lacked the basic textbook knowledge to recognize that the paper was wrong.
“It’s an ongoing problem, and not that different from Google Scholar or PubMed,” he says.
Thirty years ago, there was a plagiarism panic, as students discovered all the material they could copy from the Internet at large. Kids I spoke to then sounded just like an annoyed university student friend now: people who use these shortcuts are cheating themselves out of their education.
There is some research to support this view. At the MIT Media Lab, Nataliya Kos'myna finds that using generative AI for essay-writing correlates with lower engagement, to the point that users "struggled to accurately quote their own work".
Of course, even before that, student clubs kept copies of old exams, or cribbed from the translations readily found in library stacks. My teacher friend thinks the difference is significant: “They were still engaging with the material to a degree you don’t have to with ChatGPT”. I tell her the story that sparked my interest at the time: a US professor had received a paper about a student’s religious faith and their struggle when deciding to have an abortion – submitted by a male student.
As a counter, she points out that that wave of copying led to services like Turnitin, now long used to check for plagiarism. "The Internet has made plagiarism a lot easier to detect." But, she says, chatbots' output passes the plagiarism checkers, which are now in an arms race to detect generative AI while it keeps improving.
My university student friend nonetheless finds fellow students using chatbots to generate text, which is against her university's rules (they do allow students to use chatbots to find citations). In her observation, students are more likely to get away with it for short answers, whereas longer ones are more likely to get flagged. Similarly, in small seminars it's harder to use chatbot output without being caught; it's easier to get away with it in larger classes. She also sees it more in subject areas like business, accounting, and economics, where the degree is meant to lead directly to a job.
She finds it surprising. “I don’t understand the point in an academic setting. Why waste the opportunity when you’re the one who will have to pay the student loans?” In her only attempt, she tried to get the chatbot to generate vocabulary flash cards: “There was missing information and some were wrong.” She found it quicker to make her own.
It’s harder for her to suggest what universities should do about it. “There’s a drought of [valuing learning for its own sake] in general. A lot go only because their parents expect them to.”
Like the plagiarism detectors, teachers are trying to adapt. In the Rest of World article, Rodríguez Salamanca profiles a teacher who now builds classroom debates around hyperlocal topics unlikely to feature in large language models. In a UK university setting, however, assessing students based on oral debate poses problems: the potential for bias, the need to accommodate non-native speakers and those who have come out of different education systems, and differing cultural norms around classroom behavior. After covid began, many exams shifted to open book; the arrival of chatbots has led my university contact to try to set questions that force the use of multiple sources and that are intended to be things that LLMs don't handle well.
“We will have to drive more person-to-person,” says the secondary school teacher, citing an example seen on social media of a teacher who gave students a practice exam and time for them to read it together and discuss it before setting them to work on it. “There are implications for workload. But if you can do a lot of routine homework as automated and checked, then you can focus on the meat in the classroom. It makes it a more important place.”
Illustrations: “The Schoolroom”, by Henry Raleigh (from the Smithsonian American Art Museum).
Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.