In an age of AI, understanding the value of a human is key

Educators will need more than ever to understand the unique value of a human and to perceive large language models as legitimate and useful sources of ideas rather than shortcuts and avenues for cheating, according to tech-focused educational entrepreneur and author Priten Shah.

Shah told University World News that the rapid development of artificial intelligence should prompt a re-evaluation of education’s ‘three Rs’ – away from reading, writing and arithmetic and towards “rhetoric, relationships and reasoning”.

This is because AI will undertake so much textual analysis and writing in the future that education will need to ask: “Where is the unique value of a human?” said the Harvard University education policy and management MEd graduate.

In a forthcoming book*, AI and the Future of Education: Teaching in the age of artificial intelligence, Shah defines that unique value as the capacity for emotional connection with other humans.

A change in educational outputs

He argues that higher education output will need to veer away from term essays and towards real-time classroom discussions – seminars, case study presentations and role-playing, which AI cannot fake, and for which AI can legitimately help students prepare. The same applies to invigilated written examinations, where there is no computer access.

That will head off the tough task of assessing whether an essay has been part-written by large language model (LLM) chat-based AI, such as ChatGPT, Microsoft’s Bing and Google’s Bard. Tech systems are unreliable at detecting LLM-based writing and, with a new crop of students about to start courses, monitoring for AI plagiarism will get tougher for teachers, he told University World News.

“In the last six months the number of students using AI systems to cheat on their assignments has been pretty high,” said Shah. “Last year, it was easier. You had students who were halfway through the semester before they realised that ChatGPT was a thing and you’ve already seen some of their real writing. But this August, we have never seen these students’ writing before so you might be getting ChatGPT for the first essay and you have no context to say this is different….”

Open discussions with students

Shah, who also leads Pedagogy.Cloud, which combines education with new tech, and United 4 Social Change, which provides accessible interdisciplinary civics education to students of all ages and backgrounds, said lecturers and professors are finding it tough to talk to students about AI at all because students worry about getting into trouble for using LLM systems.

Far better to shift course structure and assessment to a system where ChatGPT and other LLMs are legitimate and useful sources of ideas rather than shortcuts and avenues for cheating, he said.

“Classrooms will have to be way more lively than they were before,” said Shah. Rather than drafting an essay, students may be asked to present a case study, being told to research the topic in any way, including via AI. The same applies to role-play preparation – pretending to be on a board of education, a policy maker, or in a business boardroom: “You're having to use evidence from what you've learned. You’re having to think critically ... so much more enjoyable for students [than writing essays].”

Moreover, by removing the AI-cheating risk latent in essays, professors and lecturers can then have an honest discussion with students about how they want and expect LLM AI to be used, and not used.

“We need to have these conversations actively with the students. One reason they don't talk to their professors is because this is only seen as a cheating tool. We need to have a conversation saying, ‘These are the legitimate uses in my classroom. You need to tell me if it’s used and here's what I expect you to do on your own. Right now, those conversations are not happening at the levels they need to happen.”

Rather than sweating in their room at 3am over an essay deadline and then misusing ChatGPT, a student might talk to ChatGPT about a book for the next day’s seminar, asking what 10 questions fellow students might pose and how these might be answered: “You have to ingrain that information and act on it,” said Shah.

Flexible AI policies

Shah said professors and lecturers should be left to devise AI policies themselves, rather than following university and college-wide rules.

“My worry is if it becomes institutional, there could be a loss of flexibility that different professors might need in the way they are structuring their assignments, in particular semesters or class settings, from [say] a seminar to an introductory course with 150 students,” he said. Having grassroots rules will enable “students and professors to hold each other accountable”, he added.

It would also enable academic staff to take account of the quickly evolving abilities of chat-based AI systems, which continue to become more powerful, knowledgeable and accurate.

That will lead to a reassessment of teaching techniques and pedagogical goals, which need to focus on how AI can help students develop rather than hinder their learning through cheating. During basic research, AI can offer students the opportunity to ask a follow-up question to an answer, through an online conversation: “That’s a whole new process [compared with] opening up links and doing the synthesis,” as with traditional search engines, he said.

Shah said that in education – where such thinking is itself the goal of learning, unlike in the professional world, where offloading it to AI might simply save time and money – such AI interactions may not always be positive, which, again, is why open conversations are needed about how students should use LLM AI.

His argument is that the goal of writing essays is not producing the papers themselves, but developing the thinking that goes into the writing.

“There's going to be a re-framing for students … explaining … why the process of learning is so important,” with professors giving credit for a first and second draft, and how improvements have been made, “to incentivise this whole process, step-by-step”, he suggested.

Should that happen, students might be better prepared for the new world of AI, where many jobs of today will disappear.

A return to liberal arts principles

Shah’s book addresses this issue, among others, and suggests that higher education may need to focus less on careers than it does today, perhaps returning to the liberal arts education principles of the past.

“When you look at the 1900s and the 1800s and public education’s purpose, there’s very little in there about making sure you’re prepared to go into the workforce. Its fundamental purpose was to prepare you for liberal society, so you can have conversations to interact with fellow members of that democratic society … and talk about the current events and think about things – not about what you're putting into the economy. A lot of that output might be taken over by AI systems.”

Shah’s book is designed to help higher education managers and other educational policymakers face up to the challenges of chat-based AI.

“A big part of the university experience is becoming an adult, becoming a human, and working out what that means for yourself and your identity, what relationships matter, what you care about in the world: that’s a huge part about what a university education is supposed to do,” he said.

And with AI likely to become more important as it continues to strengthen, this issue will not go away, stressed Shah.

“For college freshmen who enter in September – the careers open to them in four years will be so different from what they prepare for right now. If you spend the next four years of your undergrad course to get the highest grade to get this amazing job that you’re passionate to have after college, you're going to have wasted your four years,” said Shah.

Some university presidents might be revisiting the purpose of their institution as a result. “We have to look at all the school mottoes and all the mission statements,” he said.

* AI and the Future of Education: Teaching in the age of artificial intelligence (September 2023) by Priten Shah addresses topics such as: understanding AI and machine learning and learning about new developments, such as ChatGPT; discovering strategies for engaging students more fully using AI; automating administrative tasks, grading and feedback, and assessments; using AI in innovative ways to promote higher-order thinking skills; and examining ethical considerations of AI, including the achievement gap, privacy concerns, and bias.