UNITED STATES

Has new AI catapulted us past the singularity into unpredictability?

Listening to people at the ASU+GSV summit in the United States, Ethan Mollick, an associate professor at the Wharton School of the University of Pennsylvania, wondered if new AI breakthroughs had already catapulted humanity into a post-singularity era. “There’s a fundamental shift that’s already happened,” he said.

“Everyone worries about an AI singularity, right? This idea that one day AI gets smarter than us, and then everything’s unpredictable past that point. When I hear people talk here, it sounds like the singularity has already happened.

“Forget ultra-intelligent AI – there’s not been a single person here who hasn’t said that everything’s different now. People are talking about how to reconstruct the education system from the ground up. Not that it’s broken, but the way we thought it was has just ended.”

Mollick was a speaker on a panel titled “The Future of Integrity in the Brave New World of AI/GPT”. Other speakers were Professor Emma Brunskill of Stanford University, Turnitin CEO Chris Caren, GPTZero founder Edward Tian, and Dr Paul J LeBlanc, president of Southern New Hampshire University. The moderator was Professor Mitchell Stevens, also of Stanford.

Launched in 2010 as a collaboration between Arizona State University (ASU) and Global Silicon Valley (GSV), the annual ASU+GSV summit says it “connects leading minds focused on transforming society and business around learning and work”. This year it was held in San Diego from 17 to 19 April 2023.

“The nature of jobs just changed fundamentally. The nature of how we educate, the nature of how teachers and students relate to work, all that has just changed too. Even if there’s no advancement in AI after today, that’s already happened,” said Mollick, an economic sociologist who studies and teaches innovation and entrepreneurship at Wharton.

“We are seeing, in controlled studies, improvements in performance for people doing job tasks with AI of between 20% and 80%. We’ve never seen numbers like that. The steam engine was 25%.”

Mollick said his life’s work, outside his writings on sociology, has been trying to build democratised education through simulations and games. He has written books on the topic.

“I have worked for years to build these simulations, which I think are awesome. If I type the right two paragraphs into GPT-4, it will run a simulation for me that covers 80% of what I’ve spent two years building. I mean, it’s fundamentally different,” he told the summit.

“The way people learn is not changing. People will continue to seek meaning and will go to university. But almost everything else is changing. There are huge opportunities for us as educators, as ed tech creators, to help people create meaning in this world.”

Education will be fine, said Mollick. There are already numerous techniques that are better than lecture-based teaching, such as flipped classrooms, and AI tutors are “amazing” and will become more so.

“We are going to be doing much less lecturing and didactic instruction in class, much more working with AI and having our students work with AI. Integrity is going to be a shifting thing because of how people work with AI. In the end, we’re not going to be able to detect this work. It’s just going to be woven into everything that people do,” he said.

Shifts that are imperative

Edward Tian is the 22-year-old founder of GPTZero, which launched on 1 January 2023 as an AI detection app. “Since then we’ve had over four million users, partnered with 40 edtech organisations, and started and finished beta tests with universities. It’s been crazy,” he said.

Tian said that two years earlier he had dropped out of education to work as an investigative journalist at the BBC, reporting on authoritarian regimes. “We were looking at election interference and already in 2020 there were bots on Facebook and Twitter spreading disinformation, with AI-generated faces and profiles, and I was writing code to detect those.

“Now imagine those bots can chat. I was like, holy shit, that’s really scary. So I went back to school and spent the last two years doing AI detection research.” He is currently a computer science and journalism student at Princeton University.

Tian stressed two key areas of concern.

First, after the release of ChatGPT, it made sense to put immediate safeguards in place. “But today we really need to shift from detection on the individual level to detection on the policy and institutional level. We just launched a new product to do that.”

For instance, GPTZero beta tests in universities have found differences between the academic sciences and other areas of teaching and learning. “In some cases, when AI content in a course is over 20%, it’s really not the student’s fault at that point – the curriculum needs an update.”

Second, Tian told the summit: “We need a seismic shift right away, from teacher versus student to teacher for student. We really need to get to the core of what all of this is about. It’s not about catching the student.

“It’s not even about detecting AI. It’s about preserving what’s human in terms of the qualities of learning that a computer can never co-opt, whether it’s beautiful prose, or academic thinking, or critical thinking skills – all of the skills we can never have the computer do.”

AI-authored content on the rise

Next up was Chris Caren, CEO of Turnitin, who, moderator Mitchell Stevens pointed out, has been thinking about plagiarism for longer than Tian has been alive.

Turnitin, a company that provides AI writing detection products, was founded 25 years ago and has built a global presence in the higher education and high school markets. In the US, 80% of college students and 50% of high school students use its service.

“Student behaviour is changing incredibly fast,” said Caren. “Misconduct has evolved from copying off the internet, to using other students’ work, to – most recently, prior to AI – contract cheating or ghostwriting, where you find someone online and pay them to write the paper for you. So it’s original, but it’s not yours.”

Turnitin is using AI to help fingerprint writing style and tell when it has changed. Two weeks ago it launched a new app that highlights the percentage of a paper that has been written by AI, “as a data point for a faculty member to understand whether there’s a problem or whether it’s appropriate use, depending on their guidelines”.

At peak times, Turnitin processes 50 papers a second from students. “Right now, we’re seeing about 10% of papers with at least 20% AI authorship in the work, and about 5% that are basically completely written by AI. I think most educators would say 20% may be fine, 100% is a problem.”

AI-authored content had tripled compared with four weeks previously, said Caren, and it will keep growing. “Students are very innovative with new technologies.” Turnitin conducted a survey in US higher education. It found that “30% of students use AI daily in their work. And 70% of educators have never even tried it. They have no idea what ChatGPT can do.”

What skills should we stop teaching?

Emma Brunskill is a tenured associate professor of computer science at Stanford University. In her lab, she and colleagues work to create AI systems that learn from samples to robustly make good decisions, for applications in healthcare and education.

She spends time wondering why people might not want to use AI in learning. “Well, because productive struggle is part of what it means to learn, because you don’t actually care whether or not someone gets an A or an A minus. It’s trying to formulate ideas constructively, think about them, make arguments and points, that gives people the skills for later things that they want to accomplish.”

Rather than focusing on punitive or evaluation measures for new technology tools, said Brunskill: “We need to make sure that students realise that when they use these, they are losing an opportunity to gain the skills that they will need for the future.

“That brings up a really important question for education going forward, which is, what are the skills we’re going to stop teaching and what are the skills we need to teach now in order to allow people to make use of these technologies in a productive way?”

She used the microwave as an analogy. Most people use microwaves, but only a small number know how microwaves work or how to service them. An even smaller number can invent the next microwave.

“But we need people who can create knowledge, who can create new forms of technology. If we want to make sure that humanity can continue to advance the frontiers, we need to make sure we teach the next generation skills to innovate.”

Another big question is what to teach, now that technologies can do many tasks people are trained to do. “Do we forget about two-digit addition and start at calculus? Do we skip calculus? Do we skip coding? What prerequisite structure or curriculum do we need to include so that we’re not just users of microwaves, but can make the next technology?”

Rethink the HE system

Dr Paul LeBlanc has been president of Southern New Hampshire University for 19 years, during which SNHU has grown from 2,800 students to more than 180,000, making it the largest non-profit provider of online higher education in America.

It is also the first to offer a full competency-based degree programme, untethered to the credit hour or classes, approved by a regional accreditor and the US Department of Education.

LeBlanc is pondering how to rethink the whole system of higher education, given the proliferation of AI. “I think about that as three legs of a stool.”

One is how to rethink teaching and learning, and more fundamental questions about knowledge. A second is how to think about student support: LeBlanc believes learning happens effectively only in relationships, which is why universities should put more emphasis on a coaching model. The third leg is operations, where universities are using AI in, for instance, marketing and human resources.

“These three point solutions don’t unlock the full potential of technology, but system redesign does. So we’re putting together a team that will be asked: how do we reinvent a version of SNHU that doesn’t look at all like SNHU – in fact, could even put SNHU out of business? Because I think this will fundamentally allow us to do different things.”

LeBlanc has also spent a lot of time thinking about what now counts as knowledge. So much of higher education is based on epistemology, the theory of knowledge. “I love George Siemens’ thinking about this, which suggests we’re going to have to go from epistemology to ontology and questions of the nature of being – how we are and how we are in the world.

“Interestingly, the departments that do that most in our institutions are religion and philosophy. We’re now grappling with these really critical questions. When knowledge is no longer a scarcity, its value is much less.”

Massive job changes coming

Brunskill said AI would have a transformative impact on job opportunities. “We shouldn’t fool ourselves into thinking that’s necessarily going to be positive for everyone.” She believes the US has a dismal record of retraining people after their jobs are outsourced. Faced with job layoffs, universities need ways to help people thrive in the new economy.

Caren said Turnitin has a few hundred engineers. “I think in 18 months, we’ll need 20% of that number of people and we’ll be able to start hiring out of high school versus four-year colleges. Probably the same for a lot of sales and marketing functions.”

LeBlanc, too, believes “huge swaths of jobs are going to go away”, especially low- and middle-level knowledge jobs. And universities are in the knowledge industry.

“The opportunity is that many more people are needed in other parts of the economy.

“We should be flooding our school system with talented teachers, and our broken mental health care system with qualified people. We have a broken criminal justice system, we have a broken health care system. We need social workers.

“The problem is that those jobs are not well paid or valued. There is a need to reconfigure what meaningful work looks like, and how people-oriented jobs are paid.

“Because to be an accountant, or a programmer – those jobs are going away.”

LeBlanc said universities must prepare for change. Today, 52% of Google searches for higher education are for non-degree programmes. “There’s a huge shift in the market towards micro credentials that are laser focused on skills and jobs – and jobs that won’t be displaced.”

The head of a cyber security team at Oracle told LeBlanc they had used ChatGPT to do coding for a project that would previously have meant hiring contract programmers for hundreds of hours. “They did it in an hour.”

The work was 90% good, and the Oracle team started fixing it. “Then they thought, ‘Wait a minute, why are we fixing it?’ And they prompted ChatGPT to go in and do the fixes.”

LeBlanc added: “So on the high end, we are going to need to raise the bar on the kinds of cognitive skills that are necessary to lead work and do new creative work and produce new work. Middle- and low-level knowledge work is going away. The human jobs won’t.

“My optimistic sense is that finally, maybe, we will get focused on the human jobs that make our lives better, that take better care of society and bring meaning into our work every day.”

Stevens agreed. The sheer rapidity of developments such as AI, and the enthusiasm they generate, give a sense of urgency to thinking about new ways to fix the broken parts of societies. “How could we take human capacity together with the technological capacity that is now under development and come up with new ways of addressing very old problems?”

The academic workplace

While the need for learning will continue, the higher education workplace is transforming.

There are endeavours to make an AI scientist, said Brunskill. “It’s realistic to think that many of the things I provide to my grad students, or a lot of the tasks we often do, could be generated by GPT-4 or its successors over the next few years.”

AI is good at enhancing skills in areas that are difficult to train for, she said. For instance, it can develop outstanding simulation exercises that train people to respond well in high-stakes situations, such as giving negative feedback to an employee or working in diverse groups.

Mollick talked about AI generating assignments, simulations and tests, which for academics means “you have time to do some of the stuff you couldn’t do before”. Educators should be anxious about some aspects of technological change, but should also explore the potential. “There’s a whole bunch of ways that you can teach that you weren’t able to do before and that are really, really exciting.”

He warned against firm pronouncements about what AI does or does not do well. Using AI is going to get easier, and AI will be able to do much more than it can today.

“The best way to figure out how to deal with the change is to try and automate as much of your own job as you can and see what happens. Hopefully you’re left with the interesting bits and the bad bits go away. A lot of the early research is showing that.

“But that’s the only thing I can tell you to do at this point. There isn’t an authority, there is no guidebook that OpenAI has put out. This has to be a thing that you individually play with.”