UNITED STATES

University librarians are divided over AI use and ethics – Survey

A survey of more than 125 university librarians across the United States has revealed wildly differing opinions on the use and morality of artificial intelligence (AI) tools such as ChatGPT in higher education. Only 13% of surveyed academic libraries offer AI products to researchers, and 24% are considering doing so.

Encouragingly, half of surveyed librarians do not believe that students who use AI products are cheating, against 8% who believe that they are. And some 86% of librarians said AI use by professors for research is ethical or ‘somewhat’ ethical, while 12% believe it is unethical.

“Major concerns regarding AI in higher education include cheating, eliminating or reducing critical thinking and originality, and replacing human jobs,” said Helper Systems, a software development company that conducted the survey. The survey report, “AI in Higher Education: The librarians’ perspectives”, was published on Monday 13 March.

The following day, OpenAI released GPT-4, the latest version of its ChatGPT bot that has taken the world by storm. OpenAI said the new model can respond to images, and can process up to 25,000 words, among other advances. GPT-4 is currently only available to ChatGPT Plus paying subscribers.

Helper Systems, a company of library and publishing industry experts that develops software to improve how information is discovered and used, wanted to find out the views of academic librarians on the use of AI in universities. Based in the town of Helper in Utah, it has employees in California, Ukraine and Serbia.

Librarians, explained Helper Systems founder and CEO Christopher Warnock, play a pivotal role in higher education and student success, and are key to the identification and adoption of innovative new technologies.

“There is no doubt that AI in higher education is here to stay,” Warnock told University World News. “Therefore, librarians, professors, publishers, vendors and others need to work together to ensure students gain the benefits of AI products while using them ethically and responsibly, in a manner that does not impede critical thinking or originality.”

Survey and background

Helper Systems developed a survey using SurveyMonkey, which was delivered to libraries via email. The survey took place in February.

More than 125 academic librarians across the US responded, from institutions ranging from New York Tech, Texas Tech University and Kent State University Libraries to the universities of Alaska, California, Michigan and New Mexico. This was a fraction of the librarians sent the survey, and the company hopes to expand the reach in future surveys.

The views of the librarians were diverse, ranging from highly negative to enthusiastic to accepting, with the latter exemplified by this: “Once the genie is out of the bottle, you can’t put it back in, so you just have to find a way to grapple with the new reality.” And along similar lines: “AI will gradually become ubiquitous – it’s too powerful a tool to ignore.”

For one librarian, ‘not impressed’ is an understatement: “Students learn how to punch a calculator for math. Now they learn how to run ChatGPT to write a paper. They use RefWorks to create citations. We are educating intelligent youngsters towards dummies.”

More positively, another librarian said AI products “are a potential game-changer in the way that the introduction of Google changed the research process. Too many libraries missed the boat in using Google, opposing it rather than endorsing and utilising it. I do not get the impression that is occurring with the new AI resources.”

And, presciently from another academic librarian: “I think it’s the next paradigm shift.” Microsoft co-founder Bill Gates also thinks so: in February he described AI developments, such as ChatGPT, as “every bit as important as the PC, as the internet”.

Helper Systems points out that AI technologies have been around for decades. The earliest ‘successful’ AI programme, which played the game of draughts (checkers), was written in 1951 by Christopher Strachey at the University of Manchester in the United Kingdom. In the 1970s, AI innovations in the US entered the education and health markets.

Growth in the AI market in the new millennium has been exponential, said the report: “From 2008 to 2017, venture capital firms around the globe invested more than US$1 trillion into AI-based education.”

The release last November of ChatGPT created global waves and drew public attention to generative AI and its growing array of extraordinary tools. In higher education, said the report: “Educators are challenged with navigating this new landscape and determining the ethics and long-term benefits and repercussions of students who use AI programmes.”

Some key findings

The survey provides narrative comments from librarians reflecting their views. It also offers some nuts-and-bolts statistics, with implications for academics, students and universities.

Surprisingly, nearly two thirds of academic libraries do not offer any AI products to researchers. As mentioned, a small proportion – 13% – currently offer AI tools and another 24% are considering doing so.

Among the reasons could be affordability. The report quotes one participant as saying: “There would have to be an extraordinarily compelling case for an AI for us to consider [allocating budget] when we are struggling to maintain even fundamental academic databases.”

According to the report: “None of the librarians who participated in the survey are allocating budget dollars to AI and just 9% are considering earmarking funds.”

Looking into the ethics of AI use, the report said that 8% of librarians stated a definitive ‘yes’ when asked if they believe it is cheating if students use AI products for research, while 42% felt this to be somewhat true.

The ‘somewhat’ answer appeared to be related to context – some uses would be ethically problematic, such as a student passing off ChatGPT’s work as their own, while a student using AI to generate results that they then interpret themselves would not be an issue.

Further, one participant said: “Research methodologies and purposes vary widely depending on discipline. What may be appropriate and even productive in biological research may not be in the social sciences, for example.”

Half of the survey participants said it is not cheating if students use AI products and services. “Some of us are old enough to remember that spellchecker was going to replace editors and Google was going to replace libraries. AI is now going to save time for researchers who are already using spellchecker and Google technology for their work,” said one librarian.

Interestingly, said the report: “While 8% of surveyed librarians believe it is ‘cheating’ if students use AI products, 12% said it is unethical for professors to use them for research and 14% indicated it is unethical for professionals to use them on the job.” Equal proportions – 43% each – of participants said it was ethical or somewhat ethical for professors to use AI.

Said one librarian: “As long as a researcher is transparent about using AI at a particular stage of a research process and there is no harm to research subjects or bias, it is ethical.”

Four in five of the librarians surveyed did not believe it should be mandatory for students to use AI, although many felt it was important for students and academics to learn to use technologies that would be prominent in future. Some described researchers as using AI to perform tedious tasks, freeing up time for detailed research.

AI, said one librarian, has “the ability to replace manual, librarian-led literature searching and systematic reviews. I see AI as a faster, more efficient solution to what is now a very hands-on, time-consuming process. I see real promise here”.