
Universities must help counter the growing threat of AI extremism

AI researchers are misjudging the threat of AI extremism, a recent report has warned. There is an urgent need for governments, academia and the private sector to develop collective guidelines around open source AI models to prevent them from falling into the hands of extremists.

Further, “as a matter of critical security, governments, the private sector and academia need to agree on rules restricting not only the availability of results from biomedical models that have potential dual-use capabilities, but also the information available on the researchers who created these models and who could be blackmailed (with or without AI)”, advises the report.

The report is written by Stephane Baele, professor of international relations at UCLouvain in Belgium and honorary associate professor of security and political violence at the University of Exeter in the United Kingdom; and Lewys Brace, a senior lecturer and co-director of the Centre for Computational Social Science at Exeter.

AI Extremism: Technologies, tactics, actors was published last month by VOX-Pol, a global research network on online extremism, and was accompanied by an online discussion with the authors.

Another sign of growing concern over AI misuse came on 18 June with the publication of a UNESCO report, in partnership with the World Jewish Congress, warning that generative AI could distort the historical record of the Holocaust and fuel antisemitism, “unless decisive action is taken to integrate ethical principles”.

AI and the Holocaust: Rewriting history points out that generative AI can enable malicious actors to spread disinformation and hate narratives, and can inadvertently invent misleading information. UNESCO called on education systems to equip learners with digital literacy and critical thinking skills, and a sound understanding of this genocide. Similar challenges apply to the historical record more broadly.

Dual-use technology usually refers to technology that can be used for both civilian and military applications. On 12 June, the United States Department of Defense announced a Cyber Academic Engagement Office. Among other things it will establish requirements, policies and procedures for data collection for academic engagement programmes.

AI Extremism: Technologies, tactics, actors

Over the past decade, argues AI Extremism: Technologies, tactics, actors, two major phenomena have developed in the digital realm.

“On the one hand, extremism has grown massively on the internet, with sprawling online ecosystems hosting a wide range of radical subcultures and communities associated with both ‘stochastic terrorism’ and the ‘mainstreaming of extremism’,” write Baele and Brace.

Stochastic terrorism involves the use of mass media to provoke random acts of ideologically motivated violence, according to a Max Planck Institute project.

The AI report continues: “On the other hand, artificial intelligence has undergone exponential improvement: from ChatGPT to video deepfakes, from autonomous vehicles to face-recognition CCTV systems, an array of AI technologies has abruptly entered our daily lives.”

AI extremism is “the toxic encounter of these two evolutions – each worrying in its own right” – and AI is already being deployed in a variety of ways to bolster extremist agendas.

AI models and extremism

Baele and Brace developed typologies and concepts to organise their understanding of AI extremism. Their analysis focused on AI models, which are, essentially, the outputs of an algorithm that has been applied to a dataset. AI models involve machine-learning algorithms that enable computers to ‘learn’ a task, the report explains.
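To make the distinction between algorithm, dataset and model concrete, the following minimal sketch (in Python, using the scikit-learn library, with a tiny invented dataset purely for illustration) applies a learning algorithm to data; the fitted object that results is the model, which can then be applied to new inputs.

    # Minimal illustration: an algorithm applied to a dataset yields a model.
    from sklearn.linear_model import LogisticRegression

    # A tiny, invented dataset: each row is an example, each label its category.
    X_train = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.7], [0.9, 0.1]]
    y_train = [1, 0, 1, 0]

    # The learning algorithm (here logistic regression) is applied to the dataset;
    # the fitted object returned by fit() is the model.
    algorithm = LogisticRegression()
    model = algorithm.fit(X_train, y_train)

    # The model can now recognise patterns in new, unseen inputs.
    print(model.predict([[0.15, 0.85]]))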

The authors explored three types of models: content generation models, called generative AI, whose main aim is to produce content; decision-making models, which take strategic decisions autonomously in complex environments; and pattern-recognition models that recognise new instances of items on the basis of patterns identified in training datasets.

“All three types of models have benefited from massive investment by major tech companies, such as Microsoft or Google, and AI spinoffs financially backed by wealthy investors, such as OpenAI,” the authors write. Private investment in AI totalled about US$8 billion worldwide in 2013, rising to around US$60 billion in 2019 and more than double that two years later.

“Turbocharged by such a hefty influx of money, AI models have become more and more powerful, trained on increasingly large datasets and resting on ever vaster computing architectures,” the report states.

In terms of investment, the US currently dwarfs all other states, followed by China. University World News asked Cathrine Lagerberg – co-founder and partner of Crown Defenze AS, and a technical and strategic risk and security expert in Norway – whether she is concerned that China might leap ahead of America in AI technology in future.

“China is seeking to be frontrunning within all emerging technologies, and to be self-reliant and independent from the United States and the West in any advanced technology. Besides all AI’s tremendously positive possibilities, we have only relatively recently witnessed how much damage AI can cause through the use of AI tailored disinformation, AI generated pictures, AI deepfake videos and similar,” said Lagerberg.

“With the investments that China is making in emerging and advanced technologies, and the efforts they put into both domestic research and research and academic collaborations abroad, it is obvious that China also seeks to take the lead within AI – as soon as possible.

“But again, AI depends on enormous amounts of computing power, which depends on high end microchips. Since the US is currently denying China through export control, this can affect Chinese AI innovation, strategy and ambitions,” she explained.

The capacity for harm

All three types of AI models have achieved very high levels of performance and unlocked new discoveries and opportunities. “While the hype often inflates its actual capabilities, there is no doubt that a major technology with enormous potential is beginning to be deployed.

“Yet these three sorts of models have also, simultaneously, triggered grave concerns about the ‘dark side’ of AI, and have already created serious problems. This Janus-faced nature of the technology is now acknowledged at the highest level,” write Baele and Brace.

This is evidenced by the Bletchley Declaration, signed by 27 states plus the European Union at a summit held in the United Kingdom in November 2023. Also, the EU AI Act, which creates a common regulatory and legal framework for AI in Europe, was approved by the EU Council on 21 May 2024.

Earlier this year the World Economic Forum included “adverse outcomes of AI technology” among the 10 most severe global risks of the next 10 years.

All nations should be able to leverage the opportunities of technology, and AI is here to stay, Lagerberg told University World News. “But as with all dual-use and emerging technologies there is a disruptive and destructive side to it if not applied correctly.

“People increasingly experience a world surrounded by distrust and it is more and more difficult to distinguish what is true or not. Uncertainty is the root of most people’s anxiety, depression and anger – even kids are taught not to trust what they read and see,” said Lagerberg.

“Mis- and disinformation is nothing new, but the way that AI technology enables disruptive patterns should have most alarms ringing. Since the introduction of smartphones and social media, depression and loneliness have risen.

“We can just imagine what consequences deepfakes and living in a world of constant uncertainty will cause to people’s mentality and sanity a few years from now.

“AI is by nature not a disruptive tool, it is about the way it is applied and which platform it is leveraged on. If you can control AI, you can highly likely control the output from the AI based bots and search engines,” she explained.

AI poses challenges for universities

How much of a threat is the dark side of AI to universities, and how can they prepare their researchers for the possible misuse of their research? University World News asked Professor Gunnar Bovim, former rector at the Norwegian University of Science and Technology, a board member of the Norwegian Defence Research Institute and chair of a governmental working group on research security.

“AI possesses many opportunities for universities. It changes the content and processes of education, research and innovation, accelerates discoveries, gives new partnerships, improves accuracy and opens new avenues in fields like medicine, climate science and engineering.

“AI also poses challenges to universities and other research institutions, due to the pace and widespread availability of AI, ethical violations, data quality, data privacy issues, transparency in algorithms, and fake news, deepfakes or autonomous weapons,” he said.

Bovim added that to prepare researchers, universities must establish guidelines for responsible AI research. They ought to foster a culture of ethical awareness and integrate ethical training and AI ethics courses into curricula.

“The research council can play an important role in this through implementing thorough review processes for projects using or developing AI to ensure trustfulness and compliance with ethical standards.

“Providing researchers with insight, resources and workshops on potential misuse scenarios enhances their awareness and preparedness.

“By fostering an environment of ethical responsibility and vigilance, universities can mitigate the risks associated with the dark side of AI and safeguard their research integrity,” he said.

Some conclusions

Like every new technological breakthrough in the past, “AI also unlocks new paths to harm”, write Baele and Brace. In conclusion, the report offers four suggestions and one question.

First, social media platforms should intensify the development of synthetic content-detection tools and embed them in posting architecture.
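As an illustration only, and not a description of any real platform's system, the sketch below shows what embedding detection in the posting architecture could mean in practice: a posting pipeline runs uploaded media through a synthetic-content classifier before publication and labels suspected AI-generated material. The classifier, threshold and function names are hypothetical.

    # Hypothetical sketch: a synthetic-content check inside a posting pipeline.
    SYNTHETIC_THRESHOLD = 0.8  # assumed cut-off for labelling content

    def synthetic_score(media_bytes: bytes) -> float:
        # Placeholder: a real system would run a trained detection model here.
        # A fixed dummy score is returned purely so the sketch runs end to end.
        return 0.9

    def handle_post(media_bytes: bytes, caption: str) -> dict:
        # Run the detector before the post is published.
        score = synthetic_score(media_bytes)
        post = {"caption": caption, "labels": []}
        if score >= SYNTHETIC_THRESHOLD:
            # Label (or route for human review) rather than publish silently.
            post["labels"].append("likely AI-generated")
        return post

    print(handle_post(b"...", "example caption"))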

Second, governments ought to encourage major AI services to develop more robust safeguards. While safeguards can be breached, this would become more difficult and would make ‘one click’ AI extremism harder to achieve.

“More broadly, states and international organisations with teeth, such as the European Union, should accelerate the creation of an international regime for AI global governance,” write Baele and Brace.

Third, governments, academia and the private sector ought to think hard about open source AI information – models, training datasets etcetera. The benefits versus risks of public access need to be thoroughly considered.

“This is a systemic issue that cannot be left solely to the goodwill and vision of individual project leads: because AI researchers plainly appear to misjudge and dismiss the threat of AI extremism, collective guidelines need to be agreed and imposed to prevent dual-use models from falling into the public domain with all their detailed information,” write Baele and Brace.

The fourth suggestion is that academia and other AI stakeholders need to agree on rules restricting availability of results from biomedical models with potential dual-use capabilities, as well as the identities of the researchers involved.

In 2022 the Nature Machine Intelligence journal called for universities and research centres to “restrict access to data and models, while allowing researchers to submit a request for access”.

The report argues that even though this type of knowledge will eventually spread to states that will make it available to terrorist organisations, “a common modus operandi remains urgently necessary to delay the problem – as in nuclear research”.

Finally, the political question: “In a new age of mounting geopolitical tensions, where malicious uses of AI place liberal democracies at an asymmetrical disadvantage to authoritarian states eager to destabilise them,” write Baele and Brace, “the information war is lost if no forward-looking strategy is devised that brings the battle into rival territories.”

AI “should be innovatively harnessed by security and intelligence services in offensive ways to counter domestic extremist spaces, individuals, and dynamics, and to wage covert large-scale information operations against their foreign patrons. Not doing so will transform liberal democracies into victims of repeated AI bullying”, the report concludes.