EGYPT

Study calls for charter for the ethical use of AI in HE

Universities must put in place artificial intelligence (AI) ethical charters that academic communities must adhere to when dealing with AI applications in various aspects of the educational process, including research, teaching and learning.

This was the main message that emerged from a study, ‘Artificial intelligence phobia and scientific research ethics’, published in the July issue of the International Journal of Research in Educational Sciences. Experts agree with this sentiment.

The author of the study, Professor Mehany Ghanaiem of the faculty of education at Mansoura University, Egypt, said there is fear of the “rapid progress of AI” and how it will affect research.

Ghanaiem said that, based on a 2021 UNESCO recommendation on the ethics of AI, universities must develop policies for students and faculty members that support the ethical use of artificial intelligence – for example, “an ethical charter… that everyone adheres to”.

To put the ethical charter into practice, Ghanaiem said that “Egyptian universities, in cooperation with the Supreme Council of Universities, must adopt a charter of honour for academic integrity, and [have] its provisions included in the articles of the Egyptian universities law.

“The Egyptian Ministry of Higher Education must consider that the ‘Code of Honour of Academic Integrity in Universities’ is the criterion through which those who transgress it and deviate from its provisions are held accountable,” Ghanaiem said.

Charter for ethical AI

Professor Hamed Ead, who is based in the faculty of science at Cairo University and is the former cultural counsellor at the Egyptian Embassy in Morocco, told University World News: “There must be an ethical charter for the use of AI technologies in any university.”

According to him, a charter is necessary to ensure that the university community – faculty members, students and administrators – is aware of the ethical implications of using AI, and that the university can make informed decisions about its use and interact with it in a responsible and ethical manner, supported by appropriate training and education.

“In order to address new ethical concerns, it is also necessary to make sure that the charter is periodically reviewed and updated.

“In addition … to promote the development and application of AI in a way that benefits society as a whole, it is important to build effective regulatory frameworks, industry standards and public policies,” Ead noted.

Also, added Ead, a charter for the ethical use of AI can promote transparency and accountability in the development and use of AI so that it becomes understandable and interpretable.

“An ethical charter can also help protect privacy and data in addition to addressing issues of bias and discrimination and helping to encourage innovation,” Ead pointed out.

Professor Ahmed Attia, the head of faculty affairs in the faculty of medical technology at the University of Tripoli in Libya, told University World News that he concurred that an AI ethical charter was important.

The application of a charter could help detect and mitigate unfair biases based on race, gender, nationality and other factors. On privacy and security, ethically designed AI systems prioritise data security and provide data governance and management frameworks, Attia added.

“We can stop the AI phobia by ensuring that each student educates him- or herself about AI. By learning about the workings of AI – its limitations, and its potential applications – you can make more informed decisions about how to use and interact with AI technologies,” Attia noted.

The charter is not enough

Professor Sami Hammami, the former vice-president of the University of Sfax in Tunisia, told University World News: “I think it is imperative to develop a charter on the use of AI in higher education.

“[A] charter is an interesting idea but requires a commitment on the part of all stakeholders: the university, teachers, administration and students,” Hammami added, pointing out that applications such as ChatGPT can be abused.

“Several teachers are currently reflecting on the problems of plagiarism that can appear in the work of PhDs and certain scientific publications and the way in which one can detect artificial content … supposed to be [work] carried out by the student or the researcher,” Hammami pointed out.

“It seems that we are currently not yet equipped to do so,” he warned.

“Other teachers believe that we cannot stop the development [of AI] and [should instead] take advantage of it by directing its use towards the development of humanity and [by] creating safeguards to protect science from its excesses,” he said.

“Indeed, AI can improve the quality of our lessons, deepen our knowledge and provide new answers to questions that have remained unanswered. The contribution of AI can also be useful by sometimes offering new directions from a multidisciplinary approach,” Hammami pointed out.

However, he said, a charter may remain insufficient. What may be required is to introduce ethical values into basic teaching.

Social acceptance of guided AI

Professor Ahmed El-Gohary, the founding president of the Egypt-Japan University of Science and Technology, or E-JUST, in Alexandria, told University World News: “The recently applied AI in higher education initiated waves of turbulence.

“Many concerns are clouding the academic environment due to the emergence of ChatGPT and how these new inventions can affect the assessment of students and overestimate their capacities.

“Some academics are hesitant to support AI in the learning process in higher education, while others are comfortable with this application to help create smarter students, provided that there are clearly announced guidelines in place for AI applications,” El-Gohary added.

“I do believe that, at the end of the day, applying guided AI will be more beneficial for the students’ learning path. However … guidelines are mandatory,” he said.

He said the challenge is to determine whether this should be the responsibility of individual institutions, or whether it [the development of the guidelines] should come at national or regional level, which could be discussed at a dedicated conference.

“Any proposed guidelines for the application of AI, especially the big ethical component of it, should be shared and reviewed by society [at] large,” El-Gohary noted.

“The way to go is to widely explore the capacities of AI in higher education; to share this information; to discuss thoroughly what is permissible or acceptable or tolerable and what is not; to come up with a plan for strict implementation and to assert the penalties for ‘illicit’ usages,” El-Gohary suggested.

“Monitoring illegal applications [also] needs efficient software,” El-Gohary noted.