A step towards building greater trust in AI in education
Today, users – parents, students and instructors – have no way to tell whether a tool is safe and effective, and many are wary of most AI-enabled tools.
To address the problem, Riiid, which specialises in AI-powered education solutions, and DXtera Institute, a non-profit membership organisation that uses technology to lower barriers in education delivery, have formed a cross-sector alliance of companies, non-profit organisations and education technology associations to work on an AI in Education benchmark initiative.
The initiative, launched in August, is focused on establishing benchmarks and standards in four critical categories – Safety (security, privacy), Accountability (defining stakeholder responsibilities), Fairness (equity, ethics and lack of bias), and Efficacy (quantified improved learning outcomes). In a word, SAFE educational AI.
DXtera, a trusted non-profit player, is managing the day-to-day work of the alliance. It will serve as the fiscal and contracting agent, hiring staff and experts. In the long run the alliance intends to become self-supporting through membership dues, sponsorships and philanthropic support. Riiid, which funded the foundation of the initiative, continues to play an active role in recruiting outside organisations as new members.
In the three months since the alliance was launched, it has grown from 20 members to more than 100 members, representing 15 countries. Organisations involved include Carnegie Learning, ETS, GSV Ventures, the German Alliance for Education, EduCloud Alliance and Digital Promise.
The alliance has also aligned itself with UNESCO’s Broadband Commission for Sustainable Development, whose goal is to connect everyone in the world to the internet.
The alliance expects to eventually hire paid experts to develop standards that could be tested and certified. It won’t be working in a vacuum, nor developing standards from scratch.
Underwriters Laboratories, the private certification company, is a member of the alliance and has independently developed a rubric that it uses for inspecting algorithms. UL, as it is known today, has participated in the safety analysis of many new technologies since it was founded in 1894.
Nearly every American product that uses electricity has the UL logo on it, which means that it has undergone rigorous testing to meet various standards.
The alliance intends to do something similar for AI education tools and platforms, eventually implementing a voluntary review process for such products that would give consumers confidence in the way that nutritional labels do on packaged food products today.
Stringent testing may also help determine whether products comply with existing data privacy laws, such as the General Data Protection Regulation (GDPR) in the European Union and California’s data privacy laws.
The alliance hopes that school districts and other organisations will then use alliance certification to guide their purchases of AI-enabled education technology.
The alliance isn’t focused on the United States market alone. It is engaged with people in Israel and Russia, with the European Edtech Alliance, which represents all the EU countries, and with Education Alliance Finland, among others. The German Alliance for Education brings to the table representatives from about 100 groups, ranging from education ministries and companies to universities and schools.
AI has the potential to transform education, relieving teachers of administrative burdens and personalising learning paths for students. But to realise that potential, we need recognised standards that everyone can trust. We’re calling on professionals from all levels of the education industry – educational delivery agents, users and governments – to get involved.
Jim Larimore is chief officer for equity in learning at Riiid Labs.