Create critical awareness of AI apartheid – Experts

There are growing concerns that artificial intelligence (AI) is amplifying existing inequalities and divisions in society, deepening polarisation.

This sobering warning was issued by Dr Danielle Becker, a visual studies lecturer at Red & Yellow Creative School of Business, during a presentation at the third annual academic summit of the Honoris United Universities group, held in Cape Town recently.

“As a form of discourse, digital content is subject to existing power relations and ideologies in society. When that content is used or created by AI applications, it can result in the same biases present in the data on which their algorithms were trained in the first place,” she said.

This explains why “techno-racism” has become a prominent part of the discussion, she added, citing an article on the CNN website, ‘People of colour have a new enemy’.

In it, journalist Faith Karimi traces the origins of the term, “techno-racism”, to its use by a member of a civilian police commission in the United States city of Detroit in 2019 to describe “glitchy facial recognition systems that confused black faces”.

One of the many examples of this phenomenon to make headlines in recent years is the arrest of Robert Williams outside his suburban home in Detroit on the strength of a faulty facial recognition match. He spent 30 hours in jail before his name was cleared.

“I never thought I’d have to explain to my daughters why Daddy got arrested,” Williams subsequently wrote in a column for The Washington Post. “How does one explain to two little girls that a computer got it wrong, but the police listened to it, anyway?”

Machines are not neutral

In an article for TIME, Joy Buolamwini, a computer scientist with the MIT Media Lab and founder of the Algorithmic Justice League, speaks of “the coded gaze, a bias in AI that can lead to discriminatory or exclusionary practices”.

She writes: “We often assume machines are neutral, but they aren’t,” adding that her research uncovered large gender and racial bias in AI systems sold by tech giants.

“Given the task of guessing the gender of a face, all companies performed substantially better on male faces than female faces. The companies I evaluated had error rates of no more than 1% for lighter-skinned men. For darker-skinned women, the errors soared to 35%.

“AI systems from leading companies failed to correctly classify the faces of Oprah Winfrey, Michelle Obama and Serena Williams. When technology denigrates even these iconic women, it is time to re-examine how these systems are built and who they truly serve.”

Buolamwini wrote this in 2019, so one might expect facial recognition to have improved since then. Yet in her 2021 piece, Karimi referred to a study by the US National Institute of Standards and Technology of over 100 facial recognition algorithms, which found that “they falsely identified African American and Asian faces 10 to 100 times more than Caucasian faces”.

ChatGPT also biased

Moreover, similar concerns have been raised about ChatGPT since the November 2022 release of the AI application that uses natural language processing to create humanlike conversational dialogue.

In a preprint paper submitted to Machine Learning with Applications in April 2023, Professor Emilio Ferrara of the University of Southern California pointed out that “large language models, which are commonly trained from vast amounts of text data present on the internet, inevitably absorb the biases present in such data sources”.

Buolamwini makes no bones about the negative consequences of this phenomenon: “The under-sampling of women and people of colour in the data that shapes AI has led to the creation of technology that is optimised for a small portion of the world.”

Not for Africa

In an online contribution, Emsie Erastus, a Digital Rights and Inclusion Media Fellow of the Paradigm Initiative, asks: “What happens when machines are not given enough data to learn and accurately represent those on the African continent?”

She goes on to list a range of negative consequences – from recruitment processes and healthcare to assessing people’s creditworthiness. Other commentators have pointed to biases in risk assessment tools, such as home loan algorithms.

Karimi writes: “When technology reflects biases in the real world, it leads to discrimination and unequal treatment in all areas of life. That includes employment, home ownership and criminal justice, among others.”

Erastus concurs: “Questions of representation are central in [the] data and algorithm ethics discourse, and rightfully so, because machines mirror society’s behaviour. With such arguments at the fore, it becomes clear that previously marginalised communities are more inclined to experience algorithmic biases.

“Africans continue to face discrimination offline in various sectors and should machines continue to consume and process biased data, eliminating existing inequalities could become an even harder challenge. Therefore, the time to start dissecting data and algorithm biases in Africa is now.”

Erastus adds: “Those of us on the African continent are not just users. Our data is providing big tech companies with insights into our behavioural patterns. If such data is not inclusive and just, current algorithmic structures could reinforce AI apartheid systems.”

In an essay published on the blogging platform Medium, data scientist Nathan Begbie asks: “Given the potential harm machine learning can cause, how can South African organisations mitigate against problematic algorithmic bias in their data and models?”

He goes on to list several steps, including auditing both the data and the processes used to build models from it, and filling gaps in the available data.
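The kind of audit Begbie describes can be illustrated with a simple per-group error-rate check. The sketch below is a hypothetical illustration, not his method: the records, group labels and 5% disparity threshold are all invented for the example, with the error rates set to mirror the figures Buolamwini reported above.

```python
# Minimal sketch of a subgroup error-rate audit (illustrative only:
# the records, group labels and 5% threshold are hypothetical).

def audit_error_rates(records, threshold=0.05):
    """Compute the error rate per demographic group and flag disparities.

    Each record is a dict with 'group', 'predicted' and 'actual' keys.
    """
    totals, errors = {}, {}
    for r in records:
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        if r["predicted"] != r["actual"]:
            errors[g] = errors.get(g, 0) + 1
    rates = {g: errors.get(g, 0) / totals[g] for g in totals}
    flagged = [g for g, rate in rates.items() if rate > threshold]
    return rates, flagged

# Hypothetical audit data echoing the disparities described earlier:
# 1% errors for one group, 35% for another.
records = (
    [{"group": "lighter-skinned men", "predicted": "m", "actual": "m"}] * 99
    + [{"group": "lighter-skinned men", "predicted": "f", "actual": "m"}] * 1
    + [{"group": "darker-skinned women", "predicted": "f", "actual": "f"}] * 65
    + [{"group": "darker-skinned women", "predicted": "m", "actual": "f"}] * 35
)
rates, flagged = audit_error_rates(records)
```

An audit like this only surfaces the disparity; deciding what counts as an acceptable gap, and fixing the underlying data, remain human judgements.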

Buolamwini argues the case for having broader representation in the design, development, deployment and governance of AI.

And Ferrara argues that bias can be mitigated through an approach called “human-in-the-loop”: people should help curate the training data and fine-tune large language models, provide evaluation and feedback, and carry out real-time moderation. Finally, users – ie humans – can be given options to customise a model’s behaviour, adjusting its output to their preferences or requirements.
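The moderation element of Ferrara’s “human-in-the-loop” idea can be sketched as a pipeline that releases high-confidence outputs and routes uncertain ones to a human reviewer. Everything here – the confidence scores, the threshold, the review queue – is a hypothetical illustration, not Ferrara’s implementation.

```python
# Minimal human-in-the-loop moderation sketch (illustrative; the
# confidence scores, threshold and queue are hypothetical).

review_queue = []  # outputs held back for a human moderator to assess

def moderate(output, confidence, threshold=0.8):
    """Release confident outputs; queue uncertain ones for human review."""
    if confidence >= threshold:
        return output  # released to the user directly
    review_queue.append(output)  # a human decides later
    return None

released = moderate("benign answer", confidence=0.95)
held = moderate("possibly biased answer", confidence=0.40)
```

In practice the human feedback gathered this way would also flow back into curating data and fine-tuning the model, closing the loop Ferrara describes.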

What is HE’s responsibility?

In her presentation, Becker argued that researchers and lecturers in higher education have a special responsibility to raise awareness of digital representation, including techno-racism.

“Critical digital literacy is vital, which goes beyond democratising access. Do users have the critical resources to know what to do with that content?

“It is up to us to guide our students, to empower them to understand what they are looking at. So that they ask themselves, What do I do with all this information? How do I know what information to look at, what information to believe in the face of things like fake news and AI biases?”

According to Karimi, Christiaan van Veen, former director of the Digital Welfare State and Human Rights Project (established at New York University School of Law to research digitalisation’s impact on the human rights of marginalised groups), said it is time to be more sceptical about Silicon Valley and the supposed benefits of technology.

He said: “Like with other expressions of racism, the fight against techno-racism will need to be multipronged and will likely never end.”