NEW ZEALAND

Independent watchdog key to monitor artificial intelligence

Nations that increasingly use artificial intelligence (AI) systems to assist in decision-making should act immediately to establish ‘an independent watchdog’ to monitor those systems for possible risks to the public, according to two senior academics in New Zealand.

John Zerilli and Colin Gavaghan have called on their government to establish an independent regulator to monitor “and address the risks associated with these digital technologies”.

“To protect us from the risks of advanced artificial intelligence, we need to act now,” say the two Otago University academics.

“The public should know what AI systems their government uses as well as how well they perform. Systems should be regularly evaluated and summary results made available to the public in a systematic format.”

New Zealand is part of a global network of countries known as the ‘Digital 9’ that use predictive algorithms in government decision-making. Alongside New Zealand, the group includes Britain, Canada, Estonia, Israel, Mexico, Portugal, South Korea and Uruguay.

In a 92-page report on government adoption of AI in New Zealand, Zerilli and Gavaghan say that government uses of AI range from the optimal scheduling of public hospital beds and the efficient processing of simple insurance claims to assessing whether an offender should be released from prison based on their likelihood of reoffending.

But while AI can “enhance the accuracy, efficiency and fairness of day-to-day decision-making”, they say concerns have also been raised regarding transparency, meaningful human control, data protection and bias.

Transparency issues

There are three important issues regarding transparency.

One relates to the ‘inspectability’ of algorithms so that people can understand how the system operates. “Unlike some countries that use commercial AI products, New Zealand has tended to build government AI tools in-house. This means that we know how the tools work,” the researchers say.

‘Intelligibility’ is another issue. Knowing how an AI system works does not guarantee the decisions it reaches will be understood by the people affected. The best performing AI systems are often extremely complex.

To make explanations intelligible, additional technology is required. A decision-making system can be supplemented with an “explanation system”.

These supplements are additional algorithms “bolted on” to the main algorithm that citizens seek to understand. Their job is to construct simpler models of how the underlying algorithms work – simple enough to be understandable to people, Zerilli and Gavaghan say.
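One common way to build such a “bolted on” explanation system, though not necessarily the specific approach the report envisages, is to train a simple ‘surrogate’ model to imitate the complex model’s outputs and then read the surrogate’s logic directly. The sketch below assumes Python with scikit-learn; the dataset, models and feature names are purely illustrative.

```python
# Minimal sketch of a surrogate explanation system: a shallow, readable model
# is trained to mimic a complex model's predictions. All names here are
# illustrative, not taken from the New Zealand report.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# A complex "black box" decision system (stands in for a government AI tool).
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The explanation system: a shallow tree fitted to the black box's outputs,
# simple enough for a person to follow branch by branch.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# How faithfully the simple model reproduces the complex one, and its rules.
print("Fidelity to black box:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=[f"factor_{i}" for i in range(6)]))
```

The trade-off in such a design is between fidelity (how closely the simple model tracks the complex one) and intelligibility (how easily a person can follow it); a deeper surrogate is more faithful but harder to read.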

“We believe explanation systems will be increasingly important as AI technology advances.”

A final type of transparency relates to public access to information about AI systems used in government.

The researchers argue that people should know what AI systems their government is using, as well as how they perform. Systems should be regularly evaluated and results should be made available to the public.

AI and the law

In their report on the research, Zerilli and Gavaghan comment on how well New Zealand law currently handles transparency issues. They note that although the nation does not have laws specifically tailored for algorithms, some are relevant.

“For instance, New Zealand’s Official Information Act provides a right to reasons for decisions by official agencies, and this is likely to apply to algorithmic decisions just as much as human ones.

“This is in notable contrast to Australia which does not impose a general duty on public officials to provide reasons for their decisions.”

But even with an official information act, the authors say, such a right to reasons would come up short where decisions are made or supported by “opaque decision systems”.

“That is why we recommend that predictive algorithms used by government, whether developed commercially or in-house, must feature in a public register, must be publicly available for inspection, and – if necessary – must be supplemented with explanation systems.”

Human control and data protection

Another issue relates to human control. The writers say some of the concerns around algorithmic decision-making are best addressed by ensuring there is a “human in the loop”, with a person having the final sign-off on any important decision.

“However, we don’t think this is likely to be an adequate solution in the most important cases. A persistent theme of research in industrial psychology is that humans become overly trusting and uncritical of automated systems, especially when those systems are reliable most of the time.”

So just “adding a human” will not always produce better outcomes. In fact, in certain contexts, human involvement may offer only false reassurance, making AI-assisted decisions less accurate, the researchers say.

* In its 2018-19 budget, the Australian government allocated AU$30 million (about US$21 million) to enhance Australia’s efforts in artificial intelligence and machine learning. The funding includes the development of a national AI Ethics Framework, a technology ‘roadmap’ and a set of standards.