Otago University’s artificial intelligence research will help shed light on the effect of AI innovations on law and public policy in New Zealand, including some potentially harmful implications of the powerful and fast-developing technology.

The Otago initiative, Artificial Intelligence and Law in New Zealand, is funded in part by the Law Foundation and is a three-year multi-disciplinary project involving the Law Faculty and the Philosophy and Computer Science faculties.

The project team is led by Law Faculty Associate Professor Colin Gavaghan, who recognises the fascinating practical and ethical implications of artificial intelligence, as well as the potential dangers it poses.
The Public Use of AI
Part of the research focuses on the use of predictive analytics and algorithms in the public sector, where they are already deployed in the New Zealand criminal justice system, immigration, IRD and ACC.
Their use and continued development are among the issues to be examined. Professor Gavaghan is quick to point out that he is not opposed to these developments, noting that such tools are particularly well suited to processing quantities of data far beyond what any human brain can accomplish.
“But there are also concerns about the use of algorithms in these contexts. Some of these relate to the possibility of bias creeping into the system – either through the use of biased training or input data, or by some other means.
“This wouldn’t need to be deliberate, but without detailed and fairly regular checking, it’s a legitimate concern. Just think of the concern in the US about the use of the COMPAS tool in criminal justice decisions, and the revelations that it was far more likely to predict that black prisoners would re-offend than their white counterparts,” he told LawFuel.
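To illustrate the mechanism Gavaghan describes, here is a minimal Python sketch using entirely invented data, unconnected to COMPAS or any tool used in New Zealand. It trains a simple risk model on hypothetically biased historical labels and shows the model assigning different scores to otherwise-identical individuals: the bias arrives through the labels, with no deliberate intent anywhere in the code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)       # hypothetical protected attribute (0 or 1)
prior = rng.normal(0.0, 1.0, n)     # a genuinely relevant factor, identically distributed in both groups

# Invented "historical" labels that were recorded more often for group 1
# at the same level of the relevant factor: the bias lives in the labels.
p_label = 1.0 / (1.0 + np.exp(-(prior + 1.0 * group - 0.5)))
y = rng.random(n) < p_label

X = np.column_stack([prior, group])
model = LogisticRegression().fit(X, y)

# Two individuals identical on the relevant factor, differing only in group
pair = [[0.0, 0], [0.0, 1]]
print(model.predict_proba(pair)[:, 1])  # group 1 receives the higher "risk" score
```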
Transparency Issues
Transparency is another issue of concern to the team. “How can you challenge a decision if neither you nor anyone else understands the basis on which it was made?
“This might not be such an issue with the sorts of tools being used at the moment, where the variables and weightings are fairly visible, but as we move down the road into machine learning proper, it could become extremely difficult for anyone to really understand what’s going on inside the ‘black box.'”
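The contrast Gavaghan draws, between tools whose variables and weightings are fairly visible and models whose internals are not, can be made concrete. In this hypothetical Python sketch (invented data, not any system actually in use), a logistic regression exposes one weighting per variable that can be inspected and challenged, while a boosted ensemble of hundreds of trees offers no comparable summary of how it reaches a decision.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                      # three invented input variables
y = X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0

# A transparent tool: each variable's weighting can be read off and contested.
transparent = LogisticRegression().fit(X, y)
print("weightings:", transparent.coef_[0])

# A learned model: the decision is spread across hundreds of trees,
# with no single set of weightings to point to.
black_box = GradientBoostingClassifier(n_estimators=300).fit(X, y)
print("number of trees:", len(black_box.estimators_))
```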
Other jurisdictions face the same questions about how to handle these ethical and legal challenges. As part of the research, the team has been speaking to leading figures on AI issues in the EU and US about what they suggest should be done.
“The challenge, then, is how to get the best out of using these tools while avoiding the possible pitfalls.” Among the suggestions being examined, even the most intuitive safeguards have complications.
“For instance, the idea of always having a ‘human in the loop’ might sound reassuring, but there are doubts as to how well human beings can operate alongside high-tech systems. Will we become subject to ‘automation bias’, deferring to the machine rather than properly scrutinising it? Will we suffer from ‘decisional atrophy’ when we’re only rarely called upon to exercise certain skills?”
The issues may be vexed, but they are also vital. A ‘solution’ of some kind is necessary. “Our present suspicion is that some combination of these measures might be needed to alleviate the concerns around automated decisions, but we’re still a bit away from making our final recommendations.
We’ve been very pleased, though, to be approached by Government with a view to providing expert advice on this issue.”
The discussion on artificial intelligence and its ethical and other issues will continue to exercise human brains for some time – maybe until the robots do take over.
See Also: Legal AI: Coming Ready or Not
>> The Responsible Deployment of Artificial Intelligence
>> Artificial Intelligence and the Future of Legal Services