FRANCESCO RODELLA | Tungsteno
On the eastern edge of Europe, there is a place where the pace of technological transformation is particularly rapid: Estonia, a country that has already pioneered virtual public services such as Internet voting and fully digital management of administrative procedures. Now the government is also considering using artificial intelligence (AI) to assist judges, with the aim of easing their workload and automating some processes. It is one of the most striking current projects to embed algorithms in public decision-making, an idea that fascinates but also raises ethical questions: will we be able to use machines to improve our societies without them escaping our control?
Estonia's plan for its judicial system became known last year. The digital magazine Wired explained that the intention of officials in this Baltic country is to implement AI with pilot tests to resolve disputes of less than 7,000 euros in value, thereby reducing the backlog of cases in the courts. The basic operation of the system would be as follows: the parties upload the documentation to a digital platform, the algorithm studies the case and issues a ruling, which can then be appealed before a human judge.
But can we talk about bona fide "robot judges", as some suggest? Viljar Peep, a senior official in the Estonian Ministry of Justice, denies this. "The expression 'AI judge' is misleading," he says. "We are expanding the automation of court procedures, which includes the use of AI. We can talk about a robot assistant to the judge, but not a robot-judge. AI will never replace a judge."
The algorithms, according to the plans of the Estonian Ministry of Justice, will serve to "simplify work processes," for example, creating transcripts of hearings. More generally, the idea is to "move from document processing to data processing," Peep says. "As a result, the machine will be able to read the work of judges and court clerks," he adds. "These changes will give us new opportunities to analyse the information."
Algorithms already tested to calculate the risk of criminal recidivism or to predict crimes have shown significant racial biases. Credit: Wikimedia Commons.
A "global" movement
This project, for which Estonia has not yet set official timelines, is not the only one in the public domain. "There is a worldwide movement in the use of AI techniques for public administrations," says Nuria Oliver, who holds a Ph.D. from the Media Lab at MIT (Massachusetts Institute of Technology). Among the most developed fields at present, she indicates, are justice and public security (for example, calculating the risk of recidivism or attempting to predict crimes) and health, an area in which algorithms can analyse enormous amounts of medical information to make diagnoses, among other things.
It is precisely the availability of large "unstructured" databases, such as "images, videos, text, sensor data or medical tests," that is one of the three key ingredients boosting the use of AI in these areas, says Oliver. When this element is combined with the growing availability of "low-cost computing" and "highly complex and sophisticated machine learning methods," it opens up a "great opportunity" to "help us make decisions that affect thousands or millions of people," she adds.
What advantages do machines bring to us in such sensitive contexts? "History often reminds us of the fact that we humans are not perfect at making decisions," answers Oliver. "Algorithms are in principle not susceptible to corruption, they aren’t selfish, they don’t get tired, they don’t have a bad day."
With the introduction of AI into the judicial system, institutions such as the European Commission have set out ethical principles to govern this activity, avoid biases and establish limits. Credit: Markus Spiske.
The great ethical dilemma
That is the theory, but it is not always so simple in practice. In 2016, a controversial case came to light: that of COMPAS, an algorithm used in U.S. courts to estimate the probability of a defendant reoffending, thereby providing additional information for each case. An investigative report by ProPublica showed that the program was racially biased: black defendants who did not go on to reoffend were far more likely than white defendants to be wrongly labelled high risk, while white defendants who did reoffend were more often wrongly rated low risk.
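The kind of disparity ProPublica documented can be illustrated with a toy calculation. The sketch below uses entirely invented synthetic records, not COMPAS data: it simply shows how one compares false positive rates (non-reoffenders wrongly flagged as high risk) across two hypothetical groups.

```python
# Illustrative only: synthetic records, not the actual COMPAS dataset.
# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, False), ("A", True, True),
    ("A", False, False), ("A", False, True),
    ("B", True, True), ("B", False, False), ("B", False, False),
    ("B", True, False), ("B", False, True),
]

def false_positive_rate(group):
    """Share of non-reoffenders in a group wrongly flagged as high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for g in ("A", "B"):
    print(g, round(false_positive_rate(g), 2))  # A 0.67, B 0.33
```

In this made-up sample, a harmless member of group A is twice as likely as one of group B to be mislabelled high risk, even though the tool never sees the group explicitly at "prediction" time; the disparity is entirely a property of the outcomes.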
Cases of this type have also emerged more recently: a study published last October in the journal Science highlighted, for example, racial biases against black patients in a programme widely used in the US health system to determine which patients need extra resources. Another application that raises concern is the massive use of AI-based facial recognition to monitor the population in China, a tool that, according to journalistic investigations, also serves the government in repressing ethnic and religious minorities. Meanwhile, this technology is rapidly gaining ground in Europe as well.
In certain contexts, "the algorithms learn or even maximize existing biases in society," summarizes Oliver, who adds that problems can arise in different ways: baseline data that unfairly represent the groups involved (there can also be gender bias), opaque software whose results cannot be traced or explained, violations of personal information derived from public data, and interference from untruthful content are all plausible possibilities, according to the expert.
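The first mechanism Oliver mentions, a model absorbing biases from its training data, can be sketched with a deliberately simple toy model (all group names and figures below are invented for illustration). A frequency-based "learner" trained on skewed historical decisions simply reproduces them:

```python
# Toy sketch: a model trained on biased historical decisions reproduces them.
# Hypothetical counts: group "X" was historically denied far more often.
history = {
    "X": {"deny": 80, "approve": 20},
    "Y": {"deny": 20, "approve": 80},
}

def learned_decision(group):
    """Predict whichever outcome was most frequent for the group in the past."""
    counts = history[group]
    return max(counts, key=counts.get)

print(learned_decision("X"))  # deny    -> the historical bias is carried forward
print(learned_decision("Y"))  # approve
```

Real systems are far more sophisticated, but the underlying risk is the same: if the training data encode an unfair pattern, a model optimised to fit that data will tend to perpetuate it.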
Some institutions are taking this into account. One of them is the European Commission, which sets out in a guide the ethical principles needed to avoid such problems, and even poses limitations. Among its criteria, it indicates the importance of human oversight, the need to take into account the diversity of social groups involved and "the accountability of AI systems and their results."
These instructions, together with those of the Council of Europe specific to the judicial sphere, are what will guide the Estonian government in the implementation of AI in the courts, according to ministerial sources in the Baltic country. "Both documents emphasize the need to thoroughly assess the influences on basic rights before using AI systems," they explain. "The training of these systems should be carefully monitored at all times."
The research world is also seeking reassuring answers to the ethical dilemmas that are causing concern, says Oliver, who has "no doubt" that the positive impact of new technologies in the public sphere can be "enormous." One of the most interesting prospects, she suggests, is their application "in situations where the traditional provision of public services is very poor," such as in "developing countries or rural areas."
· — —
Tungsteno is a journalism laboratory to scan the essence of innovation. Devised by Materia Publicaciones Científicas for Sacyr’s blog.