ISABEL RUBIO ARROYO | Tungsteno
Facial recognition has the potential to identify spies or deceased people and help refugees reunite with their families in wartime, but it also carries risks with potentially catastrophic consequences. In the face of the Russian invasion, the Ukrainian Ministry of Defence has turned to Clearview AI's database of 20 billion facial images. Having examined how satellites are mapping the conflict in Ukraine and what technologies are helping to replace Russian gas, we now look at the pros and cons of using facial recognition in warfare.
Technology to verify identities in war
When Russia invaded Ukraine, Hoan Ton-That, the CEO of Clearview AI, started thinking about how he could help in the conflict. "I remember seeing videos of captured Russian soldiers and Russia claiming they were actors," Ton-That told The New York Times. That's when he thought that with his facial recognition technology, the Ukrainians could verify the identity of potential spies and the deceased, as well as help reunite refugees with their families.
After offering Clearview AI's services to Ukraine for free, he created more than 200 accounts for users in five Ukrainian government agencies and translated his application into Ukrainian. Ukraine has since used the technology to identify Russian soldiers, alive or dead, and to verify that travellers in Ukraine are who they claim to be. Ukraine's strategies include identifying dead Russian soldiers and notifying their relatives. Ukrainian Deputy Prime Minister Mykhailo Fedorov believes this is the best way to make the Russian public aware of the cost of the conflict and "dispel the myth of a 'special operation' in which 'there are no conscripts' and 'no one dies'".
Ukraine uses facial recognition to identify dead Russian soldiers. Credit: DW Shift.
About 14 photos for every person on Earth
In the words of its creator, Clearview AI would be something like "a search engine for faces". "It kind of works like Google. But instead of putting in a string of words or text, the user puts in a photo of a face," explains Ton-That. As of today, the system has 20 billion faces and a large database of Russian citizens thanks to VK, the Russian Facebook.
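Clearview AI has not published the details of its system, but "a search engine for faces" is conventionally built by converting each face photo into a numerical embedding vector and then finding the database entries whose vectors are most similar to the query's. The sketch below illustrates that general idea only; the names, the tiny 4-dimensional "embeddings" and the similarity threshold are all invented for illustration.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two embedding vectors:
    # 1.0 means identical direction, values near 0 mean unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search_faces(query, database, top_k=3):
    # Rank every database entry by similarity to the query face
    # embedding and return the top_k closest candidates.
    scored = [(name, cosine_similarity(query, emb))
              for name, emb in database.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]

# Toy database: hypothetical identities mapped to made-up embeddings.
# A real system would use vectors with hundreds of dimensions produced
# by a neural network, and billions of entries.
db = {
    "person_a": [0.9, 0.1, 0.2, 0.3],
    "person_b": [0.1, 0.8, 0.7, 0.2],
    "person_c": [0.85, 0.15, 0.25, 0.35],
}
query = [0.88, 0.12, 0.22, 0.31]
print(search_faces(query, db))  # person_a ranks first, person_c second
```

The key point for what follows is that the output is a ranked list of candidates with similarity scores, not a yes/no answer: someone still has to decide how similar is similar enough.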
Every day the company collects more and more images. In fact, it has indicated to its investors that by early 2023 it expects to have 100 billion photos of faces, enough to ensure that "almost everyone in the world will be identifiable", according to a financial presentation accessed by The Washington Post. Those images equate to 14 photos for each of the seven billion people on Earth. Clearview AI explains that all of these images come from public websites, media outlets, mugshot websites, public social networks and other open sources.
All of this information, according to the company, "enables quicker identifications and apprehensions to help solve and prevent crimes, helping to make our communities safer". But its techniques may violate users' privacy rights. In fact, the UK Information Commissioner’s Office (ICO) fined Clearview AI more than £7.5 million (about 8.7 million euros) in May for collecting images from the internet to create a global facial recognition database. The agency has ordered the company to stop obtaining and using the personal data of UK residents that is publicly available on the web.
Clearview AI has 20 billion faces and a large database of Russian citizens. Credit: NOVA PBS Official.
A double-edged sword
As for the effectiveness of facial recognition software in warfare, there is conflicting evidence, as Felipe Romero Moreno, a lecturer at the University of Hertfordshire School of Law, tells The Conversation. Some studies indicate that the technology can identify deceased people as well as, or better than, a human. However, according to the US Department of Energy, the decomposition of a deceased person's face can reduce the accuracy of the program.
In addition, as Romero points out, some research suggests that fingerprints, dental records and DNA remain the most reliable identification techniques. "But these are tools for trained professionals, while facial recognition can be used by non-experts," says Romero. It should also be noted that this technology sometimes fails: it can incorrectly match images of two different people, or fail to match two photos of the same person. In Ukraine, the consequences of any possible mistakes with artificial intelligence could be disastrous: "An innocent civilian could be killed if they are misidentified as a Russian soldier." Clearview AI insists that its tool should complement, but not replace, human decision-making.
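The two failure modes Romero describes are two sides of the same decision: a system turns a similarity score into a match by comparing it against a threshold, and moving that threshold trades one kind of error for the other. The sketch below uses invented scores and thresholds purely to illustrate that trade-off, not any real system's settings.

```python
def classify_match(similarity, threshold=0.75):
    # Report a match only when the similarity score clears the threshold.
    return similarity >= threshold

# Hypothetical scores for two comparisons: a genuine match between two
# photos of the same person, and a lookalike who is a different person.
same_person = 0.82
different_person = 0.71

print(classify_match(same_person))                        # correct match
print(classify_match(different_person))                   # correctly rejected
# Lowering the threshold to catch more genuine matches also starts
# accepting the lookalike - a false match with potentially fatal stakes:
print(classify_match(different_person, threshold=0.70))   # false match
```

This is why experts insist the score should inform a human judgement rather than trigger an automatic pass/fail decision.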
Facial recognition systems often exhibit biases and sometimes fail. Credit: NEC.
Moreover, if the technology can be used to identify both living and dead enemy soldiers, it could also be incorporated into systems that use automated decision-making to direct lethal force. As philosophers Darian Meacham and Martin Gak, who research ethics in conflict zones, point out: "This is not a remote possibility". Just last year, the UN reported that an autonomous drone may have killed people in Libya in 2020, and there are suspicions that autonomous weapons are being used in the war in Ukraine.
While facial recognition technology has enormous potential in such conflicts to identify individuals, the risks involved should not be overlooked. As Conor Healy, a facial recognition expert at surveillance technology research group IPVM, told the BBC, it is important for Ukrainian forces to recognise that it is "not a 100% accurate way of determining whether someone is your friend or your foe". "It shouldn’t be a life-or-death technology where you either pass or fail, where you could get imprisoned or, god forbid, even killed," he concludes.
Tungsteno is a journalism laboratory to scan the essence of innovation. Devised by Materia Publicaciones Científicas for Sacyr’s blog.