
The Fraud-Detection Business Has a Dirty Secret

The algorithm’s impact on Serbia’s Roma community has been dramatic. Ahmetović says his sister has also had her welfare payments cut since the system was introduced, as have several of his neighbors. “Almost all people living in Roma settlements in some municipalities lost their benefits,” says Danilo Ćurčić, program coordinator of A11, a Serbian nonprofit that provides legal aid. A11 is trying to help the Ahmetovićs and more than 100 other Roma families reclaim their benefits.

But first, Ćurčić needs to know how the system works. So far, the government has denied his requests to share the source code on intellectual property grounds, claiming that disclosure would violate the contract it signed with the company that actually built the system, he says. According to Ćurčić and a government contract, a Serbian company called Saga, which specializes in automation, was involved in building the social card system. Neither Saga nor Serbia’s Ministry of Social Affairs responded to WIRED’s requests for comment.

As the govtech sector has grown, so has the number of companies selling systems to detect fraud. And not all of them are local startups like Saga. Accenture—Ireland’s biggest public company, which employs more than half a million people worldwide—has worked on fraud systems across Europe. In 2017, Accenture helped the Dutch city of Rotterdam develop a system that calculates risk scores for every welfare recipient. A company document describing the original project, obtained by Lighthouse Reports and WIRED, references an Accenture-built machine learning system that combed through data on thousands of people to judge how likely each of them was to commit welfare fraud. “The city could then sort welfare recipients in order of risk of illegitimacy, so that highest risk individuals can be investigated first,” the document says. 
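Neither the original Accenture model nor Rotterdam’s later version has been published, but the ranking logic the project document describes can be sketched in a few lines of Python. Everything below (the attribute names, the weights, and the scoring function) is an assumption made for illustration, not the city’s actual code; the attribute categories echo those the investigation later found in the 2021 system.

```python
# Illustrative sketch only: the real models are not public, and these attributes,
# weights, and records are invented for demonstration.

def risk_score(person, weights):
    """Toy risk score: a weighted sum of encoded personal attributes."""
    return sum(weights[attr] * person[attr] for attr in weights)

# Hypothetical encoded records for a handful of welfare recipients.
recipients = [
    {"id": "A", "gender": 1, "speaks_dutch": 0, "mental_health_history": 1},
    {"id": "B", "gender": 0, "speaks_dutch": 1, "mental_health_history": 0},
    {"id": "C", "gender": 1, "speaks_dutch": 1, "mental_health_history": 0},
]

# Hypothetical weights; in a deployed system these would come from a machine
# learning model trained on the outcomes of past investigations.
weights = {"gender": 0.4, "speaks_dutch": -0.3, "mental_health_history": 0.5}

# "Sort welfare recipients in order of risk of illegitimacy, so that highest
# risk individuals can be investigated first," as the project document puts it.
for person in sorted(recipients, key=lambda p: risk_score(p, weights), reverse=True):
    print(person["id"], round(risk_score(person, weights), 2))
```

The sketch also makes the bias concern concrete: any nonzero weight attached to an attribute like gender or spoken language moves people up or down the investigation queue purely because of who they are.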

Officials in Rotterdam have said Accenture’s system was used until 2018, when a team at Rotterdam’s Research and Business Intelligence Department took over the algorithm’s development. When Lighthouse Reports and WIRED analyzed a 2021 version of Rotterdam’s fraud algorithm, it became clear that the system discriminates on the basis of race and gender. And around 70 percent of the variables in the 2021 system—information categories such as gender, spoken language, and mental health history that the algorithm used to calculate how likely a person was to commit welfare fraud—appeared to be the same as those in Accenture’s version.

When asked about the similarities, Accenture spokesperson Chinedu Udezue said the company’s “start-up model” was transferred to the city in 2018 when the contract ended. Rotterdam stopped using the algorithm in 2021, after auditors found that the data it used risked creating biased results.

Consultancies generally implement predictive analytics models and then leave after six or eight months, says Sheils, Accenture’s European head of public service. He says his team helps governments avoid what he describes as the industry’s curse: “false positives,” Sheils’ term for life-ruining occurrences of an algorithm incorrectly flagging an innocent person for investigation. “That may seem like a very clinical way of looking at it, but technically speaking, that’s all they are.” Sheils claims that Accenture mitigates this by encouraging clients to use AI or machine learning to improve, rather than replace, decision-making humans. “That means ensuring that citizens don’t experience significantly adverse consequences purely on the basis of an AI decision.” 
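To make the term concrete, here is a purely hypothetical tally; the figures are invented for illustration and are not drawn from any real deployment.

```python
# Hypothetical numbers chosen only to illustrate the term "false positive",
# treating every unconfirmed flag as a false positive for simplicity.
flagged = 1000     # people an algorithm flags for fraud investigation
confirmed = 150    # flagged cases where fraud is actually found
false_positives = flagged - confirmed

print(f"{false_positives} innocent people investigated "
      f"({false_positives / flagged:.0%} of all flags)")
```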

However, social workers who are asked to investigate people flagged by these systems before making a final decision aren’t necessarily exercising independent judgment, says Eva Blum-Dumontet, a tech policy consultant who researched algorithms in the UK welfare system for campaign group Privacy International. “This human is still going to be influenced by the decision of the AI,” she says. “Having a human in the loop doesn’t mean that the human has the time, the training, or the capacity to question the decision.” 
