
UN: Protect Rights in Welfare Systems’ Tech Overhaul

Automating Benefits, Services Threatens Welfare Rights, Privacy

Demonstrators march outside the US Capitol during the Poor People's Campaign rally in Washington, DC, June 23, 2018. © 2018 AP Photo/Jose Luis Magana

(New York) – Governments should heed the call of the United Nations’ leading expert on poverty to fully integrate human rights protections into their efforts to digitize and automate welfare benefits and services, seven human rights groups said today.

In a report released this week, the UN special rapporteur on extreme poverty and human rights, Philip Alston, warns that the rapid digitization and automation of welfare systems is hurting the poorest and most vulnerable people in society. Although governments have pledged to use these technologies to create more equitable and inclusive welfare programs, Alston found that the technologies have been used in ways that “surveil, target, harass, and punish beneficiaries.”  

“The UN expert’s findings show that automating welfare services poses unique and unprecedented threats to welfare rights and privacy,” said Amos Toh, senior artificial intelligence and human rights researcher at Human Rights Watch. “Using technology to administer welfare carries real risks and is no substitute for rights-based reforms that safeguard the dignity and autonomy of society’s most vulnerable people.”

The human rights groups are Access Now, AlgorithmWatch, Amnesty International, Child Poverty Action Group, Human Rights Watch, Irish Council for Civil Liberties, and Privacy International.

In the first global UN survey of digital welfare systems, Alston found that governments increasingly rely on automated decision-making and other data-driven technologies to verify the identity of welfare beneficiaries, assess their eligibility for various services, calculate benefit amounts, and detect welfare fraud. But the use of these technologies can cause serious harm to human rights, the groups said.

The automation of key welfare functions without sufficient transparency, due process, and accountability raises the specter of mass violations of welfare rights. In the United Kingdom, errors in the Real Time Information System, which calculates benefits payments based on earnings information reported to the tax authority, have caused potentially catastrophic delays and reductions in benefit payments for impoverished families. Design flaws in automated fraud detection systems in Australia and the United States have also triggered debt notices to scores of beneficiaries, wrongfully accusing them of welfare fraud.

“Automated decision-making should be made more transparent, as highlighted by the rapporteur, in three important ways,” said Matthias Spielkamp, executive director of AlgorithmWatch. “Citizens need to be able to understand what policies are implemented using algorithms and automation. The administration has to keep a register of all complex automation processes it uses that directly affect citizens. Also, there needs to be transparency of responsibility, so that people know who to contact to challenge a decision.”

The development of digital identity systems to screen welfare beneficiaries also increases the risk of unnecessary and disproportionate surveillance, and attendant risks to people’s security. In India, Human Rights Watch has found that the government’s mandatory biometric identification project, Aadhaar, imposes invasive data collection requirements as a condition for getting allotments of subsidized food grains and other essential public services.

In Kenya, Amnesty International has raised concern about the lack of adequate privacy protections and independent oversight in the national biometric identification system, Huduma Namba. Registration with the system is a condition of accessing welfare benefits and other government services.

“Before forcing entire populations into using digital identity programs, governments must first ask themselves, ‘Why ID?’, and prove these systems are necessary and will actually provide the intended benefits,” said Peter Micek, general counsel at Access Now. “With ill-conceived digital identity programs, authorities force communities to give up fundamental rights such as privacy in exchange for their rights to food, shelter, and well-being.”

Alston found that government agencies around the world undertake a wide range of “crucial decisions to go digital” without meaningful transparency or even a legal basis for doing so, denying “opportunities for legislative debate and for public inputs into shaping the relevant systems.”

“Independent oversight findings of illegality are being completely ignored,” said Elizabeth Farries, information rights program manager of the Irish Council for Civil Liberties. “In Ireland, the government refuses to halt its compulsory rollout of the biometric Public Services Card for a wide range of services, despite being ordered to stop by the Irish Data Protection Commissioner.” 

The groups also endorsed the UN expert’s recommendation that governments should establish laws ensuring that the private sector incorporates transparency, accountability, and other human rights safeguards in the development and sale of technologies to facilitate the delivery of welfare services and benefits.

"Whilst governments have been the ones increasingly pushing for digital welfare policies, we must also consider the other stakeholders driving this agenda,” said Alexandrine Pirlot de Corbion, director of strategy at Privacy International. “As pointed out by the special rapporteur, the private sector plays a role, but we maintain that we must also address the role of investment structures, such as the World Bank and the World Economic Forum, and leading funders. All those in the ecosystem must be held to account to protect people and to ensure they can live with dignity and autonomy, free from undue surveillance and exploitation. ”
 
