
Article: Tuesday, 14 January 2025

Imagine being a single mother of three, struggling to make ends meet as health issues force you out of work and lead you into financial hardship. But instead of finding support, you are met with suspicion. An algorithm labels you as ‘fraudulent’ and flags you as a potential risk. Suddenly, doors start closing – you’re denied debt relief, turned away from job opportunities, and find yourself trapped in a digital limbo with no easy way to clear your name. This is the reality for one woman whose story highlights a growing issue: the power of AI to make life-altering decisions based on flawed systems or incomplete data. In her case, a single algorithmic label had devastating consequences, complicating her access to legal recourse and undermining her ability to secure a fair trial. The implications are clear: when AI goes wrong, it fails real people.

AI is reversing the burden of proof

These insights emerged from a roundtable discussion at the Reshaping Work 2024 Conference in Amsterdam, where experts gathered to dissect the misuse of AI in government decision-making. A central case study was the recent Dutch Child Benefits Scandal (the toeslagenaffaire), in which AI systems were used to detect fraud. However, instead of safeguarding public funds, these systems disproportionately flagged vulnerable citizens – particularly those facing health challenges – as fraud risks. The systems’ opacity made it nearly impossible for these individuals to challenge the decisions made against them, turning the task of proving one’s innocence into a Kafkaesque nightmare. In recent developments, the Dutch Ministry of Finance initiated conversations about compensating those affected by these wrongful labels. However, this move raises further questions: how can governments ensure AI systems do not unfairly target vulnerable citizens in the first place? And how can individuals be empowered to challenge AI-driven decisions that impact their lives?

Beyond this specific case, the broader implications of AI misuse by governments are becoming increasingly evident:

  • Lack of recourse: while there are mechanisms for citizens to seek compensation for wrongful AI-based decisions, these processes remain difficult to access.
  • Shifting burden of proof: AI systems are effectively transferring the burden of proof onto citizens, who are left to navigate complex procedures to clear their names.

Governments must take proactive measures to address the shortcomings of AI systems and protect citizens from unintended harm. Here, we outline the concrete strategies from our discussion for improving transparency, accountability, and citizen empowerment, and for preventing a recurrence of cases like the Dutch Child Benefits Scandal.

1. Increase transparency in AI systems
  • Public AI registers: create accessible registers detailing AI systems used in public decision-making, including their purpose and the criteria they apply.
  • Citizen notifications: notify individuals when an AI flags them or makes significant decisions affecting their lives.
  • Open algorithm scrutiny: allow third-party experts or citizen panels to audit AI systems to identify biases and errors.
2. Ensure built-in human oversight
  • Human review as a failsafe: require human checks before finalizing critical decisions that affect resources or legal rights.
  • Manual review for sensitive cases: implement additional human verification for sensitive topics like fraud detection to prevent unfair targeting.
3. Strengthen accountability mechanisms
  • Algorithmic accountability laws: strengthen regulations, including compensation rules, that hold governments and developers accountable for AI-caused harm.
  • Clear redress processes: develop accessible channels for citizens to challenge AI decisions, with legal support to navigate these systems.
  • Independent oversight: establish bodies to audit AI use in public services, ensuring compliance and ethical standards.
4. Design AI with ethical standards
  • Bias testing and correction: regularly test AI systems for biases, particularly those affecting vulnerable groups, and address issues promptly (a minimal example of such a test follows this list).
  • Inclusive design: engage diverse stakeholders, including affected communities, in the AI design process to enhance fairness and inclusivity.
5. Empower citizens through education and resources
  • Digital literacy campaigns: educate the public on their AI rights and protection mechanisms, focusing on marginalized communities.
  • Support for legal aid: fund organisations specialising in AI rights to assist individuals in contesting wrongful AI outcomes.
6. Promote international standards and collaboration
  • Align with GDPR and AI liability laws: strengthen global frameworks for AI accountability, leveraging existing EU regulations.
  • Share best practices: foster international collaboration to create consistent guidelines on ethical AI use and accountability.
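
To make the bias-testing recommendation concrete, below is a minimal sketch in Python of one common check, the demographic parity difference: comparing how often a model flags cases across demographic groups. Everything here – the data, group labels, function names, and the 0.1 warning threshold – is an illustrative assumption for this article, not a description of any actual government system.

```python
# Minimal bias-audit sketch (illustrative only): compare how often a model
# flags cases as 'fraud' across demographic groups.
from collections import defaultdict

def flag_rate_by_group(records):
    """Return the fraction of flagged cases per group.

    records: iterable of (group_label, was_flagged) pairs.
    """
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, was_flagged in records:
        total[group] += 1
        flagged[group] += int(was_flagged)
    return {group: flagged[group] / total[group] for group in total}

def demographic_parity_difference(rates):
    """Largest gap in flag rates between any two groups (0 means parity)."""
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: (group, model_flagged_as_fraud).
records = [
    ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False), ("group_b", False),
]

rates = flag_rate_by_group(records)
gap = demographic_parity_difference(rates)
print(f"Flag rates per group: {rates}")
print(f"Demographic parity difference: {gap:.2f}")

# The 0.1 tolerance is an arbitrary illustration; real audits would set
# thresholds through policy and test them for statistical significance.
if gap > 0.1:
    print("Warning: groups are flagged at substantially different rates.")
```

Even this simple rate comparison, run routinely on audit logs, could help surface the kind of disproportionate flagging described above; real audits would complement it with finer-grained metrics, such as error-rate gaps between groups.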

Algorithms increasingly determine the course of citizens' lives. By adopting these recommendations, governments can ensure that AI serves the public good rather than becoming a tool of discrimination and injustice. The path forward requires bold action to regulate and oversee AI systems. Only then can we prevent AI from perpetuating or even worsening social inequalities, and ensure that it genuinely benefits all members of society.

Eric aan de Stegge

Attorney, JAW Advocaten

Tomislav Karacic

Assistant Professor, London School of Economics

Writer and global policy expert based in Berlin, and Reshaping Work Fellow

Angela Samson

Executive Member of the Christian National Trade Union Federation (CNV) Member Council and a government advisor, and target group member of CNV’s Anders Actieven
