
Right Not to Be Subjected to Automated Decision-Making Explained: How to Challenge Decisions Made by Algorithms


This article is part of our Data Subject Rights series, explaining individual rights under the NDPA, the GDPR, and other global data protection laws.

Algorithms increasingly decide who gets a loan, a job interview, insurance coverage, social media visibility, or even access to essential services. While automation can improve efficiency, it can also introduce serious risks — including bias, lack of transparency, and unfair outcomes. The Right Not to Be Subjected to Automated Decision-Making exists to protect individuals from decisions made solely by machines where those decisions have significant legal or personal consequences.

This article provides a comprehensive, practical explanation of this right under the Nigeria Data Protection Act (NDPA) and the GDPR, including when it applies, how to exercise it, real-world examples, limits, and what to do if an organization refuses to comply.

What Is the Right Not to Be Subjected to Automated Decision-Making?

The Right Not to Be Subjected to Automated Decision-Making allows individuals to object to, challenge, or request human involvement in decisions that are made solely by automated means — including algorithms, artificial intelligence (AI), and profiling systems — when those decisions produce legal effects or similarly significant impacts on them.

In simple terms, this right ensures that:

  • Important decisions about you are not made by machines alone
  • You can demand human review, explanation, and intervention
  • Organizations cannot hide behind algorithms to justify harmful outcomes

GDPR Perspective

Under Article 22 GDPR, individuals have the right not to be subject to decisions based solely on automated processing, including profiling, where the decision:

  • Produces legal effects (e.g., denial of credit or employment), or
  • Significantly affects the individual in a comparable way

There are limited exceptions, but even then, safeguards must exist — including the right to human intervention and the right to contest the decision. (gdprinfo.eu)

NDPA Perspective (Nigeria)

The Nigeria Data Protection Act (NDPA) aligns with this principle by requiring:

  • Fairness and transparency in data processing
  • Protection against decisions that cause unjustified harm
  • Accountability for automated systems used in profiling and decision-making

Organizations must ensure automated systems do not undermine individual rights and must provide mechanisms for review and redress. (ndpc.gov.ng)

What Counts as Automated Decision-Making?

Not all automated processes trigger this right. The key test is whether the decision is:

  1. Fully automated (no meaningful human involvement), and
  2. Legally or significantly impactful

Examples That Usually Qualify

Scenario | Why It Qualifies
Loan or credit approval | Affects financial rights
Job application screening | Impacts employment opportunities
Insurance risk scoring | Influences coverage and pricing
Digital lending blacklists | Restricts access to services
Automated account suspension | Affects access and reputation

Examples That Usually Do NOT Qualify

  • Spam filtering
  • Product recommendations
  • Website personalization
  • Chatbot responses

These are generally low-impact and do not significantly affect rights or freedoms.

Profiling and Automated Decisions: What’s the Difference?

Term | Explanation
Profiling | Automated analysis to predict behavior, preferences, or risks
Automated decision-making | A final decision made without human involvement
Significant effect | Material impact on rights, finances, access, or reputation

Profiling alone is not always prohibited — but profiling that leads to automated decisions with serious effects triggers this right.

Real-World Examples and Case-Style Scenarios

Example 1: Automated Loan Rejection

A fintech app automatically denies a loan based on algorithmic scoring, without any human review. The user invokes their right, demanding human reassessment and explanation of the criteria used.

Example 2: Job Application Filtering

An AI system screens CVs and rejects candidates automatically. An applicant requests human intervention after suspecting bias or unfair exclusion.

Example 3: Insurance Pricing Algorithms

A customer receives an unusually high premium calculated entirely by an algorithm. They request manual review and justification.

Example 4: Social Media Account Ban

An account is suspended automatically due to algorithmic moderation. The user challenges the decision and requests human oversight.

These scenarios illustrate why regulators treat automated decision-making as a high-risk processing activity. (gdprinfo.eu)

When Automated Decisions Are Allowed (Exceptions)

Organizations may rely on automated decision-making only if one of the following applies:

Exception | Condition
Contract necessity | Required to perform a contract
Legal authorization | Permitted by law with safeguards
Explicit consent | You clearly agreed to it

Even in these cases, organizations must implement safeguards such as the following (a short implementation sketch appears after this list):

  • Human intervention
  • Ability to express your point of view
  • Right to contest the decision
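
For organizations building such systems, these safeguards translate into a simple workflow rule: any decision with a legal or similarly significant effect must pass through a human reviewer before it becomes final, and the individual must be able to submit their point of view and trigger a reassessment. The Python sketch below is a minimal illustration of that idea only; the names AutomatedDecision, finalize, and contest are hypothetical and are not drawn from the NDPA, the GDPR, or any specific product.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative only: these class and function names are hypothetical and are
# not taken from the NDPA, the GDPR, or any particular library.

@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str                          # e.g. "loan_denied"
    significant_effect: bool              # legal or similarly significant impact?
    factors: List[str] = field(default_factory=list)   # inputs the system relied on
    subject_comments: Optional[str] = None              # the individual's point of view
    contested: bool = False
    human_reviewed: bool = False

def request_human_review(decision: AutomatedDecision) -> AutomatedDecision:
    # Placeholder for a real review queue: the reviewer must have authority
    # to change or overturn the outcome, not merely rubber-stamp it.
    decision.human_reviewed = True
    return decision

def finalize(decision: AutomatedDecision) -> AutomatedDecision:
    # Safeguard 1: high-impact decisions are never finalized by the machine alone.
    if decision.significant_effect:
        decision = request_human_review(decision)
    return decision

def contest(decision: AutomatedDecision, comments: str) -> AutomatedDecision:
    # Safeguards 2 and 3: record the individual's point of view and re-open the decision.
    decision.subject_comments = comments
    decision.contested = True
    return request_human_review(decision)
```

In practice, the review step would open a case for a qualified reviewer and every step would be logged so the organization can account for the final outcome.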

How to Exercise Your Rights Effectively

  1. Identify the Automated Decision
    Confirm that the decision was made without meaningful human involvement.
  2. Request Human Review
    Ask for manual reassessment by a qualified individual.
  3. Ask for an Explanation
    Request information about the logic involved, factors considered, and data sources used.
  4. Challenge the Outcome
    Present additional information or context that the algorithm may have ignored.
  5. Document Everything
    Keep records of communications and responses.

What Organizations Must Provide

Obligation | Description
Transparency | Explain automated decision logic clearly
Human intervention | Provide meaningful human review
Fairness | Prevent bias and discrimination
Accountability | Justify outcomes and correct errors

Failure to meet these obligations may lead to regulatory enforcement.
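
One way an organization might operationalize the transparency and accountability obligations above is to generate, for every automated decision, a record of the outcome, the main factors behind it, and a plain-language explanation that can be shown to the data subject on request. The Python sketch below is a minimal illustration under those assumptions; the function name build_explanation_record and all field names are hypothetical, not terms prescribed by the NDPA or the GDPR.

```python
import json
from datetime import datetime, timezone

# Illustrative only: build_explanation_record and its fields are hypothetical,
# not names prescribed by the NDPA, the GDPR, or any regulator.

def build_explanation_record(subject_id: str, outcome: str, main_factors: dict) -> str:
    """Produce a plain-language, auditable record of an automated decision.

    main_factors maps each input the system relied on to its approximate weight,
    e.g. {"repayment_history": -0.4, "income_verification": 0.2}.
    """
    record = {
        "subject_id": subject_id,
        "outcome": outcome,
        "decided_at": datetime.now(timezone.utc).isoformat(),
        "main_factors": main_factors,
        "explanation": (
            f"The outcome '{outcome}' was produced by automated processing. "
            f"The most influential inputs were: {', '.join(main_factors)}. "
            "You may request human review and contest this decision."
        ),
    }
    return json.dumps(record, indent=2)

# Example: an explanation record for an automatically declined loan application.
print(build_explanation_record(
    subject_id="DS-1042",
    outcome="loan_declined",
    main_factors={"repayment_history": -0.4, "income_verification": 0.2},
))
```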

Risks of Unchecked Automated Decision-Making

  • Algorithmic bias and discrimination
  • Lack of accountability
  • Inability to explain decisions
  • Systemic exclusion of vulnerable groups

European regulators have warned that automated systems can amplify existing social inequalities if left unchecked. (gdprinfo.eu)

Frequently Asked Questions (FAQs)

Q1. Can organizations use AI to make decisions about me?
Yes, but if the decision significantly affects you, it generally cannot be based solely on automated processing, and where an exception applies, safeguards such as human intervention must be in place.

Q2. What is “meaningful human involvement”?
A real person must have authority to review, change, or overturn the decision — not just rubber-stamp it.

Q3. Does this right apply in Nigeria?
Yes. The NDPA provides safeguards against unfair automated decision-making. (ndpc.gov.ng)

Q4. Can I complain if my request is ignored?
Yes. You may escalate to the Nigeria Data Protection Commission or pursue legal remedies. (gdprinfo.eu)

Why This Right Matters Today

As AI and automated systems expand across finance, employment, healthcare, and digital platforms, this right acts as a critical safeguard against invisible injustice. It ensures technology serves people — not the other way around.

Final Thoughts

The Right Not to Be Subjected to Automated Decision-Making protects individuals from being reduced to data points in opaque systems. Under the NDPA and GDPR, organizations must place human judgment, fairness, and accountability at the center of high-impact decisions.

Understanding and exercising this right allows you to challenge unfair outcomes, demand transparency, and ensure that algorithms do not silently determine your future. In a world increasingly shaped by machines, this right preserves a fundamental truth: decisions about people should not be made by machines alone.

Ikeh James, Certified Data Protection Officer (CDPO) | NDPC-Accredited

Ikeh James Ifeanyichukwu is a Certified Data Protection Officer (CDPO) accredited by the Institute of Information Management (IIM) in collaboration with the Nigeria Data Protection Commission (NDPC). With years of experience supporting organizations in data protection compliance, privacy risk management, and NDPA implementation, he is committed to advancing responsible data governance and building digital trust in Africa and beyond. In addition to his privacy and compliance expertise, James is a Certified IT Expert, Data Analyst, and Web Developer, with proven skills in programming, digital marketing, and cybersecurity awareness. He has a background in Statistics (Yabatech) and has earned multiple certifications in Python, PHP, SEO, Digital Marketing, and Information Security from recognized local and international institutions. James has been recognized for his contributions to technology and data protection, including the Best Employee Award at DKIPPI (2021) and the Outstanding Student Award at GIZ/LSETF Skills & Mentorship Training (2019). At Privacy Needle, he leverages his diverse expertise to break down complex data privacy and cybersecurity issues into clear, actionable insights for businesses, professionals, and individuals navigating today’s digital world.
