Right Not to Be Subjected to Automated Decision-Making Explained: How to Challenge Decisions Made by Algorithms
This article is part of our Data Subject Rights series, explaining individual rights under NDPA, GDPR, and global data protection laws.
Algorithms increasingly decide who gets a loan, a job interview, insurance coverage, social media visibility, or even access to essential services. While automation can improve efficiency, it can also introduce serious risks — including bias, lack of transparency, and unfair outcomes. The Right Not to Be Subjected to Automated Decision-Making exists to protect individuals from decisions made solely by machines where those decisions have significant legal or personal consequences.
This article provides a comprehensive, practical explanation of this right under the Nigeria Data Protection Act (NDPA) and the GDPR, including when it applies, how to exercise it, real-world examples, limits, and what to do if an organization refuses to comply.
What Is the Right Not to Be Subjected to Automated Decision-Making?
The Right Not to Be Subjected to Automated Decision-Making allows individuals to object to, challenge, or request human involvement in decisions that are made solely by automated means — including algorithms, artificial intelligence (AI), and profiling systems — when those decisions produce legal effects or similarly significant impacts on them.
In simple terms, this right ensures that:
- Important decisions about you are not made by machines alone
- You can demand human review, explanation, and intervention
- Organizations cannot hide behind algorithms to justify harmful outcomes
Legal Basis Under GDPR and NDPA
GDPR Perspective
Under Article 22 GDPR, individuals have the right not to be subject to decisions based solely on automated processing, including profiling, where the decision:
- Produces legal effects (e.g., denial of credit or employment), or
- Significantly affects the individual in a comparable way
There are limited exceptions, but even then, safeguards must exist — including the right to human intervention and the right to contest the decision. (gdprinfo.eu)
NDPA Perspective (Nigeria)
The Nigeria Data Protection Act (NDPA) aligns with this principle by requiring:
- Fairness and transparency in data processing
- Protection against decisions that cause unjustified harm
- Accountability for automated systems used in profiling and decision-making
Organizations must ensure automated systems do not undermine individual rights and must provide mechanisms for review and redress. (ndpc.gov.ng)
What Counts as Automated Decision-Making?
Not all automated processes trigger this right. The key test is whether the decision is:
- Fully automated (no meaningful human involvement), and
- Legally or significantly impactful
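Purely as an illustration, the two-part test above can be expressed as a simple screening check. This is a sketch of the legal logic, not a compliance tool; the `Decision` record and field names are assumptions made for this example.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """A simplified record of how a decision about a person was made."""
    human_reviewed: bool      # did a person with real authority review it?
    legal_effect: bool        # e.g. credit or employment denied
    significant_effect: bool  # e.g. pricing, access to services, reputation

def triggers_adm_right(d: Decision) -> bool:
    """True if the decision likely engages Article 22 GDPR / NDPA safeguards:
    it must be BOTH fully automated AND legally or significantly impactful."""
    fully_automated = not d.human_reviewed
    impactful = d.legal_effect or d.significant_effect
    return fully_automated and impactful
```

For example, a loan denial issued with no human review satisfies both limbs of the test, while a spam filter satisfies neither.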
Examples That Usually Qualify
| Scenario | Why It Qualifies |
|---|---|
| Loan or credit approval | Affects financial rights |
| Job application screening | Impacts employment opportunities |
| Insurance risk scoring | Influences coverage and pricing |
| Digital lending blacklists | Restricts access to services |
| Automated account suspension | Affects access and reputation |
Examples That Usually Do NOT Qualify
- Spam filtering
- Product recommendations
- Website personalization
- Chatbot responses
These are generally low-impact and do not significantly affect rights or freedoms.
Profiling and Automated Decisions: What’s the Difference?
| Term | Explanation |
|---|---|
| Profiling | Automated analysis to predict behavior, preferences, or risks |
| Automated decision-making | A final decision made without human involvement |
| Significant effect | Material impact on rights, finances, access, or reputation |
Profiling alone is not always prohibited — but profiling that leads to automated decisions with serious effects triggers this right.
Real-World Examples and Case-Style Scenarios
Example 1: Automated Loan Rejection
A fintech app automatically denies a loan based on algorithmic scoring, without any human review. The user invokes their right, demanding human reassessment and explanation of the criteria used.
Example 2: Job Application Filtering
An AI system screens CVs and rejects candidates automatically. An applicant requests human intervention after suspecting bias or unfair exclusion.
Example 3: Insurance Pricing Algorithms
A customer receives an unusually high premium calculated entirely by an algorithm. They request manual review and justification.
Example 4: Social Media Account Ban
An account is suspended automatically due to algorithmic moderation. The user challenges the decision and requests human oversight.
These scenarios illustrate why regulators treat automated decision-making as a high-risk processing activity. (gdprinfo.eu)
When Automated Decisions Are Allowed (Exceptions)
Organizations may rely on automated decision-making only if one of the following applies:
| Exception | Condition |
|---|---|
| Contract necessity | Required to perform a contract |
| Legal authorization | Permitted by law with safeguards |
| Explicit consent | You clearly agreed to it |
Even in these cases, organizations must implement safeguards such as:
- Human intervention
- Ability to express your point of view
- Right to contest the decision
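One common way organizations implement these safeguards is a human-in-the-loop routing rule: favourable outcomes may remain automated, but adverse, significant outcomes are escalated to a human reviewer. The sketch below illustrates that pattern under assumed names and thresholds; it is not a prescribed legal procedure.

```python
def route_decision(score: float, threshold: float = 0.5,
                   significant: bool = True) -> str:
    """Route an algorithmic score.

    Approvals can stay automated, but adverse decisions with significant
    effects are escalated for meaningful human review (the 'human
    intervention' safeguard). Low-impact declines may remain automated.
    """
    if score >= threshold:
        return "approved"
    if significant:
        return "escalate_to_human_review"  # human intervention safeguard
    return "declined"
```

A fintech lender using this pattern would never issue a fully automated loan rejection: every adverse outcome passes through a reviewer with authority to overturn it.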
How to Exercise Your Rights Effectively
- **Identify the automated decision**: Confirm that the decision was made without meaningful human involvement.
- **Request human review**: Ask for manual reassessment by a qualified individual.
- **Ask for an explanation**: Request information about the logic involved, the factors considered, and the data sources used.
- **Challenge the outcome**: Present additional information or context that the algorithm may have ignored.
- **Document everything**: Keep records of all communications and responses.
What Organizations Must Provide
| Obligation | Description |
|---|---|
| Transparency | Explain automated decision logic clearly |
| Human intervention | Provide meaningful human review |
| Fairness | Prevent bias and discrimination |
| Accountability | Justify outcomes and correct errors |
Failure to meet these obligations may lead to regulatory enforcement.
Risks of Unchecked Automated Decision-Making
- Algorithmic bias and discrimination
- Lack of accountability
- Inability to explain decisions
- Systemic exclusion of vulnerable groups
Studies by European regulators show that automated systems can amplify existing social inequalities if left unchecked. (gdprinfo.eu)
Frequently Asked Questions (FAQs)
Q1. Can organizations use AI to make decisions about me?
Yes, but if the decision significantly affects you, it cannot be made solely by automated means and must include safeguards such as human review.
Q2. What is “meaningful human involvement”?
A real person must have authority to review, change, or overturn the decision — not just rubber-stamp it.
Q3. Does this right apply in Nigeria?
Yes. The NDPA requires fairness, accountability, and mechanisms for review and redress in automated decision-making. (ndpc.gov.ng)
Q4. Can I complain if my request is ignored?
Yes. You may escalate to the Nigeria Data Protection Commission or pursue legal remedies. (gdprinfo.eu)
Why This Right Matters Today
As AI and automated systems expand across finance, employment, healthcare, and digital platforms, this right acts as a critical safeguard against invisible injustice. It ensures technology serves people — not the other way around.
Final Thoughts
The Right Not to Be Subjected to Automated Decision-Making protects individuals from being reduced to data points in opaque systems. Under the NDPA and GDPR, organizations must place human judgment, fairness, and accountability at the center of high-impact decisions.
Understanding and exercising this right allows you to challenge unfair outcomes, demand transparency, and ensure that algorithms do not silently determine your future. In a world increasingly shaped by machines, this right preserves a fundamental truth: decisions about people should not be made by machines alone.