
Why African Countries Are Using Data Protection Laws to Regulate AI
African countries are using data protection laws to regulate AI, creating faster and locally adapted policies as artificial intelligence adoption grows.
Umar Mayowa | 20 Mar. 2026

Instead of waiting for dedicated AI laws, governments across Africa are turning to existing data protection frameworks to manage how artificial intelligence is used.
As AI adoption spreads across industries, many African countries are choosing a practical route to oversight. Rather than building entirely new legal regimes, they are expanding data protection laws to address how AI systems collect, process, and use personal information.
This approach allows policymakers to respond faster to real-world issues without waiting years for comprehensive AI legislation. It also builds on legal structures that already exist, making enforcement more achievable in the short term.
A Practical Route to AI Oversight
Research from the Future of Privacy Forum shows that this pattern is becoming more common across the continent. A 2026 report examining several African countries describes it as a defining shift in how digital policy is evolving.
Earlier laws often reflected Europe’s General Data Protection Regulation (GDPR). Newer updates are starting to reflect local realities, including how AI is already being used in areas like lending, identity verification, and fraud detection.
Experts say data protection rules alone cannot cover every aspect of digital governance, but they provide a starting point that governments can build on.
How the Strategy Works in Practice
Several countries, including Nigeria, Kenya, South Africa, and Botswana, are updating their data protection laws to address issues tied to AI systems. These updates focus on areas such as automated decision-making, algorithmic accountability, and how data moves across borders.
Angola offers one of the clearest examples. Instead of drafting a separate AI law, the country is revising its existing data protection framework to include rules on automated decisions and transparency. Individuals will have the right to question decisions made entirely by algorithms, especially when those decisions affect access to services like credit.
This mirrors elements seen in Europe’s AI Act, but the rules are embedded within a broader data protection system rather than standing alone.
Other countries are taking a more indirect route. Nigeria is exploring how its data protection framework can apply to social platforms and developers, while Kenya is strengthening obligations for companies that manage user data. These changes influence how AI systems operate, even when the laws do not explicitly mention AI.
Real-World Risks Driving Policy Changes
The push for regulation is not theoretical. Studies have already highlighted how AI systems can produce biased outcomes. A 2025 research paper examining credit-scoring tools across Nigeria, Kenya, and South Africa found consistent disadvantages for women-led businesses.
In one case, a digital lending platform in Nigeria approved fewer loans for women despite their stronger repayment performance. Findings like these have raised concerns about fairness and accountability, pushing regulators to act.
For many governments, this makes data protection a practical way to address the impact of AI without needing entirely new legislation.
Enforcement Still Catching Up
While legal frameworks are expanding, enforcement remains uneven. Many early data protection laws introduced over the past decade were criticised for lacking clear definitions and strong oversight mechanisms.
Recent reforms aim to address those gaps. Botswana replaced its earlier law with a revised version in 2024 that strengthens regulatory independence and introduces stricter compliance requirements. Kenya has taken a gradual approach, refining its laws over time and considering new systems to handle disputes.
There are also signs that enforcement is becoming more active. In Nigeria, the National Data Protection Commission fined Multichoice ₦766 million in 2025 over unlawful data practices. In Kenya, the Office of the Data Protection Commissioner issued a multi-million shilling penalty against a school for mishandling images of minors.
Data Sovereignty Becomes a Central Concern
Beyond regulation, many African countries are also focused on who controls data in an AI-driven world. With global tech companies playing a major role in AI development, governments are working to ensure local data is governed in line with national interests.
This does not mean cutting off cross-border data flows. Instead, countries are identifying sensitive data that should remain within their borders while allowing other data to move more freely.
Algeria, for example, has introduced a classification system that separates different types of data and applies different rules to each. This allows the country to maintain control while still participating in global trade.
Balancing National and Regional Goals
Efforts are also underway to align policies across the continent. The African Union and the AfCFTA Secretariat are working on frameworks that encourage countries to adopt compatible rules.
The AU Data Policy Framework provides guidance on building systems that can work across borders, while the AfCFTA Digital Trade Protocol aims to support digital commerce within the region.
Even with these efforts, national priorities continue to shape how laws are implemented. The result is a patchwork of regulations that can complicate cross-border operations for businesses.
What Comes Next for AI Regulation in Africa
Using data protection laws as a gateway to AI oversight has helped governments respond quickly, but it is not a permanent solution. Dedicated AI legislation is already emerging in some countries.
Kenya introduced an Artificial Intelligence Bill in early 2026, and similar discussions are taking place in South Africa. More countries are expected to follow with their own frameworks.
For now, data protection remains the main tool shaping how AI is governed across Africa. The current approach shows that policymakers are not simply adopting external models but are adapting regulation to fit local needs and priorities.