In this article
The Data (Use and Access) Act 2025 is one of the most significant changes to UK data law since Brexit. It commenced on 5 February 2026, and it fundamentally alters how automated decision-making is regulated in this country. If your organisation uses AI in any capacity, this legislation affects you.
The problem is that most of the coverage so far has come from law firms writing for other lawyers. The explanations are dense, heavily caveated, and not particularly useful if you are a business leader trying to understand what you actually need to do differently. This guide is the plain-English version.
What is the DUA Act
The Data (Use and Access) Act 2025 is a piece of UK primary legislation that reforms several areas of data law. It covers smart data schemes, digital identity, and electronic communications. But the provisions that matter most for AI are the changes to the UK GDPR, specifically the rules around automated decision-making.
The Act is part of the UK government's broader effort to create a data protection framework that supports innovation while maintaining meaningful safeguards. It replaces certain provisions of the UK GDPR that were inherited from EU law and that, in the government's view, were holding back the responsible adoption of AI and data-driven technologies.
This is not forthcoming legislation. It is live. The new rules apply right now to every organisation processing personal data in the UK.
What changed on 5 February 2026
The headline change is the replacement of the old automated decision-making (ADM) framework under UK GDPR Article 22. Under the previous rules, there was a general prohibition on solely automated decisions that produced legal or similarly significant effects on individuals, with limited exceptions. The DUA Act removes that prohibition and replaces it with a permission model that includes specific safeguards.
In practical terms, the old framework said: you cannot make automated decisions about people unless you meet one of a narrow set of exceptions. The new framework says: you can make automated decisions about people, provided you comply with a set of requirements designed to protect their rights.
This is a meaningful shift. It moves the regulatory posture from restriction to enablement with accountability. For businesses deploying AI systems that make or support decisions about individuals, this changes the compliance landscape considerably.
Automated decision-making: old rules vs new
The old position (pre-5 February 2026)
Under the inherited UK GDPR Article 22, there was a right not to be subject to solely automated decision-making that produced legal or similarly significant effects. Organisations could use such processing only if it was necessary for entering into or performing a contract, authorised by law, or based on the individual's explicit consent. In practice, this created significant uncertainty. Many organisations avoided automated decision-making entirely, or built in token human oversight to argue they were not caught by Article 22.
The new position (post-5 February 2026)
The DUA Act replaces Article 22 with a new framework that permits automated decision-making subject to safeguards. The key requirements under the new rules are:
- Transparency. Individuals must be informed when a significant decision has been made or materially supported by automated processing. This notification must be clear and must explain the decision.
- Right to contest. Individuals have the right to challenge automated decisions and to request human review. Organisations must provide a meaningful mechanism for this, not simply a form that disappears into a queue.
- Right to obtain information. Individuals can request information about how the automated decision was made, including the logic and factors involved. This does not require full algorithmic transparency, but it does require a meaningful explanation.
- Safeguards for special category data. Where automated decisions involve special category data (health, ethnicity, political opinions, and so on), additional protections apply. Organisations must demonstrate that appropriate safeguards are in place.
Key insight: The shift from prohibition to permission does not mean anything goes. It means organisations now have a clear legal pathway for automated decision-making, but that pathway comes with specific obligations around transparency, contestability, and accountability.
What this means for AI deployment
For organisations using AI to make or support decisions about people, the DUA Act changes the conversation. Previously, the legal ambiguity around Article 22 meant many organisations either avoided AI-driven decisions or structured them in ways that were technically compliant but practically awkward. The new framework provides clarity.
However, clarity cuts both ways. Under the old rules, many organisations could argue that their AI systems did not trigger Article 22 because a human was nominally involved. Under the new framework, the safeguards apply more broadly. If AI is materially involved in a significant decision, the transparency and contestability requirements apply regardless of whether a human rubber-stamps the output.
Who is affected
Any organisation that uses AI to process personal data in ways that affect individuals. This includes but is not limited to:
- HR departments using AI for recruitment screening or performance assessment
- Financial services firms using AI for credit scoring or fraud detection
- Insurance companies using AI for risk assessment or claims processing
- Healthcare providers using AI for diagnostic support or treatment recommendations
- Any business using AI-powered customer service that makes decisions about complaints, refunds, or account actions
If your AI system is involved in decisions that have a meaningful impact on people, the new safeguards apply to you.
Data protection impact
The DUA Act does not replace the fundamental principles of UK GDPR. Lawfulness, fairness, transparency, purpose limitation, data minimisation, accuracy, storage limitation, integrity and confidentiality, and accountability all remain in force. What the Act does is update the specific mechanism by which automated decision-making is regulated within that framework.
Organisations still need to conduct Data Protection Impact Assessments for AI systems that process personal data at scale or in ways that create risk for individuals. The ICO has indicated that it expects DPIAs to reflect the new DUA Act requirements, including explicit consideration of how transparency and contestability obligations will be met.
Record-keeping requirements are also affected. Under the new framework, organisations must be able to demonstrate that they notified individuals of automated decisions, that they provided mechanisms for contestability, and that they responded appropriately to challenges. This means audit trails are no longer optional for AI systems that make significant decisions. They are a regulatory requirement.
What businesses should do now
The DUA Act is already in force. Organisations that have not yet adapted their processes are already behind. Here are the practical steps to take.
1. Audit your automated decision-making
Identify every AI system or automated process that makes or materially supports decisions about individuals. Map what decisions are being made, what data is being used, and what the impact on individuals is. Many organisations will find they have more automated decision-making than they realised.
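Step 1 is essentially an inventory exercise. A hedged sketch of what that mapping might look like in code (the system names, decisions, and fields are invented for illustration):

```python
# Hypothetical inventory of automated decision points across an organisation.
# "significant" flags decisions with a meaningful impact on individuals.
systems = [
    {"name": "cv-screener", "decision": "shortlist candidates",
     "data": ["cv text", "assessment scores"], "significant": True},
    {"name": "spam-filter", "decision": "route inbound email",
     "data": ["message metadata"], "significant": False},
    {"name": "credit-model", "decision": "approve or decline credit",
     "data": ["income", "credit history"], "significant": True},
]

# Systems making significant decisions are the ones that need the DUA Act
# safeguards (transparency, contestability, audit trails) mapped onto them.
in_scope = [s["name"] for s in systems if s["significant"]]
assert in_scope == ["cv-screener", "credit-model"]
```

Even a spreadsheet-level inventory like this usually surfaces automated decision points that nobody had classified as such.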
2. Update your transparency mechanisms
Review how you inform individuals about automated decisions. The new framework requires clear notification when a significant decision has been made using automated processing. Generic privacy notices are not sufficient. The notification must relate to the specific decision.
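To illustrate the difference between a generic privacy notice and a decision-specific notification, a hypothetical notification generator might look like this (the wording, fields, and URL are placeholders, not a template from the Act or the ICO):

```python
# Hypothetical: builds a notification tied to one specific decision,
# naming the main factors and the route to human review.
def decision_notice(subject_name: str, decision: str,
                    main_factors: list[str], contest_url: str) -> str:
    factors = "; ".join(main_factors)
    return (
        f"Dear {subject_name}, a significant decision ({decision}) was made "
        f"using automated processing. Main factors: {factors}. "
        f"You may request human review at {contest_url}."
    )

notice = decision_notice(
    "A. Example",
    "credit application declined",
    ["affordability assessment", "recent missed payments"],
    "https://example.com/contest",
)
assert "automated processing" in notice
```

The key property is that every element is specific to the individual and the decision, which a boilerplate privacy notice cannot be.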
3. Build contestability processes
Establish a meaningful process for individuals to challenge automated decisions and request human review. This needs to be accessible, responsive, and genuinely capable of changing outcomes. A complaints form that nobody monitors does not meet the standard.
4. Update your DPIAs
Revise your Data Protection Impact Assessments to reflect the new DUA Act requirements. Pay particular attention to how your AI systems handle transparency, contestability, and special category data. If you have not conducted DPIAs for your AI systems, start now.
5. Strengthen your audit trails
Ensure your AI systems produce comprehensive audit logs that record what decisions were made, what data was used, what the reasoning was, and whether the individual was notified. These records are essential for demonstrating compliance under the new framework.
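A minimal sketch of such an audit trail, using a hash chain so that after-the-fact tampering is detectable. This illustrates the record-keeping idea only; it is not a production logging pipeline:

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical append-only audit log: each entry includes the hash of the
# previous entry, so any later modification breaks the chain.
class AuditLog:
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, decision_id: str, data_used: list[str],
               reasoning: str, subject_notified: bool) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {
            "decision_id": decision_id,
            "data_used": data_used,
            "reasoning": reasoning,
            "subject_notified": subject_notified,
            "at": datetime.now(timezone.utc).isoformat(),
            "prev": prev,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.append("dec-001", ["income", "credit history"],
           "declined: affordability below threshold", subject_notified=True)
assert log.verify()
```

Each entry captures the four things the framework asks you to demonstrate: what was decided, what data was used, what the reasoning was, and whether the individual was notified.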
The DUA Act does not create a free-for-all for automated decisions. It creates a clear framework with specific obligations. Organisations that treat it as a loosening of the rules, rather than a clarification of them, are likely to find themselves on the wrong side of the ICO.
How Other Me aligns
The DUA Act's emphasis on audit trails, transparency, and accountability aligns directly with how Other Me's enterprise platform is architected. For organisations looking to deploy AI within the new framework, the compliance infrastructure is built in rather than bolted on.
Other Me's patent-pending SCRS (Secure Context Retrieval System) produces comprehensive audit trails for every AI interaction. Every query, every data access, every response is logged with full provenance. When the DUA Act requires you to demonstrate what data was used in an automated decision and how it was reached, the records are already there.
Consent tracking is embedded at platform level. SCRS enforces scope-constrained retrieval, meaning AI systems can only access data that falls within defined boundaries. This is not a configuration option that someone might forget to enable. It is how the system operates by default. Data that should not be part of a decision is never retrieved in the first place.
For organisations subject to multiple regulatory frameworks, Other Me's enterprise platform is built around compliance with GDPR, SOC 2, and ISO 27001 standards. The DUA Act does not exist in isolation, and neither does the compliance infrastructure. Role-based access controls, data residency options, and the Dual-Gate architecture that prevents unauthorised data access all contribute to a governance posture that meets the new requirements without requiring separate tooling or manual processes.
Built for the new framework: Other Me's SCRS audit trails, consent tracking, scope-constrained retrieval, and multi-framework compliance (GDPR, SOC 2, ISO 27001) are designed for a regulatory environment where transparency and accountability are not optional. Learn more about Other Me Enterprise.
The DUA Act represents a maturing of UK data law. It acknowledges that automated decision-making is a reality and provides a framework for doing it responsibly. Organisations that invest in the governance infrastructure to meet these requirements will find that the new rules are workable. Those that treat compliance as an afterthought will find that the ICO's enforcement powers remain as robust as ever.
Pop Hasta Labs Ltd is registered at UK Companies House (No. 16742039). SCRS Dual-Gate architecture is the subject of UK Patent Application No. 2602911.6.