Even Bigger Than GDPR: The EU AI Act of 2026 & The Return of Human Accountability
The EU AI Act, taking effect in 2026, marks a historic shift in global regulation, accountability, and leadership responsibility. Bigger than GDPR, it forces every organization using AI to prove human oversight, ethical governance, and traceable decision-making. In this article, I unpack what the Act means for business leaders, HR, compliance, and customer-facing teams; explore real cases like UnitedHealthcare’s AI controversy; and reveal the 7 essential habits leaders must build to create a culture of human accountability and readiness. Learn how to protect your organization, and your career, in the age of intelligent systems.
Omar Berrada
10/28/2025 · 8 min read


It is April 2026. Sofia, an HR Director in Amsterdam, receives an official letter. Subject: “EU AI Office. Notice of Non-Compliance.”
Her department’s AI hiring system has been found to disadvantage older candidates.
She didn’t design it, nor did she code it, but she approved its use without audit trails, bias tests, or oversight documentation.
Now, her company faces fines of up to €15M or 3% of global turnover, and potentially €35M or 7% if the system is ruled a “prohibited use.”
And Sofia? Well, she’s personally accountable.
Under the EU AI Act, ignorance is no longer a defense.
Accountability has officially returned, and it has a name and a face again.
“If GDPR was about data, this law is about decisions, and it makes people accountable for both.”
The EU AI Act is now in full effect.
And for the first time in decades, it’s not the algorithm that’s on trial; it’s the human who approved it.
The EU AI Act in plain language
Approved in 2024, phased in through 2025, and fully enforceable in 2026, the EU AI Act is the world’s first comprehensive legal framework for artificial intelligence.
Its purpose is straightforward: to make sure that as AI becomes embedded in every function of work and life, it remains safe, transparent, and accountable.
The law classifies AI systems by risk level: unacceptable, high, limited, or minimal.
Unacceptable systems (like social scoring or manipulative behavioral models) are banned outright.
High-risk systems, such as those used in hiring, lending, healthcare, or critical infrastructure, are allowed, but only under strict human oversight.
They must be traceable, explainable, and constantly monitored.
The penalties for non-compliance are not symbolic.
Organizations face fines of up to €35 million or 7% of global annual turnover, whichever is higher.
And just like GDPR, the law’s reach extends beyond Europe: if your company sells, hires, or processes data involving EU citizens, you fall under its jurisdiction, whether you’re based in Amsterdam or Austin.
But this time, the implications run deeper than financial or technical compliance.
The EU AI Act therefore doesn’t just regulate systems; it redefines human accountability.
Why this matters now: accountability is personal again
For years, AI adoption has outpaced human oversight.
We’ve delegated data, logic, and decisions to algorithms that most leaders barely understand.
If something went wrong, the blame was conveniently abstract: “the model failed,” “the vendor’s data was flawed,” “the system glitched.”
That era is ending.
Under the new law, accountability can no longer be outsourced. Not to Legal, not to IT, not to “the system.”
Every department that uses AI to make decisions, from HR to Finance to Marketing, now carries direct human responsibility for how that AI behaves.
If an AI tool discriminates, manipulates, or leaks personal data, regulators will not be asking how the system was designed.
They’ll ask whether a human leader exercised proper oversight.
And if the answer is no, the consequences fall squarely on that leader.
This marks a profound cultural shift: responsible AI is now a legal expectation of leadership.
Leadership in the age of intelligent systems
What this means in practice is that leadership decisions now have legal weight.
When a manager approves an AI-generated forecast, a marketing campaign, or a hiring recommendation, they’re not just approving a piece of work; they’re certifying a decision-making process. And that process can be audited.
This is the new leadership landscape:
Every AI-assisted output must be traceable.
Every decision must be explainable.
Every assumption must be reviewable.
In short, leadership has become the new compliance mechanism, and judgment is now part of governance.
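What might a traceable, explainable, reviewable decision look like in practice? Below is a minimal, hypothetical sketch in Python of the kind of record an organization could attach to every AI-assisted approval. The structure and field names are my own illustrative assumptions, not anything the Act prescribes.

```python
# Hypothetical sketch: a traceable record for one AI-assisted decision.
# Field names are illustrative assumptions, not terms defined by the EU AI Act.
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
import json


@dataclass
class AIDecisionRecord:
    system_name: str           # which AI system produced the recommendation
    model_version: str         # exact version, so the output stays traceable
    input_summary: str         # what the model was asked (no raw personal data)
    ai_recommendation: str     # what the system suggested
    human_reviewer: str        # the accountable person who signed off
    final_decision: str        # what was actually decided
    override_reason: str = ""  # filled in when the human departs from the AI
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_audit_json(self) -> str:
        """Serialize the record for an append-only audit log."""
        return json.dumps(asdict(self), indent=2)


# Example: a recruiter reviews, and overrides, an AI screening suggestion.
record = AIDecisionRecord(
    system_name="cv-screening-tool",
    model_version="2.3.1",
    input_summary="candidate 4821, senior analyst role",
    ai_recommendation="reject",
    human_reviewer="s.devries@example.com",
    final_decision="advance to interview",
    override_reason="career break explains experience gap; bias check flagged age signal",
)
print(record.to_audit_json())
```

The specific fields matter less than the principle: every AI-assisted decision should leave a record that a regulator, or your own legal team, can reconstruct after the fact.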
The global ripple effect
It would be a mistake to think this is just a European issue.
The same accountability wave is already forming across North America.
In Canada, the proposed Artificial Intelligence and Data Act (AIDA) mirrors the EU’s high-risk system framework.
It would require companies to assess the impact of AI on fairness, bias, and privacy, and to document how those risks are mitigated.
In the United States, the Blueprint for an AI Bill of Rights and the 2023 Executive Order on Safe, Secure, and Trustworthy AI already emphasize transparency, explainability, and fairness as foundational standards.
Federal agencies are being directed to issue guidance that, in practice, mirrors much of the EU’s philosophy.
The message is clear: AI accountability is going global.
Multinational firms will standardize compliance to the strictest region, and that region is Europe.
So even if you’ve never done business with the EU, your organization will eventually operate by its rules.
The return of human accountability
For decades, leadership evolved toward efficiency.
We measured performance by speed, scale, and optimization. But as AI automates execution, the true differentiator is who can remain accountable when systems make decisions at scale, not who can deliver faster.
The EU AI Act reintroduces a principle many had quietly forgotten: technology does not absolve judgment; it amplifies its consequences. Especially when we make a habit of outsourcing our thinking and decision-making to AI systems that still make frequent mistakes and “hallucinate” regularly.
This is a reality many chose to ignore: when a machine executes a flawed command, the failure is still human. The law simply makes that explicit.
These measures are about protecting trust, not about punishing or resisting progress.
Because when trust collapses, in hiring, in financial modeling, or in public safety, no technology can replace it.
Role-specific risks and implications
Every function that touches data or decision-making will feel this law differently, but the thread is the same: explainability, documentation, and ethics.
HR:
Recruitment algorithms and performance tools must demonstrate fairness and privacy compliance. If bias slips through, leaders will need to prove they validated the data and the method, not just the result. Even when a third-party provider or developer manages the tool, responsibility still lies with the organization using it.
Marketing:
Generative AI tools must not manipulate audiences or misuse personal data. If customer information leaks into model training, the damage is reputational and regulatory.
Sales & Client-Facing Roles:
These functions often handle sensitive data, such as client profiles, preferences, or financial details, that can easily be fed into AI platforms for convenience. Doing so without consent or protection may constitute a compliance breach.
Finance & Operations:
Automated forecasting, credit scoring, or optimization models must now have human checkpoints. If an algorithm influences investment or supply decisions, the responsible leader must be able to explain how.
No department is immune. If you use AI, you are now part of the accountability chain.
Let’s look at a recent example that made big waves.
Case Study: UnitedHealthcare & When Automation Crossed a Line
In 2023, UnitedHealth Group and its subsidiary NaviHealth became the center of a growing controversy over the use of artificial intelligence in medical decision-making.
Their AI tool, nH Predict, was designed to streamline post-hospital care planning for elderly patients under Medicare Advantage programs.
Instead, it sparked allegations that would redefine how accountability in automated systems is understood.
The lawsuit claimed that nH Predict overrode doctors’ clinical recommendations, prematurely ending coverage for vulnerable patients who still needed care.
Internal documents cited in the case suggested that the company knew of accuracy issues: roughly 90% of the model’s denials were allegedly overturned on appeal.
A 2024 U.S. Senate report later found that AI-assisted denials for post-acute care had more than doubled between 2020 and 2022.
In February 2025, a federal judge allowed a class-action lawsuit to proceed, focusing on breach of contract and lack of good faith.
The case highlighted not only legal exposure but also the ethical vacuum that can emerge when algorithmic efficiency overrides human judgment.
Tragedy had already struck.
In December 2024, UnitedHealthcare CEO Brian Thompson was shot and killed in Manhattan, outside the hotel where the company’s investor conference was being held.
While authorities have not confirmed a direct connection between the shooting and the AI-related lawsuit, the incident reignited a public conversation about trust, accountability, and the human toll of corporate decisions that feel dehumanizing.
The parallel between the lawsuit and the tragedy was impossible to ignore: when people feel dismissed, powerless, and wronged by systems they cannot question, consequences can spill far beyond spreadsheets and audits.
This case serves as a warning to every executive and manager adopting AI tools today: you cannot automate empathy or accountability.
Because when trust collapses, no algorithm can repair what’s been lost.
The positive side and what this law gets right
It’s easy to see regulation as friction, but this one forces the maturity that technology demands.
The EU AI Act compels organizations to rebuild trust, transparency, and traceability, not as slogans, but as operational standards.
It also corrects a cultural imbalance. For too long, speed has been valued over reflection. Now, compliance requires both.
The Act is here to refine automation and to make it safer and more ethical.
It reminds us that progress without accountability is a major risk disguised as efficiency.
The Human Edge in an age of oversight
This new era does need new rules, and they have arrived: guardrails that will amplify the benefits of these new capabilities.
To lead responsibly in an AI-driven world, professionals need six enduring capabilities, what I call the Human Edge skills:
Analytical Thinking: Seeing patterns, bias, and consequences that machines miss.
Creativity: Framing problems AI can’t define.
Emotional Intelligence: Building trust and understanding across hybrid teams.
Influence & Mentorship: Guiding others to think critically, not reactively.
Adaptability: Learning faster than technology evolves.
Ethical Judgment: Balancing efficiency with consequence, the essence of responsible leadership.
The irony is that the most advanced law on AI doesn’t celebrate technology; it elevates humanity.
The very traits that once distinguished great leaders are becoming the foundation of compliance itself.
Seven habits to build a culture of accountability before you are forced to
To prepare your organization before the law enforces maturity, start here:
1. Mask sensitive data by default. No AI tool should ever see information you wouldn’t disclose publicly (see the sketch after this list).
2. Keep outputs in approved storage. Avoid shadow use of generative tools that leak data outside your control.
3. Add a two-minute validation check. Before approving any AI-generated output, verify sources, numbers, and assumptions.
4. Maintain an AI use log. Record which tools are used, who uses them, and for what purpose (see the sketch after this list).
5. Run monthly red-team drills. Test for bias, hallucination, and data leakage. Treat it like fire safety for intelligence.
6. Train managers on explainability. Equip them to interpret AI logic and know when to override the system.
7. Build your Human Edge. Strengthen the analytical, creative, emotional, adaptive, influential, and ethical capabilities that make accountability natural, not forced.
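To make habits 1 and 4 concrete, here is a minimal sketch in Python. The redaction patterns, file path, and log format are illustrative assumptions on my part; a production setup would rely on a vetted PII-detection library and centralized, access-controlled storage.

```python
# Hypothetical sketch of habits 1 and 4: mask sensitive data before it
# reaches an AI tool, and keep a simple append-only log of AI usage.
# The patterns and paths below are illustrative, not a vetted PII solution.
import re
import json
from datetime import datetime, timezone

# Naive redaction patterns for emails, phone numbers, and IBAN-like strings.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d \-()]{7,}\d"), "[PHONE]"),
    (re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"), "[IBAN]"),
]


def mask_sensitive(text: str) -> str:
    """Habit 1: never let an AI tool see data you wouldn't disclose publicly."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text


def log_ai_use(tool: str, user: str, purpose: str,
               path: str = "ai_use_log.jsonl") -> None:
    """Habit 4: record which tools are used, by whom, and for what purpose."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "user": user,
        "purpose": purpose,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


# Example: a prompt is masked before being sent to any external model.
prompt = "Summarize the complaint from jan.smit@example.com, phone +31 6 1234 5678."
print(mask_sensitive(prompt))
log_ai_use(tool="gpt-style-assistant", user="s.devries", purpose="complaint summary")
```

Crude as this is, it is the shape of what regulators will ask for: evidence that sensitive data was protected before it reached a model, and a record of who used which tool for what.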
I provide much more detail and guidance on how to build these elite human capabilities in my free ebook “How to Stay Relevant When AI Is Changing Everything.”
These habits form the backbone of what regulators call traceability and what leaders should recognize as trust.
The rebirth of accountable leadership
The EU AI Act is redefining leadership in the intelligent era, and it shouldn’t be perceived as resistance to, or fear of, technology.
It closes the gap between what technology can do and what humans should do.
We are entering a time when leadership will be measured less by authority and more by how responsibly we guide intelligence, human and artificial alike.
For some, this will feel like a burden.
For others, it will be a moment of clarity: the return of human accountability as the highest form of leadership.
And in that shift lies an opportunity: to rebuild trust, redefine relevance, and lead progress with integrity instead of fear.
Because the future won’t remember who adopted AI the fastest.
It will remember who used it responsibly enough to keep humanity at the center of progress.
Download the free ebook:
“How to Stay Relevant When AI Is Changing Everything” HERE, and learn the six Human Edge capabilities that turn responsibility into resilience.
Thank you for reading, and until next time, stay focused and irreplaceable, my friends.


