EU eases AI rules: Digital Omnibus reshapes the AI Act and GDPR
The European Commission has unveiled a package of changes called Digital Omnibus that reshapes how the EU regulates artificial intelligence, privacy and digital services. The key points:
- parts of the AI Act dealing with “high-risk” AI systems are pushed back until the end of 2027;
- some provisions of the GDPR are relaxed, especially around using data to train AI models;
- rules on cookies and cyber-incident reporting are simplified.
Business groups welcome the package as a much-needed brake on over-regulation and a chance for Europe to catch up with the US and China in the AI race. Privacy advocates, on the other hand, describe it as a “major rollback” that gifts concessions to Big Tech.

What is the Digital Omnibus and what changes in the AI Act?
Digital Omnibus is an attempt to streamline and “lighten” a whole cluster of existing EU digital rules. The most controversial part touches the AI Act, the world’s first major law on artificial intelligence.
Until now, the plan was for strict rules on high-risk AI systems (for example facial recognition in public spaces, medical AI, credit scoring, hiring tools, policing systems) to start applying in August 2026. The Commission now proposes to move that deadline to December 2027.
In practice this means:
- more time for companies to adapt their AI products (documentation, risk assessments, testing);
- less risk of fines in the short term;
- but also a longer window in which sensitive AI applications operate under older or less precise rules.
For European AI startups and large players, this is short-term relief – two more years with less regulatory pressure. For users and digital-rights groups, it is a sign that the EU is less aggressive than before in protecting citizens from potentially harmful AI systems.
Softer GDPR: more data for training, fewer cookie pop-ups
The second major pillar of the package concerns privacy and data:
- clearer permission to use personal data for training AI models under a company’s “legitimate interest”, without having to ask for explicit consent for each individual use case;
- an attempt to reduce cookie fatigue – endless consent banners – by grouping some tracking technologies and simplifying consent flows;
- more centralised cyber-incident reporting, so companies do not have to deal with several national authorities at once.
The Commission argues that this does not lower the level of protection, but rather “clarifies and simplifies” enforcement and could save billions of euros in administrative costs over the coming years.
Critics warn that:
- it will become much easier to use massive datasets about EU citizens to train AI models without people really knowing or understanding how their data is used;
- the line between “legitimate processing” and profiling users for commercial or political purposes becomes blurry;
- individual users will find it even harder to track where their data goes and who is using it.
Business applauds, activists warn of a major retreat
European business associations describe Digital Omnibus as “a step in the right direction”. Their main arguments:
- European firms operate under heavier regulatory burdens than competitors in the US and China;
- overlapping rules (AI Act, GDPR, Data Act, ePrivacy, NIS2…) slow down innovation and push startups to relocate to London or Silicon Valley;
- clearer and lighter rules should make it easier to invest in European AI and data-centre infrastructure.
Civil-society organisations and digital-rights groups strongly disagree. For them, the package looks like:
- a significant rollback of digital rights,
- a concession to US tech giants that have lobbied for months against strict AI Act enforcement,
- a precedent that could encourage further watering-down of digital laws whenever industry complains loudly enough.
Politically, the proposal is likely to trigger heated debates in the European Parliament and among member states, as Digital Omnibus still has to go through the full legislative process.
What does this mean for AI startups and users in the United States?
Even though the EU is making its rules more flexible, it is still far ahead of the United States in terms of comprehensive AI legislation. The contrast matters for both American companies and users.
For AI startups and tech companies in the US:
- The EU remains the only large market with a detailed horizontal AI law. Anyone who wants to sell high-risk AI systems in Europe will eventually have to meet AI Act requirements, even if enforcement is delayed to 2027.
- Many US companies already treat the EU as the “strictest baseline” and then reuse the same standards globally. If Digital Omnibus softens some obligations, it could slightly reduce compliance costs, but the overall direction – transparency, risk assessments, documentation – remains in place.
- Compared with the EU, the US still relies mostly on a patchwork of:
  - sector-specific rules (healthcare, finance, employment),
  - state-level privacy laws (California’s CCPA/CPRA, plus laws in Colorado, Connecticut, Virginia…),
  - and AI executive orders (the 2023 AI Executive Order, since rescinded and replaced in 2025), which set federal guidelines but are not a full-blown AI statute.
For American users:
- In practice, they may see stricter protections when using EU-facing services (for example when a product has different settings or disclosures for EU users), and looser ones at home, where no single federal AI law exists yet.
- At the same time, competition between regulatory models is heating up:
  - the EU promotes a “fundamental-rights-first” approach,
  - the US focuses on innovation, voluntary commitments and targeted enforcement by agencies like the FTC,
  - China is building its own state-centric AI rulebook.
For global AI platforms, the safest route is often to design products that satisfy the toughest requirements in all three regions. That means that even if you are based in the US and never set foot in Europe, EU rules will still influence what your AI apps can and cannot do.
Conclusion
Digital Omnibus shows how uncomfortable the EU’s position has become: it wants to remain a global leader in privacy and digital rights, but does not want to miss the economic wave of artificial intelligence.
For the AI industry, this is mostly good news in the short term – more time, less pressure, more data. For activists and privacy-minded users, it is a warning sign that even the strictest regulation can be softened when the political and economic stakes are high enough.
For American readers, the main takeaway is that the regulatory race is just beginning. The US still lacks a single, comprehensive AI law, but EU decisions will shape the behaviour of global platforms you use every day. Understanding how Brussels rewrites AI rules is no longer just a European issue – it is part of your everyday digital life as well.
Disclaimer: This article is for informational purposes only and does not constitute legal advice. For concrete cases involving AI systems and data processing, consult a qualified lawyer or compliance specialist.






