
The EU AI Act in a Nutshell: What It Means for Your Organisation

  • Writer: Sebastian Stoenescu
  • Nov 20, 2025
  • 5 min read

The EU’s Artificial Intelligence Act (“AI Act”) is the first comprehensive legal framework for AI globally. It sets out rules for how AI may be developed and used across the EU by businesses, public bodies, and other organisations, with the aim of protecting health, safety, and fundamental rights while still enabling innovation.

The regulation is being phased in between 2025 and 2027, so now is the time for organisations to understand their current position and begin preparing.





1. A risk‑based approach to regulating AI


The AI Act does not treat all AI the same. Instead, it introduces a risk‑based framework: the higher the risk to individuals and society, the stricter the obligations.

The Act distinguishes five main categories:


  1. Prohibited AI practices. Certain AI uses pose an “unacceptable risk” and are banned outright, including, for example:

    • AI that manipulates human behaviour in a way that can significantly harm people;

    • AI that exploits vulnerabilities of children or persons in vulnerable situations;

    • Social scoring systems that rate people based on behaviour or personality;

    • Specific uses of biometric systems, such as emotion recognition in the workplace and education, and untargeted scraping of facial images to build databases.

These prohibitions apply from February 2025 to both those who build the systems and those who use them.


  2. High‑risk AI systems. These are systems that can significantly affect people’s safety or fundamental rights. Two main groups fall under this label:

    • AI linked to products already regulated as “high‑risk” under Union harmonisation legislation (for example, certain medical devices, lifts, and machinery), including AI used as a safety component of such a product.

    • AI used in specified “high‑risk application areas”, including:

      • biometrics (e.g. certain remote biometric identification systems);

      • AI safety components in critical infrastructure (e.g., managing electricity or water supply);

      • education and vocational training (e.g. systems deciding access to education or grading);

      • employment and HR (e.g. recruitment tools, worker monitoring);

      • access to essential private or public services (e.g. credit scoring, emergency response);

      • law enforcement;

      • migration, asylum and border control;

      • justice and democratic processes (e.g. tools assisting judges, systems influencing voting behaviour).


Obligations for high‑risk systems take effect primarily from August 2026, with some product‑related rules following in August 2027.


  3. General‑purpose AI (GPAI) models. A general‑purpose AI model is one that can perform a wide range of tasks (for example, large language models) and can be integrated into many different AI systems. Providers of such models must meet specific documentation, information‑sharing, and copyright‑related obligations from August 2025. Stricter rules apply to the largest models, which are considered to pose a “systemic risk”.


  4. Generative AI and chatbots. Systems that generate text, images, audio or video, as well as interactive systems such as chatbots, must meet transparency requirements: users should be able to tell whether they are interacting with AI or consuming AI‑generated or manipulated content. These obligations apply from August 2026.


  5. Other AI systems. AI that does not fall into the categories above is not subject to specific AI Act obligations. However, if you repurpose an AI system for a high‑risk use (for example, using a general chatbot to make HR decisions), it can become a high‑risk AI system, and you may be treated as the provider of that system.


2. When is a system considered “AI”?


The AI Act uses a functional definition of AI. In short, an AI system is a machine‑based system that:

  • operates with at least some level of autonomy;

  • may adapt after deployment; and

  • uses input data to infer how to generate outputs (such as predictions, recommendations, content or decisions) that influence physical or virtual environments.

This captures both machine‑learning systems and certain knowledge‑ or rule‑based systems, but generally excludes purely fixed, manually coded software that cannot adapt and has no real autonomy.


3. Provider or deployer – which are you?


Your obligations depend heavily on your role in the AI value chain:

  • A provider is the entity that develops or commissions an AI system or model and places it on the market or puts it into service under its own name or brand.

  • A deployer is the organisation that uses an AI system under its own authority (for professional, not purely personal use).

Typically, vendors and developers will be providers, while customer organisations using AI in their operations are deployers. However, a deployer can become a provider – for example, if it substantially modifies a high‑risk system or uses an existing system for a new high‑risk purpose.


4. Key obligations for high‑risk AI


High‑risk AI systems carry the most stringent requirements. Providers must, among other things:

  • implement a risk management system, assessing and mitigating foreseeable risks to health, safety, and fundamental rights;

  • ensure appropriate data governance, including checking training and testing data for quality, representativeness, and potential bias;

  • prepare detailed technical documentation and maintain automatic logging;

  • build in transparency and human oversight, so that users understand how to use the system and can effectively monitor, override, or stop it;

  • meet standards of accuracy, robustness, and cybersecurity;

  • operate a quality management system and ongoing post‑market monitoring;

  • register high‑risk systems in an EU database and undergo a conformity assessment (self‑assessment or by a third party, depending on the system).


Deployers of high‑risk AI must also:

  • use the system in line with the provider’s instructions;

  • assign trained staff to exercise human oversight;

  • ensure that input data is relevant and sufficiently representative where they control it;

  • monitor operation and suspend use if they suspect non‑compliance;

  • keep logs under their control;

  • inform affected individuals when high‑risk AI is used to make decisions about them; and

  • notify providers and authorities of serious incidents.


Public authorities (and some private entities providing public services or using certain credit and insurance systems) must also carry out a fundamental rights impact assessment before using high-risk AI in specific contexts.


5. Transparency for generative AI and chatbots


To avoid misleading users, the AI Act requires that:

  • Providers of chatbots must ensure users are informed that they are interacting with an AI system.

  • Providers of generative AI must mark AI‑generated or manipulated content in a machine‑readable way.

  • Deployers of generative AI must clearly indicate that audio, image or video content is artificially generated or altered (with some flexibility for creative and artistic uses).

  • Special transparency rules apply to AI‑generated text used to inform the public on matters of public interest.


6. What should organisations do now?


As a starting point for compliance, work through four practical questions:


  1. AI. Does it qualify as an AI system under the AI Act definition?

  2. Risk. Is the system (or its intended use) covered by one of the risk categories?

  3. Role. Are you acting as provider, deployer, or both?

  4. Obligations. Which specific duties flow from that risk level and role?
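
To make the triage concrete, the answers to these four questions can be recorded per use case in an internal AI register. The sketch below (in Python) is purely illustrative: the class names, category labels, and example obligations are our own simplification, not terminology or a structure prescribed by the AI Act.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskCategory(Enum):
    """Simplified risk categories, loosely following the Act's structure."""
    PROHIBITED = "prohibited practice"
    HIGH_RISK = "high-risk"
    GPAI = "general-purpose AI model"
    TRANSPARENCY = "transparency obligations (chatbot / generative AI)"
    MINIMAL = "no specific AI Act obligations"


class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"


@dataclass
class AIUseCase:
    """One entry in an internal AI register, answering the four questions."""
    name: str
    is_ai_system: bool                 # 1. AI: meets the AI Act definition?
    risk_category: RiskCategory        # 2. Risk: which category applies?
    roles: list[Role]                  # 3. Role: provider, deployer, or both?
    obligations: list[str] = field(default_factory=list)  # 4. Duties that follow


# Example entry: a recruitment screening tool bought from a vendor
recruitment_tool = AIUseCase(
    name="CV screening tool",
    is_ai_system=True,
    risk_category=RiskCategory.HIGH_RISK,   # employment/HR is a high-risk area
    roles=[Role.DEPLOYER],
    obligations=[
        "use in line with the provider's instructions",
        "assign trained staff for human oversight",
        "inform candidates that AI is used in decisions about them",
    ],
)
```

In practice such a register would typically live in your governance or GRC tooling rather than in code, but even a lightweight structure like this makes it easy to see which obligations attach to which systems and where the gaps are.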


In practice, that means you should:

  • map your current and planned AI use cases;

  • identify high‑risk or potentially prohibited applications;

  • review contracts with vendors and partners to allocate responsibilities;

  • start building governance processes (risk assessments, documentation, incident reporting, oversight, training); and

  • integrate AI Act considerations into procurement and product development processes.


The AI Act will reshape the regulatory landscape for any organisation developing or using AI in the EU. Early preparation will not only reduce legal and reputational risk, but can also become a competitive advantage as customers and regulators increasingly look for trustworthy AI.

 
 