EU AI Act (Regulation 2024/1689)

The European AI Act (Regulation (EU) 2024/1689) was officially published on 12 July 2024 in the Official Journal of the European Union and entered into force twenty days later, on 1 August 2024. It marks a historic milestone: the world’s first comprehensive legal framework for artificial intelligence.

The EU AI Act aims to harmonize rules for the development, placement on the market, and use of AI systems across the EU. Its goal is to foster human-centric and trustworthy AI while ensuring a high level of protection for health, safety, and fundamental rights.

In other words, the Act sets clear boundaries for what is unacceptable, what is high-risk, and where transparency is required — creating both challenges and opportunities for AI developers and manufacturers.

Timeline:

The timeline below illustrates the key milestones for the implementation of the EU AI Act (Regulation 2024/1689). Following its publication on 12 July 2024, the regulation entered into force on 1 August 2024. From 2 February 2025, initial provisions such as definitions, AI literacy, and prohibitions against unacceptable AI practices apply. The main obligations, including requirements for high-risk and general-purpose AI systems, take effect between August 2025 and August 2027, with most operative rules applying from 2 August 2026. Public authorities and certain large-scale systems have extended transition periods, with the final compliance deadline on 31 December 2030.

With the key dates in mind, manufacturers can now start preparing their systems and documentation according to the roadmap.

How to Implement the EU AI Act – A Practical Guide

Implementing the EU AI Act can feel overwhelming at first glance. Between new obligations, documentation requirements, and uncertainty around risk levels, many manufacturers wonder where to even begin.

This step-by-step guide walks you through a practical implementation path, from scoping your system to ongoing monitoring, and shows how to make compliance achievable without drowning in complexity.

1. Define your system, intended purpose and role

The first step is understanding what exactly you are putting on the market. In other words, you have to define the intended purpose of your AI system.

The intended purpose explains what the AI system does, for whom, and under which conditions. It should state the main function, target users, and use environment (e.g., ‘AI software assisting radiologists in detecting lung nodules on chest X-rays’). A precise intended purpose anchors risk classification, obligations, and conformity assessment under the EU AI Act.

Then identify your role under the EU AI Act — are you the provider, deployer, importer, distributor or authorized representative?

This initial scoping determines which parts of the regulation apply to you and which obligations you must fulfill. Many teams find it helpful to document this in a short “system profile” — one page that summarizes the purpose, users, and context. It becomes the anchor for all later documentation.
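One lightweight way to keep such a system profile consistent is to capture it as a structured record. The sketch below is purely illustrative; the field names and the example values are assumptions, not terminology mandated by the Act.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """One-page scoping record; fields are illustrative, not prescribed by the Act."""
    name: str
    intended_purpose: str   # main function, target users, use environment
    target_users: str
    use_environment: str
    operator_role: str      # provider, deployer, importer, distributor, or authorized representative

# Example values mirror the radiology example above.
profile = SystemProfile(
    name="Example radiology assistant",
    intended_purpose="Assist radiologists in detecting lung nodules on chest X-rays",
    target_users="Board-certified radiologists",
    use_environment="Hospital PACS workstations",
    operator_role="provider",
)
print(profile.operator_role)  # prints "provider"
```

Keeping the profile in one place makes it easy to reference from later documentation, and a dataclass (or an equivalent YAML file) forces every field to be filled in deliberately.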

2. Determine the risk classification

Once the system is scoped, the next step is to classify its risk level. The EU AI Act defines four categories:

  • Minimal risk, such as spam filters or video games.
  • Limited risk, such as chatbots or AI-generated content that must be clearly disclosed as AI-based.
  • High risk, such as systems in healthcare, critical infrastructure, or employment.
  • Unacceptable risk, such as social scoring, manipulative techniques, or real-time remote biometric identification in publicly accessible spaces.

The illustration below shows further examples that fall under each classification.

If your system is classified as high-risk, you will need to go through a conformity assessment and maintain a quality management system (QMS). The classification step is crucial. A wrong assumption here can lead to unnecessary effort or missed obligations.
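The link between risk tier and headline obligations can be sketched as a simple lookup. Note that this is a deliberately simplified illustration: real classification depends on Annex III and the prohibited-practice list, and the obligation names below are shorthand, not the Act's exhaustive wording.

```python
# Illustrative mapping of risk tiers to headline obligations (not exhaustive).
OBLIGATIONS = {
    "minimal": [],
    "limited": ["transparency disclosure"],
    "high": [
        "conformity assessment",
        "quality management system",
        "technical documentation",
        "post-market monitoring",
    ],
    "unacceptable": ["prohibited - may not be placed on the market"],
}

def headline_obligations(risk_tier: str) -> list[str]:
    """Return headline obligations for a risk tier; raises on unknown tiers."""
    try:
        return OBLIGATIONS[risk_tier]
    except KeyError:
        raise ValueError(f"Unknown risk tier: {risk_tier!r}")
```

Failing loudly on an unknown tier is intentional: a misclassified system should surface as an error in your process, not silently default to "minimal".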

3. Plan your compliance roadmap

With your risk level defined, it’s time to translate the regulation into an implementation plan.
List all relevant requirements from the Act and assign responsibilities — who owns data governance, risk management, testing, and documentation.
Break down the work into manageable timeframes (for example, 90 and 180 days) and document progress as you go.
A structured roadmap helps ensure nothing gets overlooked, especially as cross-functional teams (legal, data science, quality, IT) get involved.
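The roadmap itself can start as little more than a list of tasks with owners and due dates. The sketch below assumes hypothetical task names, owners, and timeframes; only the 90/180-day split comes from the text above.

```python
from datetime import date, timedelta

def build_roadmap(start: date) -> list[dict]:
    """Build an illustrative compliance roadmap; tasks and owners are placeholders."""
    items = [
        ("Data governance review", "Data Science", 90),
        ("Risk management file", "Quality", 90),
        ("Model validation report", "Data Science", 180),
        ("Technical documentation", "Regulatory", 180),
    ]
    return [
        {"task": task, "owner": owner, "due": start + timedelta(days=days)}
        for task, owner, days in items
    ]

roadmap = build_roadmap(date(2025, 1, 1))
```

Even a minimal structure like this makes progress visible and gives each cross-functional team an unambiguous owner and deadline.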

Our EU AI Act Gap-Assessment Tool can help with this step, so you don’t have to start from scratch. It identifies your system’s risk level, flags compliance gaps, and generates an actionable roadmap.

4. Align your Quality Management System

If you already operate under frameworks like ISO 9001, ISO 13485, or ISO 27001, you are one step ahead.

The EU AI Act expects that AI providers have a QMS covering the entire lifecycle — from design and data management to post-market monitoring. This means you’ll need to integrate AI-specific processes, such as dataset documentation, model validation, and change control, into your existing QMS. The goal is to demonstrate control, consistency, and traceability — not just technical excellence.

5. Strengthen data and input governance

Article 10 of the AI Act focuses heavily on data governance. It requires that training, validation, and testing data be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete. This ensures the system performs consistently and does not introduce bias or unintended discrimination. Manufacturers should therefore document data sources, selection criteria, and any cleaning or augmentation steps taken.
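A practical way to document data sources, selection criteria, and cleaning steps is a per-dataset record. The structure below is a minimal sketch; the field names and the example dataset are invented for illustration and are not an Article 10 template.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """Sketch of a data-governance entry per dataset; fields are illustrative."""
    name: str
    source: str
    selection_criteria: str
    cleaning_steps: list = field(default_factory=list)
    known_gaps: list = field(default_factory=list)  # representativeness caveats

record = DatasetRecord(
    name="chest-xray-train-v2",
    source="Hospital A PACS export, 2019-2023",
    selection_criteria="Adults 18+, frontal views only",
    cleaning_steps=["de-identification", "duplicate removal"],
    known_gaps=["pediatric cases excluded"],
)
```

Recording known gaps alongside cleaning steps is the point: representativeness claims are only credible when their limits are written down too.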

A growing topic here is input-level ambiguity — unclear or unstructured user input that leads to unpredictable results. While not yet formally regulated, it’s wise to treat it as part of your data governance and risk management approach.

6. Build a continuous risk management process

AI systems are dynamic — their behavior can evolve over time. The EU AI Act expects an ongoing risk management process, not just a one-time assessment. This includes identifying hazards, testing performance and robustness, defining acceptance criteria, and monitoring residual risks. Think of it as your AI “safety file”: a living document that grows with your system.

7. Ensure transparency and human oversight

Transparency and human oversight are central to trust in AI. Manufacturers must clearly communicate what the system does, its limitations, and when a human operator should intervene. This applies both to end users and to internal staff responsible for oversight. Practical measures include providing human-in-the-loop checkpoints, fallback procedures, and clear instructions for safe use.

8. Prepare your technical documentation

Before placing your system on the market, you must prepare comprehensive technical documentation. This includes a full system description, design choices, data and model documentation, test results, risk analysis, and a plan for post-market monitoring. The documentation should clearly link each regulatory requirement to concrete evidence — this “traceability” is key for conformity assessment and audits.
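One way to keep that traceability auditable is a requirement-to-evidence matrix. The sketch below uses real article numbers from the Act, but the evidence artifact names are hypothetical examples.

```python
# Illustrative traceability matrix: each requirement maps to evidence artifacts.
traceability = {
    "Art. 9 risk management": ["hazard log", "test protocols"],
    "Art. 10 data governance": ["dataset records", "bias analysis report"],
    "Art. 14 human oversight": ["operator manual", "fallback procedure"],
}

def missing_evidence(matrix: dict[str, list[str]]) -> list[str]:
    """Return requirements that have no linked evidence yet."""
    return [req for req, evidence in matrix.items() if not evidence]
```

Running a check like `missing_evidence` before an audit turns "is everything covered?" from a gut feeling into a yes/no answer.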

9. Establish post-market monitoring and incident handling

Compliance doesn’t end at market launch. Manufacturers must continuously monitor system performance, detect anomalies, and report serious incidents. Set up metrics and dashboards that track key indicators like drift, false positives, or unexpected behavior. Regular reviews and improvement cycles will keep your system aligned with the regulation and real-world conditions.
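A drift indicator can be as simple as comparing a live statistic against its validation baseline. The sketch below tracks the positive-prediction rate; the metric choice and the tolerance threshold are assumptions you would tune to your own system.

```python
# Minimal post-market drift check: compare the live positive-prediction rate
# against the validation baseline. Tolerance of 0.10 is illustrative only.
def drift_alert(baseline_rate: float, live_predictions: list[int],
                tolerance: float = 0.10) -> bool:
    """Flag when the live positive rate deviates from baseline beyond tolerance."""
    if not live_predictions:
        return False  # no data yet, nothing to flag
    live_rate = sum(live_predictions) / len(live_predictions)
    return abs(live_rate - baseline_rate) > tolerance

drift_alert(0.20, [1, 1, 1, 0])  # live rate 0.75 vs baseline 0.20 -> alert
```

In practice you would compute this over rolling windows and wire alerts into the same dashboards that track false positives and other key indicators.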

10. Control your suppliers and changes

AI systems rarely exist in isolation. You may rely on external datasets, APIs, or general-purpose models (GPAI). These suppliers must also meet certain obligations, and it’s your responsibility to ensure their documentation and contracts reflect that. Any significant change — in data, algorithms, or model behavior — should trigger a re-assessment of risks and, if necessary, an update to your technical documentation.

11. Audit and readiness check

Before formal submission or declaration, conduct an internal readiness audit. Sample evidence, interview responsible persons, and check that all requirements are traceable. This not only ensures regulatory compliance but also builds internal confidence for future AI releases.

Final thoughts:

Implementing the EU AI Act isn’t a box-ticking exercise; it’s a chance to build structure, transparency, and quality into your AI lifecycle. Start small, document clearly, and improve continuously — you’ll comply with the law and ship more trustworthy systems.

If you’d like to save weeks of effort and gain clarity on where you stand, explore our EU AI Act Gap-Assessment Tool — designed to make compliance practical and achievable for manufacturers of all sizes.
