Three things to do in the second half of 2025
1. Map your stack against Annex III before anything else
Annex III is the list that defines what counts as high-risk. It covers eight domains: biometrics, critical infrastructure, education and vocational training, employment and workforce management, access to essential private and public services (which is where creditworthiness assessment sits), law enforcement, migration and border control, and the administration of justice and democratic processes.
Your first job is not to read the full Act. It is to sit down with whoever owns your AI systems and go through that list together. Be specific. "We use an AI-assisted ATS (applicant tracking system)" is not enough. Which decisions does it influence? Does it rank, score, or filter candidates without a human reviewing the underlying logic? If it does, you are likely in Annex III territory.
For Dutch SMEs, the most common exposure points are employment tools (ATS platforms, performance scoring), credit or insurance flows, and customer segmentation used in financial products. Start there. A one-page inventory mapping each system to the Annex III categories is a useful output. It does not need to be a legal document. It needs to be accurate.
The gotcha here is third-party tools. If you are using an off-the-shelf SaaS product that does the scoring or ranking, you are still in scope as the deployer. The provider's compliance does not automatically become yours.
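As a concrete shape for that inventory, here is a minimal Python sketch. Everything in it is illustrative: the field names, the vendor, and the example entry are assumptions, not a format the Act prescribes.

```python
from dataclasses import dataclass

@dataclass
class AISystemEntry:
    """One row of the Annex III inventory. All field names are illustrative."""
    name: str                       # internal name of the system or tool
    vendor: str                     # "in-house" or the SaaS provider
    role: str                       # "provider" or "deployer" under the Act
    decisions_influenced: str       # what the system ranks, scores, or filters
    annex_iii_category: str | None  # matching Annex III domain, or None
    human_review: bool              # does a human review the underlying logic?

inventory = [
    AISystemEntry(
        name="ATS candidate ranking",
        vendor="ExampleHR B.V.",    # hypothetical vendor
        role="deployer",            # off-the-shelf SaaS: you are still in scope
        decisions_influenced="ranks and filters incoming applicants",
        annex_iii_category="employment and workforce management",
        human_review=False,
    ),
]

# Anything that maps to an Annex III domain without human review of the
# underlying logic is a candidate for the full high-risk treatment.
for entry in inventory:
    if entry.annex_iii_category and not entry.human_review:
        print(f"Review needed: {entry.name} ({entry.annex_iii_category})")
```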
2. Conformity assessment, technical documentation, and CE marking for high-risk systems
Once you know which systems are in scope, the Act requires three concrete things before deployment:
Conformity assessment. For most Annex III high-risk systems, you can self-assess; the Act calls this conformity assessment based on internal control. Only a narrow set of cases, such as certain biometric systems, requires a notified body. Self-assessing means working through a structured process to verify that your system meets the Act's requirements on transparency, data governance, accuracy, and human oversight. It is not a checkbox. It takes time and internal coordination.
Technical documentation. The Act specifies what this must contain: the system's intended purpose, its performance metrics, the training data used, known limitations, and how human oversight is implemented. This documentation has to be maintained and kept up to date. If your system changes, the documentation changes.
CE marking. High-risk AI systems placed on the EU market need CE marking to confirm conformity. For self-assessed systems, the provider prepares a declaration of conformity and affixes the marking; for certain sensitive categories, a notified body has to be involved. Note that this is a provider obligation: if you only deploy a third-party system, you do not affix the marking yourself, but you should verify that your vendor has.
The gotcha. Documentation written after the fact, just before an inspection, is obvious to any assessor and weak in any dispute. Write it as you build or configure, not after.
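As one possible shape for such a living record, here is a minimal Python sketch. The field names paraphrase the contents listed above; they are an assumption about structure, not the Act's actual template, and every value is hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TechnicalDocumentation:
    """A living documentation record, versioned alongside the system itself."""
    system_name: str
    intended_purpose: str
    performance_metrics: dict[str, float]
    training_data_description: str
    known_limitations: list[str]
    human_oversight: str
    version: str                 # bump this whenever the system changes
    last_updated: date

doc = TechnicalDocumentation(
    system_name="ATS candidate ranking",
    intended_purpose="Pre-rank applicants for recruiter review",
    performance_metrics={"precision_at_10": 0.82},  # hypothetical figure
    training_data_description="Vendor-trained model; see vendor datasheet",
    known_limitations=["Not validated on non-Dutch-language CVs"],
    human_oversight="A recruiter reviews every shortlist before any contact",
    version="2025.07",
    last_updated=date(2025, 7, 1),
)
```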
3. Appoint a governance role, not a compliance form
This is where most companies get it wrong. They produce a policy document, file it somewhere, and call it governance. That is not governance. That is paperwork.
The EU AI Act expects ongoing human oversight of high-risk systems. That means someone in your organisation has a named responsibility to monitor system behaviour, review decisions that affect individuals, flag anomalies, and escalate when something looks wrong. For a large enterprise, that might be a dedicated AI officer. For a Dutch SME or boutique product team, it is more likely an existing role with a defined scope extension.
What matters is that the responsibility is real, the person knows they hold it, and there is a process for what happens when something goes wrong. A form does not do that. A person with a mandate does.
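To show what "a person with a mandate" can look like in practice, here is a minimal Python sketch of an escalation hook. The owner address, system name, and decision ID are hypothetical, and the logging call stands in for whatever ticketing or paging system your team actually uses.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-oversight")

# A named person, not a shared mailbox. Hypothetical address.
OVERSIGHT_OWNER = "j.devries@example.nl"

def flag_decision(system: str, decision_id: str, reason: str) -> None:
    """Record an anomalous or contested decision and escalate to the owner.

    A real implementation would open a ticket or page someone; the log
    line stands in for that escalation step.
    """
    log.warning(
        "ESCALATE to %s | system=%s decision=%s reason=%s at=%s",
        OVERSIGHT_OWNER, system, decision_id, reason,
        datetime.now(timezone.utc).isoformat(),
    )

# Example: a recruiter disputes how a candidate was filtered out.
flag_decision("ATS candidate ranking", "cand-4712", "filtered without review")
```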
The gotcha. Do not make this purely a legal or compliance function. The person who understands how the system behaves needs to be in the loop. That is often a product manager or a data lead, not a lawyer.