Technology · 6 min read

What the EU AI Act means for your product stack in 2026

April 29, 2026

On 2 August 2026, the EU AI Act starts having teeth. That is the date when obligations for high-risk AI systems become enforceable across the EU. For most Dutch founders and product teams, the response so far has been some version of "we'll deal with that later."

Later is now six months away.

The uncomfortable part is not the deadline itself. It is that a meaningful number of companies using AI for HR screening, credit decisioning, or customer segmentation do not realise those systems qualify as high-risk under Annex III. They assume high-risk means robots performing surgery or autonomous vehicles. It does not. It means your recruitment tool that ranks CVs. It means the credit flow in your fintech stack that approves or declines applications. It means the scoring model that classifies customers for offers.

If you are running any of those systems, or building for clients who do, the obligations apply to you.

What the Digital Omnibus changes, and what it does not

There is a legislative rider worth knowing about. The Digital Omnibus proposal, currently moving through the EU legislative process, includes provisions that could delay or soften certain AI Act obligations. Some product teams are banking on that. That is a mistake.

Even if the Digital Omnibus passes in a form that pushes timelines, it will not arrive in time for you to plan around it with confidence. The parliamentary timeline is genuinely uncertain. Betting your compliance posture on a legislative outcome that has not been decided is not a strategy. It is a deferral dressed up as one.

Plan for 2 August 2026 as a hard date. If the Omnibus gives you extra runway later, that becomes a bonus. If it does not, you are ready.

Most teams are not ignoring compliance because they are reckless. They are ignoring it because nobody has translated the regulation into their actual stack. That is the work.

Max Pinas, founder, Studio Hyra

Three things to do before August 2026

1. Map your stack against Annex III before anything else

Annex III is the list that defines what counts as high-risk. It covers eight domains: biometric systems, critical infrastructure, education, employment and workforce management, access to essential private and public services (including credit scoring), law enforcement, migration and border control, and the administration of justice.

Your first job is not to read the full Act. It is to sit down with whoever owns your AI systems and go through that list together. Be specific. "We use an AI-assisted ATS" is not enough. Which decisions does it influence? Does it rank, score, or filter candidates without a human reviewing the underlying logic? That question matters.

For Dutch SMEs, the most common exposure points are employment tools (ATS platforms, performance scoring), credit or insurance flows, and customer segmentation used in financial products. Start there. A one-page inventory mapping each system to the Annex III categories is a useful output. It does not need to be a legal document. It needs to be accurate.

The gotcha here is third-party tools. If you are using an off-the-shelf SaaS product that does the scoring or ranking, you are still in scope as the deployer. The provider's compliance does not automatically become yours.

2. Conformity assessment, technical documentation, and CE marking for high-risk systems

Once you know which systems are in scope, the Act requires three concrete things before deployment:

Conformity assessment. For most high-risk systems outside a handful of critical sectors, you can self-assess. That means working through a structured process to verify that your system meets the Act's requirements on transparency, data governance, accuracy, and human oversight. It is not a checkbox. It takes time and internal coordination.

Technical documentation. The Act specifies what this must contain: the system's intended purpose, its performance metrics, the training data used, known limitations, and how human oversight is implemented. This documentation has to be maintained and kept up to date. If your system changes, the documentation changes.

CE marking. High-risk AI systems placed on the EU market need CE marking to confirm conformity. For self-assessed systems, you prepare a declaration of conformity and affix the marking. For systems in certain sensitive categories, a notified body has to be involved.

The gotcha: documentation written after the fact, just before an inspection, is obvious to any assessor and weak in any dispute. Write it as you build or configure, not after.
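One way to keep that documentation honest is to treat it as a living record with a completeness check. A minimal sketch, where the section names are illustrative shorthand for the content the Act requires, not its official structure:

```python
# Shorthand for the documentation sections described above; illustrative names.
REQUIRED_SECTIONS = [
    "intended_purpose", "performance_metrics", "training_data",
    "known_limitations", "human_oversight",
]

# Hypothetical in-progress record for one high-risk system.
tech_doc = {
    "system": "ATS candidate ranking",
    "intended_purpose": "Rank applicants for recruiter shortlisting.",
    "performance_metrics": "Shortlist precision, reviewed quarterly.",
    "training_data": "Vendor-supplied model; data lineage requested from vendor.",
    "known_limitations": "",  # still to be written
    "human_oversight": "Recruiter reviews every shortlist before any contact.",
}

def missing_sections(doc):
    """Return the required sections that are absent or still empty."""
    return [s for s in REQUIRED_SECTIONS if not doc.get(s)]

print(missing_sections(tech_doc))  # sections still to write
```

Run something like this whenever the system changes, and the documentation stays a working artefact instead of a pre-inspection scramble.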

3. Appoint a governance role, not a compliance form

This is where most companies get it wrong. They produce a policy document, file it somewhere, and call it governance. That is not governance. That is paperwork.

The EU AI Act expects ongoing human oversight of high-risk systems. That means someone in your organisation has a named responsibility to monitor system behaviour, review decisions that affect individuals, flag anomalies, and escalate when something looks wrong. For a large enterprise, that might be a dedicated AI officer. For a Dutch SME or boutique product team, it is more likely an existing role with a defined scope extension.

What matters is that the responsibility is real, the person knows they hold it, and there is a process for what happens when something goes off. A form does not do that. A person with a mandate does.

The gotcha: do not make this purely a legal or compliance function. The person who understands how the system behaves needs to be in the loop. That is often a product manager or a data lead, not a lawyer.

A practical checklist for the next 90 days

This is not a full compliance programme. It is the minimum useful starting point for a founder or product lead who needs to move without hiring an army of consultants.

  • Annex III audit. List every AI system you operate or deploy. Map each one to the eight Annex III domains. Mark anything that could qualify as high-risk.
  • Deployer vs. provider clarity. For each tool, establish whether you are the provider (you built or trained it) or the deployer (you use someone else's system). Your obligations differ.
  • Data governance check. High-risk systems require documented data governance. For each in-scope system, can you describe what data was used, where it came from, and how bias was assessed? If not, start that conversation with your vendor or data team.
  • Human oversight design. For each high-risk system, write one paragraph describing how a human can intervene, override, or halt the system. If you cannot write that paragraph, the oversight is not real.
  • Governance owner. Name the person. Write it down. Tell them.
  • Documentation start date. Pick a date in the next two weeks and begin the technical documentation for your highest-risk system. Do not wait until it feels complete to start.

None of this requires a legal retainer on day one. It requires about two focused working days and the right people in the room.

The companies that will struggle in August 2026 are not the ones who tried and fell short. They are the ones who assumed it would not apply to them.

Max Pinas, founder, Studio Hyra

Where Studio Hyra fits in

We are not a law firm. We do not file your conformity assessments or provide legal sign-off. What we do is help product and design teams understand which of their AI systems carry real risk, structure the governance around those systems in a way that is operationally realistic, and produce the documentation that sits between the legal requirement and the actual product.

For Dutch SMEs and boutique product teams, that gap is often the hardest part. The regulation was written for large organisations with dedicated compliance functions. Translating it into something a team of ten can actually act on is a different skill. That is where we work.

If you want to talk through your stack and figure out where you actually stand, that conversation takes an hour. Start there.

