Technology · 6 min read

Writing the brief with the thing that does the work

May 9, 2026

The first thing I ask a coding agent to do is not write code. I ask it to write a plan.

That single habit changed how we run AI-assisted projects at Studio Hyra more than any model upgrade, IDE plugin, or prompt trick. It sounds obvious when I say it out loud. In practice, almost nobody does it.

What follows is a practitioner's account of how we structure agent-assisted coding around discrete, bounded phases. It is not a theoretical framework. It is the process we actually use, with the mistakes we made getting here included.

[Image: Three perfectly stacked cubes, one silver, two yellow, on a grid surface.]

The core problem with just prompting

Coding agents are genuinely capable. Tools like Cursor, GitHub Copilot Workspace, and Devin can hold large amounts of context and produce working code at a pace no individual developer matches on a good day. That capability is real.

The failure mode is equally real. When you drop a vague objective into an agent and let it run, you get vague output delivered confidently. The agent makes architectural decisions you did not sanction. It introduces dependencies you did not choose. It solves the problem it inferred, not always the problem you had.

The underlying issue is not the model. It is scope. Agents, like junior developers, will fill an undefined brief with their own assumptions. The difference is that a junior developer will usually ask a clarifying question. An agent will not. It will just build.

This is the gap that a statement of work closes.

The agent makes architectural decisions you did not sanction. It solves the problem it inferred, not the problem you had. The difference is not the model. It is scope.

Max Pinas, founder, Studio Hyra

Writing the SoW with the agent, not before it

A statement of work in traditional delivery is a document the agency writes before the engagement starts. It defines deliverables, exclusions, dependencies, and acceptance criteria. It protects both parties.

In agent-assisted coding, we use the same concept, but we write it with the agent as the first task. Not as a formality. As a diagnostic.

Here is what that looks like in practice.

We open a fresh context and give the agent a single paragraph describing the feature or module we want built. Then we ask it to produce a short statement of work: what it proposes to build, what it is explicitly not going to do, what it needs from us before it can begin, and what done looks like.
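The request above can be pictured as a reusable prompt template. The sketch below is hypothetical, not Studio Hyra's actual tooling; the section names are my own labels for the four questions in the paragraph above.

```python
# Hypothetical sketch: assembling the statement-of-work request.
# The four required sections mirror the four questions in the text:
# what it will build, what it won't, what it needs first, what done means.

REQUIRED_SECTIONS = [
    "Proposed scope",        # what it proposes to build
    "Out of scope",          # what it is explicitly not going to do
    "Open questions",        # what it needs from us before it can begin
    "Acceptance criteria",   # what done looks like
]

def build_sow_prompt(brief: str) -> str:
    """Turn a one-paragraph brief into a request for a structured SoW."""
    sections = "\n".join(f"- {name}" for name in REQUIRED_SECTIONS)
    return (
        "Before writing any code, produce a short statement of work "
        "for the brief below, with exactly these sections:\n"
        f"{sections}\n\nBrief:\n{brief}"
    )

def missing_sections(sow_text: str) -> list[str]:
    """Flag required sections the agent's SoW left out, for the next round."""
    return [name for name in REQUIRED_SECTIONS if name not in sow_text]
```

The value is in the loop, not the template: send the brief, check what the reply left out, tighten the brief, repeat.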

The output is almost never right on the first pass. That is the point. The gaps and wrong assumptions in the agent's SoW tell us exactly where our brief was underspecified. We fix the brief, run the SoW again, and iterate until the plan reflects the real work. This usually takes two or three rounds and maybe twenty minutes.

What we are doing is front-loading the ambiguity into a cheap, low-stakes conversation rather than discovering it mid-build when the cost of course-correcting is high.

The gotcha here: it is tempting to accept a SoW that sounds reasonable even when it is vague. Push for specifics. If the agent describes a deliverable as "a component that handles user authentication," ask it to enumerate the exact flows, the error states, and the external services it will touch. Vague acceptance criteria in the SoW become the exact bugs you file two weeks later.
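That push-back can itself be templated. A hypothetical sketch, assuming the three drill-down dimensions named above (flows, error states, external services):

```python
# Hypothetical sketch: a follow-up prompt that forces a vague SoW
# deliverable down to specifics. The three dimensions come straight
# from the text: exact flows, error states, external services touched.

DRILL_DOWNS = [
    "every user-facing flow, end to end",
    "every error state and how it is surfaced",
    "every external service or dependency it will touch",
]

def specificity_prompt(deliverable: str) -> str:
    """Ask the agent to rewrite one vague deliverable with specifics."""
    asks = "\n".join(f"- {d}" for d in DRILL_DOWNS)
    return (
        f'The deliverable "{deliverable}" is too vague to accept. '
        f"Rewrite it, enumerating:\n{asks}"
    )
```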

[Image: Two interlocking gears, one silver and one yellow, on a white grid.]

Phases as contracts

Once the SoW is tight, we break the work into discrete phases. Each phase has a clear input, a clear output, and an explicit handoff point where a human reviews before the agent proceeds.

A typical module build might look like this:

Phase 1. Data model. The agent produces the schema, migration files, and a brief written rationale for each design decision. We review. We either approve or we ask for changes. The agent does not touch business logic until this phase is signed off.

Phase 2. Core logic. Given the approved schema, the agent implements the service layer or equivalent. No UI. No integration wiring. Just the logic, with tests.

Phase 3. Integration. The agent wires the logic to whatever it needs to talk to: an API, a queue, a third-party SDK. This phase tends to surface the most surprises, which is exactly why we isolate it.

Phase 4. UI or surface. If there is a user-facing layer, it comes last. It is the easiest part to change and the least expensive to redo.

Each phase ends with a human checkpoint. The agent knows this upfront because we state it in the SoW. It does not skip ahead. If it tries to, that is a signal the phase boundaries were not explicit enough.

The gotcha for this structure: phase boundaries only hold if the agent maintains isolated context per phase. In practice, this means starting a fresh conversation at each handoff, or being very deliberate about what prior context you carry forward. Agents that retain the full session history will sometimes reference earlier decisions and drift back toward the broader plan. Clean context cuts are worth the friction of reintroducing relevant background manually.
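The contract above can be sketched as a loop: each phase gets a fresh context carrying only the brief plus already-approved outputs, and a human gate decides whether the agent proceeds. This is an illustration, not a real API; `run_agent` and `human_approves` are stand-ins for whatever agent call and review step you actually use.

```python
from typing import Callable

# Hypothetical sketch of phase gating with isolated context per phase.
# run_agent and human_approves are stand-ins, not a real library API.

PHASES = ["data model", "core logic", "integration", "ui"]

def run_phases(
    brief: str,
    run_agent: Callable[[str], str],
    human_approves: Callable[[str, str], bool],
) -> dict[str, str]:
    """Run each phase in a fresh context; stop at the first rejected gate."""
    approved: dict[str, str] = {}
    for phase in PHASES:
        # Fresh context: the prompt carries only the brief plus outputs
        # a human already signed off on, never the full session history.
        carried = "\n".join(f"[{p} output]\n{out}" for p, out in approved.items())
        prompt = f"Brief:\n{brief}\n\nApproved so far:\n{carried}\n\nPhase: {phase}"
        output = run_agent(prompt)
        if not human_approves(phase, output):  # the human checkpoint
            break  # the agent does not skip ahead past a failed gate
        approved[phase] = output
    return approved
```

A rejected gate simply ends the run; the fix is a better brief or a revised phase, not a longer session.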

What this does to the team

The process changes what developers do, not what they are for.

In a phase-gated, SoW-driven workflow, the developer's job shifts toward three things: writing the brief precisely, evaluating the plan critically before work begins, and reviewing outputs at each gate with enough technical depth to catch subtle errors.

None of those are passive. In some ways they demand more from a developer than sitting down and writing the feature themselves, because the judgment calls are explicit rather than embedded in keystrokes.

We have found that senior developers adapt to this quickly. They already think in terms of contracts and interfaces. Junior developers sometimes struggle with the review gates because they lack the pattern recognition to spot a plausible-but-wrong implementation. This is worth knowing before you staff an agent-heavy project. The agent raises the floor. It does not replace the ceiling.

There is also a team dynamic worth naming. Studios that assign agent-assisted tasks without review gates tend to lose visibility into what is actually being built. The code exists. It may even work. But nobody on the team can explain the architectural decisions, because the agent made them in a black box and nobody reviewed the plan. That is a delivery risk and a maintenance risk that compounds over the life of the project.

[Image: An abstract geometric structure made of connecting yellow and silver triangular prisms.]

The agent raises the floor. It does not replace the ceiling. Senior judgment still decides whether the plan is sound before a line of code is written.

Max Pinas, founder, Studio Hyra

The process as a pitch artifact

One thing we did not expect: the SoW document became useful outside the build itself.

When a client asks how we work with AI tools, we can show them the SoW from the planning phase. It demonstrates that we are not just prompting and hoping. It shows a methodology. It shows scope discipline. It shows that the agent's role is bounded, not open-ended.

For agencies pitching AI-assisted delivery to clients who are nervous about what agents actually do inside a project, that transparency is worth something concrete. A two-page planning document produced before any code exists is more persuasive than any claim about process maturity.

If your agency is adopting agent-assisted coding and wondering how to explain it to clients without sounding like you are just throwing prompts at a problem, start here. Write the statement of work first. Write it with the agent. Show it to the client. The conversation that follows is almost always more productive than any slide you could have put in front of them.

That is not a sales technique. It is just what happens when the planning artifact does its actual job.
