Design · 6 min read

Your brand identity needs a paper trail now

April 30, 2026

Authenticity used to be a brand feeling. Something you built over years through consistent choices: the right tone, the right visual weight, the right people speaking on your behalf. That era is over. Authenticity is now an operational problem, and most brand teams are still treating it like a creative one.

Deepfake-as-a-service is mature. You can license a convincing synthetic voice for a few dollars a month. You can clone a public face without specialist knowledge. And for the first time, the legal and technical infrastructure around this is catching up fast. On 19 May 2025, the TAKE IT DOWN Act was signed into law in the US, making non-consensual synthetic intimate imagery a federal crime. The C2PA standard, backed by Adobe, Microsoft, Google and others, is becoming the closest thing the industry has to a provenance layer for digital content. These are not fringe developments. They are the beginning of a new compliance surface for anyone who manages a brand.

[Image: A large, isolated iridescent sphere reflecting a vibrant gradient, set against a dark void.]

53% of media professionals name synthetic content their biggest brand safety challenge this year. That number will not go down.

Tom Spel, co-founder, Studio Hyra

The three things that actually matter right now

I want to be specific, because this topic attracts a lot of vague advice. Here is what brand teams should be working on before the end of 2026, in order of urgency.

First: sign your own content.

If you publish video, audio, images or written content at any meaningful volume, you need a provenance pipeline. C2PA lets you embed cryptographically signed metadata into a file at the moment of creation or export. That metadata travels with the file and can be verified by any compatible reader. Think of it as a chain of custody for your content.
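Real C2PA manifests are signed with X.509 certificates and embedded in the asset itself, so the sketch below is not the spec. It is a minimal stdlib illustration of the chain-of-custody idea: hash the content, wrap the hash in a claim, sign the claim, and verify both later. All names and the HMAC key are hypothetical placeholders.

```python
import base64
import hashlib
import hmac
import json

# Placeholder key; real C2PA signing uses X.509 certificate chains, not HMAC.
SIGNING_KEY = b"studio-signing-key"

def sign_asset(content: bytes, author: str) -> dict:
    """Build a minimal provenance claim: content hash plus a signature over the claim."""
    claim = {"author": author, "sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
    claim["signature"] = base64.b64encode(sig).decode()
    return claim

def verify_asset(content: bytes, claim: dict) -> bool:
    """Check that the signature matches the claim and the claim matches the file."""
    sig = base64.b64decode(claim["signature"])
    unsigned = {k: v for k, v in claim.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return False
    return hashlib.sha256(content).hexdigest() == claim["sha256"]
```

Tamper with either the file or the claim and verification fails, which is the property the chain of custody buys you.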

The gotcha here is that most publishing workflows strip metadata. Your CMS probably does. Your CDN might. Social platforms routinely do. Implementing C2PA is not just a technical decision, it is a workflow audit. You need to map every point where a file is touched between creation and publication, and find out where the signature breaks. That process takes longer than people expect, and it surfaces problems in tooling that teams have been ignoring for years.
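The workflow audit can start as a spreadsheet, but a small harness makes it repeatable. This hypothetical sketch runs a metadata dict through each pipeline stage in order and records which keys each stage drops; the stage names and transforms are stand-ins for your actual CMS, CDN and export tools.

```python
def audit_pipeline(metadata: dict, stages: list) -> dict:
    """Pass metadata through each (name, transform) stage and report what each drops.

    Each transform mimics how one tool in the publishing chain handles metadata.
    Returns {stage_name: [dropped_keys]} so you can see exactly where signing breaks.
    """
    report = {}
    current = dict(metadata)
    for name, transform in stages:
        after = transform(current)
        report[name] = sorted(set(current) - set(after))
        current = dict(after)
    return report

# Hypothetical example: a CMS that strips XMP and a CDN optimizer that strips C2PA.
meta = {"c2pa_manifest": "...", "xmp_creator": "Studio Hyra", "exif_date": "2026-04-30"}
stages = [
    ("cms_upload", lambda m: {k: v for k, v in m.items() if not k.startswith("xmp")}),
    ("cdn_optimize", lambda m: {k: v for k, v in m.items() if not k.startswith("c2pa")}),
]
report = audit_pipeline(meta, stages)
```

In this toy run the report shows the CMS killing the XMP fields and the CDN killing the manifest, which is the kind of finding the real audit exists to surface.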

Watermarking adds a second layer. Unlike metadata, a watermark embedded into the signal of an image or audio file survives most post-processing. Tools from companies like Imatag and Digimarc operate at this level. The two approaches are complementary: metadata for verification by platforms and partners, watermarks for forensic recovery after a file has been shared, compressed or re-exported.
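Commercial watermarks use robust spread-spectrum techniques precisely so they survive compression and re-export; the toy scheme below does not have that property. It is a deliberately simple least-significant-bit sketch, stdlib only, that shows nothing more than the embed-and-recover mechanics of hiding a mark inside the signal rather than in metadata.

```python
def embed_watermark(pixels: bytearray, mark: bytes) -> bytearray:
    """Hide each bit of `mark` in the least-significant bit of successive bytes.

    Toy illustration only: LSB marks are destroyed by compression, unlike the
    robust watermarks commercial tools embed.
    """
    out = bytearray(pixels)
    for i, byte in enumerate(mark):
        for b in range(8):
            bit = (byte >> (7 - b)) & 1
            idx = i * 8 + b
            out[idx] = (out[idx] & 0xFE) | bit
    return out

def extract_watermark(pixels: bytes, length: int) -> bytes:
    """Recover `length` bytes of mark from the least-significant bits."""
    mark = bytearray()
    for i in range(length):
        byte = 0
        for b in range(8):
            byte = (byte << 1) | (pixels[i * 8 + b] & 1)
        mark.append(byte)
    return bytes(mark)
```

The mark survives cropping of untouched regions and simple re-saves of raw data, but not lossy re-encoding, which is exactly the gap the robust tools close.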

Second: build a legal frame for your likeness, your voice and your visual identity.

This is the one most brand teams skip because it feels like a legal problem, not a design problem. It is both.

If your brand uses a recognisable spokesperson, a distinctive voice, or a visual style that could be replicated by a generative model, you need a written position on what rights you hold, what you have licensed to others, and what constitutes infringement. That documentation does not have to be long. It has to exist.

The harder conversation is about talent and collaborators. If you have worked with a designer, a voice artist or a photographer whose work has shaped your visual identity, do your contracts address synthetic reproduction? Most contracts signed before 2022 do not. That gap is now a liability.

For brands that use AI to generate content internally, the question flips. What synthetic assets are you producing? What claims can you make about their origin? If a competitor or a journalist asked to see the provenance of your content, what would you show them?

Third: watch what is being said about you, synthetically.

Detection is the part nobody wants to budget for until something goes wrong. A synthetic audio clip of your CEO saying something damaging. A deepfake product demo that circulates as genuine. A generated news article quoting a spokesperson who never spoke. These are not hypotheticals. They are happening to mid-market brands right now, not just to celebrities and politicians.

A detection strategy does not require exotic tooling. It starts with systematic monitoring: alerts on your brand name combined with terms like 'video', 'audio', 'interview', 'statement', across the channels where your audience spends time. Layer in periodic manual review of high-traffic mentions. For brands with significant public profiles, third-party monitoring services that flag synthetic content are worth the cost.

The gotcha with detection is speed. The damage from a synthetic clip often happens in the first four hours after it spreads. A detection strategy that surfaces something in 72 hours is not a strategy. You need a response protocol sitting next to the monitoring: who decides, who speaks, what the statement looks like, and how you distribute the correction through the same channels the original fake used.
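The monitoring rules and the four-hour window can be wired together into a simple triage step. This hypothetical sketch classifies an incoming mention as urgent, review, or ignore; the risk-term list is illustrative and should be tuned to your channels.

```python
from datetime import datetime, timedelta

# Illustrative term list; extend it for your own monitoring setup.
RISK_TERMS = ("video", "audio", "interview", "statement", "clip")
RESPONSE_WINDOW = timedelta(hours=4)  # damage concentrates in the first four hours

def triage_mention(text: str, brand: str, first_seen: datetime, now: datetime) -> str:
    """Classify a mention: 'urgent' if it pairs the brand with a risk term inside
    the response window, 'review' if the risk term is there but the clip is older,
    'ignore' otherwise."""
    t = text.lower()
    if brand.lower() not in t:
        return "ignore"
    if not any(term in t for term in RISK_TERMS):
        return "ignore"
    return "urgent" if now - first_seen <= RESPONSE_WINDOW else "review"
```

The point of the sketch is the shape, not the logic: the classification feeds directly into the response protocol, so the "who decides, who speaks" chain starts from a machine-readable signal rather than someone noticing a tweet.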

[Image: A series of interconnected iridescent tubes forming a flowing, abstract tunnel or pathway.]

Why this is a design problem, not just a legal one

I have written elsewhere about what authentic design looks like in a world flooded with generated imagery. The aesthetic argument and the operational argument are related.

A brand with a genuinely distinctive visual language is harder to clone convincingly. Specificity is a form of protection. Generative models are trained on the broad middle of what exists. The more your brand lives in that middle, the more plausible a synthetic version of your content becomes to an average viewer. A strong, specific visual identity is not just better design. It is harder to fake.

This is why the creative and the operational need to be built together. A provenance pipeline attached to generic content is easier to undermine. A signed, watermarked piece of content with a visual signature that your audience has learned to recognise is much more defensible.

The same logic applies to voice and tone. A brand with a flat, committee-approved communications style can be replicated in seconds. A brand with genuine editorial character, built over time, is harder to imitate because the failure modes are more obvious to the people who know it well.

[Image: A stack of iridescent, rounded abstract blocks, casting soft, colorful glows.]

Specificity is a form of protection. Generative models are trained on the broad middle. The more your brand lives in that middle, the more plausible a synthetic version of your content becomes.

Tom Spel, co-founder, Studio Hyra

Where to start if you have not started

Do not try to do all three pillars at once. Pick the one with the most immediate exposure.

If you publish a lot of video or audio content, start with the provenance pipeline. Map your workflow first. Find where metadata dies. Then introduce signing and watermarking at the production stage before the file touches your publishing system.

If you use recognisable talent or have a well-known visual identity, start with the legal frame. Pull your existing contracts. Identify the gaps. Write a one-page internal policy on synthetic reproduction of your brand assets. That document becomes the foundation for everything else.

If you are a brand that operates in a high-attention space, where audiences are already primed to believe damaging stories about you, start with detection. Set up monitoring this week. Write a response protocol before you need one.

None of this is a one-time project. The tooling will change. The legal environment will change. C2PA adoption across platforms will grow unevenly. The TAKE IT DOWN Act will be interpreted by courts in ways nobody can predict yet. The right posture is to build a lightweight, reviewable system now and improve it quarterly, not to wait for the perfect solution.

Authenticity was never just a feeling. It was always also a practice. The difference now is that the practice has stakes attached to it, and the clock is running.
