The three things that actually matter right now
I want to be specific, because this topic attracts a lot of vague advice. Here is what brand teams should be working on before the end of 2026, in order of urgency.
First: sign your own content.
If you publish video, audio, images or written content at any meaningful volume, you need a provenance pipeline. C2PA lets you embed cryptographically signed metadata into a file at the moment of creation or export. That metadata travels with the file and can be verified by any compatible reader. Think of it as a chain of custody for your content.
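To make that concrete, here is a rough sketch of what signing on export can look like, driving the Content Authenticity Initiative's c2patool from a small Python wrapper. The manifest fields, file names and CLI flags are illustrative and vary by tool version; treat it as a shape, not a drop-in.

```python
import json
import subprocess

# Hypothetical manifest definition describing who made the asset and how.
# The field names follow the manifest-definition format c2patool accepts,
# but check the docs for your version before relying on them.
manifest = {
    "claim_generator": "acme-brand-pipeline/1.0",
    "assertions": [
        {"label": "c2pa.actions", "data": {"actions": [{"action": "c2pa.created"}]}}
    ],
}

with open("manifest.json", "w") as f:
    json.dump(manifest, f)

# Sign on export: embed the signed manifest into the outgoing file.
# Note that c2patool signs with a built-in test certificate unless you
# configure your own; production signing needs a real cert.
subprocess.run(
    ["c2patool", "hero-image.jpg", "-m", "manifest.json", "-o", "hero-image-signed.jpg"],
    check=True,
)

# Verify: running c2patool against a file prints its manifest store if one is present.
subprocess.run(["c2patool", "hero-image-signed.jpg"], check=True)
```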
The gotcha here is that most publishing workflows strip metadata. Your CMS probably does. Your CDN might. Social platforms routinely do. Implementing C2PA is not just a technical decision; it is a workflow audit. You need to map every point where a file is touched between creation and publication, and find out where the signature breaks. That process takes longer than people expect, and it surfaces problems in tooling that teams have been ignoring for years.
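One way to run that audit continuously rather than once: pull the same asset back from each touchpoint and check whether the manifest is still readable. The sketch below assumes c2patool is installed and that each stage serves the asset over HTTP; the URLs are placeholders, and how c2patool signals a missing manifest differs by version, so the check is deliberately loose.

```python
import subprocess
import tempfile
import urllib.request

# Hypothetical touchpoints between creation and publication. Replace with the
# real URLs each stage of your workflow serves the same asset from.
TOUCHPOINTS = {
    "export": "https://assets.example.com/raw/hero-image-signed.jpg",
    "cms": "https://cms.example.com/media/hero-image.jpg",
    "cdn": "https://cdn.example.com/img/hero-image.jpg",
}

def manifest_survives(url: str) -> bool:
    """Download the asset and ask c2patool whether a C2PA manifest is still readable."""
    with tempfile.NamedTemporaryFile(suffix=".jpg") as tmp:
        with urllib.request.urlopen(url) as resp:
            tmp.write(resp.read())
        tmp.flush()
        # c2patool prints the manifest store when one is present; its behaviour
        # on a stripped file varies by version, hence the loose check here.
        result = subprocess.run(["c2patool", tmp.name], capture_output=True, text=True)
        return result.returncode == 0 and "manifest" in result.stdout.lower()

for stage, url in TOUCHPOINTS.items():
    status = "intact" if manifest_survives(url) else "STRIPPED"
    print(f"{stage:>8}: {status}")
```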
Watermarking adds a second layer. Unlike metadata, a watermark embedded into the signal of an image or audio file survives most post-processing. Tools from companies like Imatag and Digimarc operate at this level. The two approaches are complementary: metadata for verification by platforms and partners, watermarks for forensic recovery after a file has been shared, compressed or re-exported.
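The commercial tools have their own SDKs, so the sketch below uses the open-source invisible-watermark library purely to illustrate the principle: embed a payload in the pixel data, re-compress the file hard, and recover the payload anyway. The file names and payload are made up, and this is not Imatag's or Digimarc's API.

```python
import cv2
from imwatermark import WatermarkEncoder, WatermarkDecoder

payload = b"acme2026"  # 8 bytes = 64 bits

# Embed the payload into the image signal itself (DWT+DCT domain),
# not into metadata that a CMS or social platform can strip.
image = cv2.imread("hero-image.jpg")
encoder = WatermarkEncoder()
encoder.set_watermark("bytes", payload)
watermarked = encoder.encode(image, "dwtDct")

# Re-export at aggressive JPEG compression, the kind of processing that
# removes metadata but tends to leave a signal-level watermark recoverable.
cv2.imwrite("hero-image-recompressed.jpg", watermarked, [cv2.IMWRITE_JPEG_QUALITY, 70])

# Forensic recovery: read the payload back out of the recompressed copy.
decoder = WatermarkDecoder("bytes", len(payload) * 8)
recovered = decoder.decode(cv2.imread("hero-image-recompressed.jpg"), "dwtDct")
print(recovered)  # b"acme2026" if the watermark survived
```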
Second: build a legal frame for your likeness, your voice and your visual identity.
This is the one most brand teams skip because it feels like a legal problem, not a design problem. It is both.
If your brand uses a recognisable spokesperson, a distinctive voice, or a visual style that could be replicated by a generative model, you need a written position on what rights you hold, what you have licensed to others, and what constitutes infringement. That documentation does not have to be long. It has to exist.
The harder conversation is about talent and collaborators. If you have worked with a designer, a voice artist or a photographer whose work has shaped your visual identity, do your contracts address synthetic reproduction? Most contracts signed before 2022 do not. That gap is now a liability.
For brands that use AI to generate content internally, the question flips. What synthetic assets are you producing? What claims can you make about their origin? If a competitor or a journalist asked to see the provenance of your content, what would you show them?
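If the honest answer is "nothing written down", a lightweight register fixes most of it. Below is a hypothetical record schema; the field names are an assumption, not a standard, but the idea is that every synthetic asset gets an entry you could actually hand over.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class SyntheticAssetRecord:
    """One register entry per generated asset: what it is, how it was made, who approved it."""
    asset_path: str
    sha256: str
    generator: str          # model or tool used, e.g. "internal-image-model-v3"
    prompt_summary: str     # what was asked for, not necessarily the raw prompt
    human_reviewer: str     # who approved the asset for publication
    created_at: str

def register(asset_path: str, generator: str, prompt_summary: str, reviewer: str) -> dict:
    with open(asset_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    record = SyntheticAssetRecord(
        asset_path=asset_path,
        sha256=digest,
        generator=generator,
        prompt_summary=prompt_summary,
        human_reviewer=reviewer,
        created_at=datetime.now(timezone.utc).isoformat(),
    )
    return asdict(record)

# Append-only log of what was generated, by what, and who signed it off.
entry = register("campaign/hero-image.png", "internal-image-model-v3",
                 "Autumn campaign hero, product on wooden table", "j.doe")
with open("synthetic-assets.jsonl", "a") as log:
    log.write(json.dumps(entry) + "\n")
```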
Third: watch what is being said about you, synthetically.
Detection is the part nobody wants to budget for until something goes wrong. A synthetic audio clip of your CEO saying something damaging. A deepfake product demo that circulates as genuine. A generated news article quoting a spokesperson who never spoke. These are not hypotheticals. They are happening to mid-market brands right now, not just to celebrities and politicians.
A detection strategy does not require exotic tooling. It starts with systematic monitoring: alerts on your brand name combined with terms like 'video', 'audio', 'interview', 'statement', across the channels where your audience spends time. Layer in periodic manual review of high-traffic mentions. For brands with significant public profiles, third-party monitoring services that flag synthetic content are worth the cost.
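The query layer is the easy part. Here is a sketch of it; the brand and content terms are placeholders, and the output is just boolean strings to feed into whatever alerting or listening tool you already pay for.

```python
# Combine brand terms with content-type terms into boolean queries.
# Both lists are assumptions; tune them to your brand and your channels.
BRAND_TERMS = ['"Acme Corp"', '"Acme"', '"Jane Doe"']   # brand, product, spokesperson
CONTENT_TERMS = ["video", "audio", "interview", "statement", "leaked", "deepfake"]

def build_queries(brand_terms: list[str], content_terms: list[str]) -> list[str]:
    """One query per brand term, OR-ing the content-type terms together."""
    content_clause = " OR ".join(content_terms)
    return [f"{brand} ({content_clause})" for brand in brand_terms]

for query in build_queries(BRAND_TERMS, CONTENT_TERMS):
    print(query)
# e.g. "Acme Corp" (video OR audio OR interview OR statement OR leaked OR deepfake)
```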
The gotcha with detection is speed. The damage from a synthetic clip often happens in the first four hours after it spreads. A detection strategy that surfaces something in 72 hours is not a strategy. You need a response protocol sitting next to the monitoring: who decides, who speaks, what the statement looks like, and how you distribute the correction through the same channels the original fake used.
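It helps to write that protocol down as data rather than as a document nobody can find mid-incident. A minimal sketch, with placeholder contacts and windows:

```python
# Roles, contacts and the time budgets are placeholders; the structure is the
# point, not the values. The two windows together stay under the four-hour mark.
RESPONSE_PROTOCOL = {
    "detection_to_decision_minutes": 60,      # who decides, and how fast
    "decision_to_statement_minutes": 120,     # who speaks, and how fast
    "decision_owner": "comms-director@example.com",
    "spokesperson": "ceo-office@example.com",
    "statement_template": "templates/synthetic-media-correction.md",
    "distribution_channels": [                # push the correction where the fake spread
        "brand-twitter", "brand-linkedin", "press-list", "employee-slack",
    ],
}

def within_window(minutes_since_detection: int) -> bool:
    """Check whether the response is still inside the agreed time budget."""
    budget = (RESPONSE_PROTOCOL["detection_to_decision_minutes"]
              + RESPONSE_PROTOCOL["decision_to_statement_minutes"])
    return minutes_since_detection <= budget
```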