Taylor Swift’s latest US trade mark filings, covering two short voice clips (“Hey, it’s Taylor Swift” and “Hey, it’s Taylor”) and a distinctive Eras Tour image, have attracted significant attention within IP circles. On one level, they continue Swift’s well-established, commercially astute brand protection strategy. On another, they are a direct response to a more disruptive force: the rapid rise of AI-generated deepfakes.

Deepfakes as an IP problem

Deepfakes (synthetic audio, images and videos that convincingly mimic real individuals) are no longer a fringe concern. For high-profile figures such as Swift, they have already resulted in a wide spectrum of unauthorised uses, including fake endorsements, explicit imagery and misleading political content.

From an IP perspective, the difficulty is structural. Much of this content does not reproduce an existing work; instead, it generates something new that is merely highly reminiscent of the original. This creates a gap in traditional IP protection frameworks: infringing effects without clearly infringing acts.

The result is a growing disconnect between harm (clear reputational, commercial and brand damage) and enforceability (limited ability to bring straightforward claims).

Why existing rights struggle

Deepfakes expose limitations across the core IP toolkit:

  • Copyright protects original works, but not a voice, likeness or “style” in the abstract. AI outputs can therefore avoid infringement while still trading off a recognisable identity.
  • Passing off and trade marks (in their traditional form) require misrepresentation and use in the course of trade; these thresholds are not always met by viral or non-commercial deepfake content.
  • Personality/publicity rights (stronger in the US than the UK) remain fragmented and territorially limited, which is ill-suited to globally disseminated AI content.

In the UK in particular, the absence of a standalone personality right means that claimants must rely on a patchwork of causes of action, none of which were designed with synthetic identity in mind.

Trade marks as part of the solution

Against that backdrop, Swift’s filings can be understood as a pragmatic attempt to bridge rather than close this gap. By seeking protection for specific, recognisable elements of her persona, she may be able to:

  • challenge uses that are “confusingly similar” to her registered marks;
  • rely on a clearer and more uniform enforcement mechanism, particularly in the US;
  • target deepfakes used in commercial contexts, such as advertising, endorsements or branded content.

This is a subtle but important shift. Trade mark law is concerned with origin and consumer perception. In cases where AI-generated content suggests endorsement or affiliation, that framework may be easier to engage than copyright.

Swift’s strategy is also deliberately narrow. Rather than attempting to monopolise her voice as such (which would be legally ambitious), she has focused on short, distinctive spoken phrases: exactly the kind of material that can function as a badge of origin in commercial use.

Sound marks and synthetic identity

While sound marks themselves are not new, their application to a human voice in this way remains relatively untested. The filings therefore sit at the edge of existing doctrine, raising questions as to how far trade mark law can stretch to accommodate AI-driven risks.

That said, they also reflect a broader trend. Other public figures, including Matthew McConaughey, have pursued similar filings, signalling a growing willingness among brand owners to use trade marks creatively to address AI misuse.

Why this matters for brand owners

Swift’s approach highlights a broader commercial reality: deepfakes are not just a reputational or ethical issue; they are increasingly an IP and monetisation issue. In particular:

  • Commercial exploitation: AI-generated endorsements or branded content can divert value from rights holders.
  • Erosion of distinctiveness: repeated unauthorised uses risk weakening the commercial strength of a brand identity.
  • Licensing pressure: where convincing imitations are readily available, the incentive to obtain legitimate licences may diminish.
  • Enforcement at scale: infringing content can be created and disseminated rapidly, often across multiple jurisdictions.

Viewed in this light, deepfakes challenge a core premise of IP law: that creators and brand owners can control and commercialise identifiable intangible assets.

A developing and imperfect tool

It is important not to overstate the reach of trade marks. Their effectiveness will depend on context, particularly whether the impugned use occurs “in the course of trade” and gives rise to consumer confusion (and/or unfair advantage in the case of reputed marks).

Equally, trade marks do not prevent the creation of deepfakes per se. At best, they provide a mechanism to address certain downstream uses. The extent to which courts will be willing to apply trade mark principles to AI-generated identity remains to be tested.

Nevertheless, these filings are likely to have a signalling effect. They demonstrate that rights holders are actively adapting their IP strategies to technological change, and may encourage others to follow suit.

Practical takeaways

For UK practitioners and brand owners, a few points emerge:

  • Non-traditional marks (including sound and image marks) may play an increasingly important role in brand protection strategies.
  • Deepfake risk should form part of IP audits, particularly where a business relies on personal brand or recognisable identity.
  • Layered enforcement remains essential: trade marks, passing off and (where applicable) copyright will need to be deployed in combination.
  • Watch for test cases: the boundaries of trade mark protection in the AI context are likely to be litigated in the near future.

Conclusion

Swift’s trade mark filings are not just a headline-grabbing development; they are a reflection of a deeper shift in how identity is protected in the digital economy.

As deepfakes become more sophisticated, the line between imitation and infringement is increasingly blurred. Trade mark law, traditionally concerned with origin, may prove to be an adaptable, if imperfect, tool in addressing that challenge.

For IP lawyers, the message is clear: the protection of “brand” is no longer limited to names and logos. In the age of AI, it extends to voice, image, and the very idea of identity itself.

If you would like to discuss how these developments may affect your business, or to explore practical steps for protecting your brand, content or identity against AI-generated misuse, please get in touch with Hannah Simpson at [email protected] or anyone from the rest of the Edwin Coe IP team. We would be happy to advise on trade mark strategy, enforcement options and broader risk management where deepfakes or synthetic media present a concern, particularly for clients whose value is closely tied to reputation, personality or distinctive brand assets.

Please note that this blog is provided for general information only. It is not intended to amount to advice on which you should rely. You must obtain professional or specialist advice before taking, or refraining from, any action on the basis of the content of this blog. Please also see a copy of our terms of use here in respect of our website which apply also to all of our blogs.

© 2025 Edwin Coe LLP