
Explicit Taylor Swift AI images cause legal panic

Last week, explicit images of Taylor Swift created using AI were shared across Twitter (X), with some posts gaining millions of views. The ensuing legal panic could have ramifications for the use of celebrity likeness, and AI images in general.

Taylor Swift was the victim of nonconsensual AI image generation last week. Explicit, pornographic images were created without her consent and shared across X (formerly Twitter) by thousands of users.

The posts were viewed tens of millions of times before they were removed and scrubbed from the platform.

The ensuing fallout has been swift, with X tweaking its censorship filters over the weekend to remove any mention of the images. US politicians are calling for new laws to criminalise deepfakes as a direct result, and Microsoft has committed to adding more guardrails to its Designer app in order to prevent future incidents.

These latest developments in the deepfake controversy follow many years of unethical pornographic content online, most of which strips victims of autonomy over their own likeness. It’s a problem that concerns celebrities and ordinary people alike, as AI tools become more commonplace and accessible to anyone and everyone.

Taylor’s high-profile status and devoted fanbase have helped push this issue to the forefront of the news, and will no doubt alert policymakers, social media platforms, and tech companies in a way that we haven’t seen until now.

While stricter laws should have been put in place long ago, we’re likely to see much-needed progress in the coming weeks and months. The ramifications could be wide-reaching and affect AI image generation in general – not just celebrity likenesses or explicit content.


How could the legal fallout affect AI image generation in the future?

So, what specifically is happening legally to combat Taylor’s deepfake AI content?

On Tuesday, a bipartisan group of US senators introduced a bill to combat the spread of nonconsensual, sexualised images generated by AI. It would allow victims to seek a civil penalty against ‘individuals who produced or possessed the forgery with intent to distribute it’.

In addition, anyone who receives such images or material knowing they were created without consent would also be covered.

Dick Durbin, US Senate majority whip, and senators Lindsey Graham, Amy Klobuchar, and Josh Hawley are behind the bill. It is being called the ‘Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024’, or ‘Defiance Act’ for short.

The explicit images also reached the White House. Press Secretary Karine Jean-Pierre told ABC News on Friday that the government was ‘alarmed by reports of circulating images’.

All of this follows another bill, the No AI FRAUD Act, which was introduced on January 10th, 2024.

That bill aims to create a ‘federal, baseline protection against AI abuse’ and to uphold First Amendment rights online. It places particular emphasis on an individual’s right to their likeness and voice against AI forgeries. If passed, the No AI FRAUD Act would ‘reaffirm that everyone’s likeness and voice is protected and give individuals the right to control the use of their identifying characteristics’.

Of course, legal protection is one thing; enforcing it across the entire internet is another. There is also the contentious issue of free speech and expression – and of determining where liability should fall.

Should the software platforms that make AI images possible be restricted or punished? We don’t limit Photoshop, for example, despite it being a prominent tool for creating misleading images. Where AI companies should stand legally is currently unclear.


Why should we be worried about non-consensual AI image generation? 

There are a ton of issues that come with AI image generation and nonconsensual deepfaking, most of which are probably obvious.

Most AI-generated pornography depicts women and is made without the subject’s consent. A 2019 study found that 96% of all deepfake videos were explicit, featured women, and were made without their approval. That is a huge majority, and it indicates that deepfake content is largely rooted in overt misogyny and sexual harassment.

To put it plainly, legal frameworks have failed to keep up with AI content and the risks associated with it. Women are increasingly under threat, whether they are top-tier celebrities like Taylor Swift or unassuming members of the public. The risk keeps growing, too, as the software becomes easier to use and more readily available.

We already know that there are many communities online, often on platforms such as 4chan, Telegram, and Discord, that generate and circulate pornographic content of famous women without permission. It is unregulated and almost impossible to crack down on entirely, regardless of any well-meaning US Congress bill.

All of this is without considering the consequences of deepfake imagery and video in our political spheres, where misinformation will likely become even more rampant than it already is. With no proper legal liability for identity theft online, how can we stop anyone from making a celebrity appear to say or do anything?

It’s hard enough getting our grandparents to work out what’s real online and what isn’t. Now that AI has exploded into audio, video, and even copywriting, telling human content from bot content has become a near-impossible minefield. This failure is harming women, eroding ownership over our own likenesses, and ultimately damaging our ability to communicate online.

Taylor Swift may be spearheading this issue for now, but the reality is that deepfake content and AI pornography have been circulating on the internet for a long while, festering in seedy corners of forums and going unchecked. It is a substantial problem that can’t be fixed with a single bill or motion.

More needs to be done to protect everyone, not just the ultra-famous.
