
Study reveals explicit deepfakes are rife among US teens

One in eight US teenagers under 18 personally know someone who has been targeted with pornographic deepfakes. What is being done about this privacy crisis?

We all know what deepfakes are at this point. Generative AI is adept at accurately capturing a person’s likeness and superimposing it over existing photos or videos. A mere text prompt and a handful of JPEGs are sufficient to do so.

The nefarious potential of the technology was discussed as far back as 2019. While fears of faux propaganda manipulating world politics proved exaggerated, warnings that deepfake pornography could create a privacy crisis have been vindicated. In fact, roughly 98% of all deepfake content today falls under this bleak category.

The latest report from Thorn, a company focused on child safety solutions, underscores the scope of the problem. The salient statistic is that one in eight US teens (under 18) claim to personally know someone who has been victimised by deepfake pornography.

This data was obtained through an online survey of 1,200 people aged 13 to 20, conducted in October 2024, including more than 700 teenagers aged 13 to 17. Of the respondents, 48% identified as male, 48% as female, and 4% as a gender minority.

As many as one in 17 said they had fallen victim to deepfake images or videos themselves. Given that the average class size in US public schools is around 18 students, that works out to roughly one directly affected student per classroom.

Previous research from the same organisation found that one in 10 minors (aged 9 to 17) had heard of instances where their peers used AI to generate porn of other students. Unsurprisingly, girls were targeted in the overwhelming majority of cases across both papers.

Superimposing someone’s likeness onto an AI creation or real-life content is, by default, a violation of the subject’s consent, autonomy, and privacy. The vast majority of intimate deepfakes are therefore recognised as illicit – and their volume is doubling every six months.

When it comes to the safety of adolescents, or even children, the stakes are higher for obvious reasons – or so you’d like to think. In reality, the absence of blanket national legislation to tackle the distribution of deepfakes is failing to stem their spread on all fronts, including in schools.

How ‘offences’ are handled depends on each institution and state, with many yet to establish their own definitions and penalties. In Virginia, creating and sharing sexual deepfakes is usually considered a Class 1 misdemeanour, but in Louisiana, the same act may carry a prison sentence of up to 10 years.

In cases where both the victims and perpetrators are minors, the legal terms become even murkier. Experts say arrests and charges under these circumstances are incredibly rare.

‘My guess [as to why] is just that there may be a general recognition that arresting children isn’t going to resolve this,’ said Riana Pfefferkorn, policy fellow at the Stanford Institute for Human-Centered AI, in a conversation with Forbes.

The proliferation of generative AI is accelerating at such a breakneck pace that it’s incredibly difficult to draft legal protections for citizens. The misconception that illicit deepfake toolkits are only available on the black market is also far from the truth: dozens are continually spawning on accessible websites and app stores.

We’re without doubt fighting an uphill battle to claw any semblance of control over this privacy nightmare, but there is hope a bipartisan bill may help to safeguard minors and adults alike in the near future.

A quartet of senators from different states, fervently backed by First Lady Melania Trump, are pushing the ‘Take It Down Act’, which aims to criminalise the distribution of non-consensual intimate images (including deepfakes) across the nation.

Crucially, it would hold host platforms accountable for publishing such material as well as individual perpetrators. Failure to comply and remove content within 48 hours of a report, the terms read, would result in hefty fines courtesy of the Federal Trade Commission.

The Senate unanimously approved the Take It Down Act last month, and it remains under consideration in the House of Representatives, where strong collective support is fuelling hopes that it will pass.

Could the US be about to consign this hideous chapter to digital history? We truly hope so.
