
Netflix’s undisclosed AI use sparks debate on media manipulation

Netflix has come under fire for including undisclosed AI imagery in a new true crime documentary. The backlash has sparked important conversations around media manipulation. 

Like it, fear it, or loathe it, AI is now a part of our everyday lives. From education to entertainment, artificial intelligence is only set to keep growing, shaping nearly every aspect of our interaction with the media.

But these changes call for boundaries. Both AI’s unpredictability and its unprecedented growth make it a dangerous tool if not handled with care. And in recent days, two huge media companies have come under fire for failing to do just that.

Just days after studio A24 faced criticism for using AI posters to promote the new film ‘Civil War’, Netflix has started a heated debate about the use of artificial intelligence in TV and film, especially when viewers aren’t made aware of it.

The streaming service was found to have used AI-generated imagery of convicted murderer Jennifer Pan in its new true crime documentary ‘What Jennifer Did.’ The film follows police as they investigate the brutal killing of Pan’s mother and the attempted murder of her father, initially believing Jennifer to be an innocent witness.

She eventually confessed to orchestrating a murder-for-hire plot against her parents in an effort to inherit their money.

Around 28 minutes into the film, Pan’s high school friend, Nam Nguyen, describes her as ‘great to talk to.’

‘Jennifer, you know, was bubbly, happy, confident, and very genuine,’ he explains, as three images of Pan appear on screen.

The photos have since come under scrutiny, as objects in the background, along with details of Jennifer’s hands, ears, and hair, appear distorted.

Viewers have accused Netflix of using AI to manipulate them, presenting a false image of Jennifer Pan to heighten the film’s dramatic climax.

‘Using AI to manipulate the viewer during nonfiction content is definitely top shelf evil,’ said one Instagram user. The overwhelming consensus amongst Netflix’s audience has been negative, with many highlighting the irony of using false imagery in a film about uncovering ‘the truth.’

While Netflix has yet to comment on the backlash, the decision to use AI-generated imagery without disclosure raises serious ethical questions about the responsibilities of content creators.

By presenting AI-generated images as real, content creators risk distorting the truth and perpetuating misinformation. In the case of ‘What Jennifer Did,’ viewers may have formed inaccurate perceptions of Jennifer Pan based on the fabricated visuals presented in the documentary.

The manipulation of visuals through AI also raises concerns about the erosion of trust in the media. When viewers can no longer differentiate between real and artificial imagery, the credibility of mainstream media comes into question. Trust is the foundation of journalism and filmmaking, and incidents like this only serve to undermine it.

A post shared by MODERNNOTORIETY (@modernnotoriety)

Some internet users have suggested the rise of AI in popular culture, particularly its undisclosed appearance in documentary content, signals the end of the Information Age.

While laws regulating the use of AI are in place across the US and Europe, there is currently no specific legislation governing the use of AI-generated images or video in documentaries or other content.

The unchecked use of AI has the capacity to impact those working in the industry, too. As artificial intelligence technology continues to advance, there’s a growing fear that traditional roles in filmmaking, such as makeup artists and visual effects specialists, may become obsolete.

And if AI-generated imagery becomes the norm, it could have profound implications for the future of employment within entertainment media.

When companies fail to inform audiences that AI-generated material is present in their content, viewers’ ability to differentiate between what’s real and what’s fake is eroded. Each undisclosed use also further normalises AI as a legitimate entertainment tool in the minds of content creators.

Producers and media companies have a responsibility to disclose the use of artificial imagery and ensure that viewers are aware of any manipulations or alterations made to visuals. Transparency fosters trust and accountability, essential elements in maintaining the integrity of the content we consume every day.
