A new Anthony Bourdain documentary has sparked controversy after it was revealed that a portion of the lines heard in the late host’s voice were actually generated using AI voice synthesis.
Documentary filmmaker Morgan Neville doesn’t care what you think. He’s made that clear in basically every interview he’s given over the past several days, as he’s stomped through the media rounds to make the point that he’s not sorry he used a so-called “deepfake” of Anthony Bourdain’s voice in his latest film. In his opinion, he has nothing to be sorry for.
“If you watch the film… you probably don’t know what the other lines are that were spoken by the A.I., and you’re not going to know,” Neville said in an interview with the New Yorker. “We can have a documentary-ethics panel about it later.”
Since his flippant reaction to the outcry, however, the film’s production company has clarified that just three lines in the documentary were synthesized this way, and that all three were quotes from Bourdain’s own writing. Perhaps defending himself just a bit, Neville is quoted in the same interview as saying that he “wasn’t putting words in [Bourdain’s] mouth, just trying to make them come alive.”
The outcry seems fairly justified in this case, since the documentary directly implied that these were real recordings of Bourdain’s voice, and since Neville’s claim to have consulted with Bourdain’s estate appears to be false.
But what about narrative film?
We all remember various CG versions of deceased actors in recent films, but those were always done with explicit buy-in from the actors’ estates. In Star Wars: The Rise of Skywalker, filmmakers wove posthumous CG moments featuring Carrie Fisher together with real footage of her performances, broadly mirroring the real-fake audio mixture in this documentary.
The difference, of course, is that Star Wars is a fictional franchise, while documentaries purport to show only the truth. The creators of Star Wars also made Fisher’s CG-ification into a major media push; between their decision to heavily publicize the use of CG and the fact that these scenes are visually easy to distinguish from real footage, nobody felt lied to. Some viewers found the moments cheesy or in poor taste, but nobody felt they were intended to dupe audiences into believing that Fisher had actually delivered all of these performances while alive.
So, if Neville had come right out and made a point of the fact that AI lines were used, rather than treating it like a shameful secret to be uncovered, would there have been a backlash at all? Or, if Fisher’s CG likeness had been photo-realistic enough to fool most viewers, would it have then become inappropriate?
Anthony Bourdain is maybe the worst possible person to deepfake
Viewers felt a strong emotional connection to Bourdain; he was able to evoke the feeling that you knew him, and that he was being uncommonly open and honest with his audience. This is probably why the AI voice controversy has hit so hard: his fans feel that authenticity was his entire persona. Turn even a few of his lines into those of a robot and, in many people’s view, you are perverting his legacy.
There’s also the fact that many believe Bourdain would have hated this sort of accelerationist use of technology. He often argued against modern trends he saw as damaging, and lived a very dad-ly existence, with only partial incorporation of technology into his life. He was the sort of guy to shake his head at a new social media feature – so what can we conclude he would have thought about this?
Still, these are Bourdain’s own written words coming out of his robotic mouth. If he were alive, Bourdain could have recorded his own writing without controversy – so it’s worth asking whether an AI voice delivering those same words really changes all that much.
It’s not just the deceased who can be (deep)faked
The film industry is going to need to deal with the fact that AI-generated versions of actors will get cheaper and cheaper relative to the real thing. Voice acting might come first – pay Ellen DeGeneres a few million dollars to sit and record half the words in the English dictionary, then build any number of cartoon-character performances out of that. Hell, if a Western actor recorded enough Japanese phonemes, their voice could star in commercials while fluently speaking a language they know nothing about.
Fakes will always be less controversial in ads and fictional films than in news and documentary content. Could news organizations deepfake a celebrity or political figure into reading a controversial tweet in their own voice? Would this be slander, or simply a way of helping the audience hear the personal reality of what was written? Bear in mind that commenters have criticized even the slight changes introduced by colorization; the creation of all-new lines is sure to ignite passions among creators and viewers alike for quite some time to come.
The industry clearly has a long way to go before it can reach firm decisions on these issues, especially given how much money could be made or lost as a result. Until it does, however, it’s the most strident directors and producers, like Neville, who will push things forward – whether the rest of us like it or not.
Do you think that deepfakes have any place in narrative film? What about in documentaries such as this? Let us know in the comments below!