When I discussed deepfakes in an earlier column, I conceptualized their uses as primarily associated with video: videos made with artificial intelligence that purported to show people doing and saying things they had never done or said. I posited the upcoming election cycle and videos purporting to provide evidence of criminal conduct as particular areas for mischief: pictures worth far less than a thousand words. But as with so much emerging technology, uses evolve. The recent documentary of Anthony Bourdain’s life and suicide, “Roadrunner,” includes three audio clips purporting to be Bourdain speaking that were in fact made using artificial intelligence. Audio deepfakes. Deepfakes allow us to make the dead speak, or at least to imitate them doing so.
The director, Morgan Neville, told The New Yorker and GQ that three audio clips of Bourdain’s voice included in “Roadrunner” were made using artificial intelligence. How good are they? Good enough that no one knew any had been included until The New Yorker and GQ interviews revealed it. And good enough that, while the director has identified one of the clips, there is only speculation as to what the other two might be. Some say that, when studied, the one identified clip sounds a little different. I urge you to listen for yourself; the difference was not apparent to my ear.