A recent video of what appeared to be Tom Cruise surfaced on TikTok and immediately made headlines—not because of who starred in the video, but because of who did not. The video portrayed an actor who, with the help of machine learning, looked and sounded strikingly similar to Tom Cruise. Daniel Victor, Your Loved Ones, and Eerie Tom Cruise Videos, Reanimate Unease with Deepfakes, New York Times (March 18, 2021).
The Tom Cruise video is what is known as a “deepfake,” a term coined by a Reddit user to describe increasingly realistic video imitations made using machine-learning algorithms and available media. Aja Romano, Jordan Peele’s Simulated Obama PSA Is a Double-Edged Warning Against Fake News, Vox (April 18, 2018); see also Peter Brown, Three Cheerleaders Victimized by Deepfake Videos, N.Y.L.J. (March 22, 2021). Though deepfakes are not new, their realism is improving rapidly, and with that improved realism comes viewers’ growing inability to distinguish fact from fiction. As the line between reality and fiction blurs further by the day, it is only a matter of time before viewers act to their detriment on information provided through a deepfake. And where corporations are involved—even as the subjects of the misinformation—viewers may look to the corporation to make them whole for any financial injuries. Thus, for corporations seeking to reduce cyber risk and protect their bottom line, the question of whether insurance will respond to such fact patterns should not be deferred to the future. The artificial synthesis of video has the potential to wreak great havoc on individuals, businesses, and society. Businesses need to be prepared for this disruption and to consider whether their insurance assets will respond.