Fraudulent activity in the financial industry is nothing new. The techniques employed by fraudsters have ranged from check and credit card fraud to identity theft and financial account takeovers, and for years financial institutions have implemented a variety of measures to combat these schemes. Now, thanks to the proliferation of sophisticated, low-cost artificial intelligence (AI) technologies, financial institutions are facing tougher challenges than ever before. One of the biggest is that methods commonly used to prevent fraud, such as phone or video calls, are now being turned against institutions by criminals using deepfakes. This article discusses the risks these technologies pose to financial institutions and offers practical solutions to consider as you continue to develop your fraud prevention program.

What are Deepfakes?

A deepfake is fake image, audio, or video content depicting events or individuals, created using a type of AI called deep learning. Many of you have likely seen this technology in action in recent months. The examples are far-ranging, from videos of Mark Zuckerberg claiming “we own you” and Morgan Freeman explaining that “I am not Morgan Freeman” while providing an overview of synthetic reality, to Jon Snow bemoaning “I’m sorry we wasted your time” and apologizing for the ending of Game of Thrones (explicit). Videos are not the only content being faked. You’ve also likely seen the viral fabricated images of Pope Francis wearing a floor-length white puffer jacket and of former President Donald Trump’s arrest. Complicating matters further, a variety of low-cost tools can generate deepfake audio, such as the spoofed voicemail recently used in an attempt to obtain a fraudulent money transfer.