On New Yearâs Eve 2023, Brian Quintero discovered that cybercriminals had accessed his bank account through his online app, and emptied all the money he had – approximately USD 760. Mr. Quinteroâs neobank indicated that it was likely that cybercriminals had illegally used artificial intelligence (AI) to generate movement in photographs of Mr. Quinteroâs face, allowing them to bypass the appâs facial recognition protocols.
Unfortunately, Mr. Quinteroâs experience had been just one of several thousand seen by fintechs worldwide and their account holders in the preceding months. It seems that whatever form of authentication fintech companies adopt (in this case, biometrics, which is widely considered the gold standard when it comes to delivering both superior security and user convenience), insidious fraudsters will find a way around it.
Or will they?
Presentation and Injection Attacks
First, letâs start by understanding the two main types of attacks posing a threat to biometric authentication systems. The first is known as a âpresentation attackâ – a deliberate attempt to deceive a biometric authentication system by presenting fake or altered biometric data, such as facial images, fingerprints, voice or iris, to a deviceâs camera or microphone. These attacks can take many forms, including presenting printed photos, digital images, lifelike masks, or videos of someone elseâs face directly to the camera. The ready and public availability of such content on social media sites makes it very easy for nefarious actors to pull such materials and present them for fraudulent authentication attempts.
At the same time, generative AI (GenAI) can create highly realistic deepfakes, which can be used to perform presentation attacks by showing manipulated videos of legitimate users. These deepfakes can imitate facial expressions, voice patterns, and other biometric traits, making it extremely challenging for biometric authentication systems to differentiate between what's real and what's not.
The second type, known as an "injection attack," is considered more sophisticated and threatening than a standard presentation attack. This occurs when a malicious actor attempts to insert, or "inject," a deepfake image or video directly into a biometric authentication system, in an effort to fool the system into believing that a fabricated image or video came directly from the device's camera. As with presentation attacks, deepfakes enable the creation of highly convincing synthetic biometric data that, when injected into a system, elevates the threat level. But injection attacks are even more effective because they bypass the camera entirely, making it very hard to verify that biometric data is genuine at the time of capture.
Fighting AI with AI: Fortifying Fintech Defenses
None of this AI-driven evolution means that biometric authentication has outlived its usefulness. Biometrics are growing increasingly entrenched, with a recent consumer survey finding that almost half of all consumers use biometric authentication "always" or "often" to access mobile apps. Within fintech specifically, biometric authentication has grown rapidly in recent years and become an increasingly popular method of verifying identity.
The answer lies in fortifying our defenses and fighting AI with AI, using technologies fused with deepfake and presentation attack detection (PAD) algorithms. One such technology is liveness detection, which determines whether a biometric sample comes from a live person or a spoof. As noted, certain injection attacks can emulate camera capture with non-live digital imagery in a way that defeats some liveness detection measures. Moreover, increasingly sophisticated deepfakes across the spectrum of attacks (both injection and presentation) can certainly pose a threat to liveness detection.
However, liveness detection remains one of the most effective and sophisticated ways to combat such attacks. It works in various ways, depending on the biometric modality being used, such as face, fingerprints, or voice. When it comes to facial recognition, liveness detection may include "passive" forms that run in the background of a biometric authentication process and don't require user input, such as a system that scans the user's face for natural movements like blinking. "Active" forms of liveness detection, which involve user input, may instruct the user to blink, smile, or nod their head. Genuine users will respond with natural, involuntary movements that can be detected, whereas static images or videos cannot replicate these movements.
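To make the blink-checking idea concrete, here is a minimal sketch of how a passive facial liveness signal might work. It uses the well-known eye aspect ratio (EAR) over six eye-contour landmarks per video frame; the landmark coordinates, the 0.2 threshold, and the two-frame minimum are all invented for illustration, and a real system would obtain landmarks from a face-tracking model and tune these values empirically.

```python
# Illustrative sketch only: blink detection via the eye aspect ratio (EAR).
# A static photo presented to the camera produces a flat, blink-free EAR
# series, while a live face shows brief dips as the eyelids close.
from math import dist

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered around the eye contour."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = dist(p2, p6) + dist(p3, p5)   # eyelid openings
    horizontal = 2 * dist(p1, p4)            # eye width
    return vertical / horizontal

def detect_blink(ear_series, threshold=0.2, min_frames=2):
    """A blink is a run of at least `min_frames` consecutive low-EAR frames."""
    run = 0
    for ear in ear_series:
        run = run + 1 if ear < threshold else 0
        if run >= min_frames:
            return True
    return False

# Hypothetical landmarks: an open eye (EAR ~0.33) and a closed eye (EAR ~0.1).
open_eye = [(0, 0), (2, 1), (4, 1), (6, 0), (4, -1), (2, -1)]
closed_eye = [(0, 0), (2, 0.3), (4, 0.3), (6, 0), (4, -0.3), (2, -0.3)]

# Five open-eye frames followed by a two-frame blink:
ears = [eye_aspect_ratio(open_eye)] * 5 + [eye_aspect_ratio(closed_eye)] * 2
print(detect_blink(ears))  # True: the two low-EAR frames register as a blink
```

An active variant would issue the blink instruction first and then require the dip to occur within a short response window, which is harder for a pre-recorded video to satisfy.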
On a more advanced level, facial liveness detection may include a 3D liveness check to combat 2D spoofing attempts. 3D facial recognition can use depth perception to collect more information about facial expressions and subtle changes, making it harder for fraudsters to bypass security. When it comes to voice recognition, new algorithmic tools can identify synthesized voices within milliseconds by detecting specific spectral artifacts inaudible to the human ear. Such artifacts are typically left by speech conversion and text-to-speech generators.
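As a toy illustration of spectral-artifact screening, the sketch below flags audio frames with implausibly little high-band energy, since some vocoders produce band-limited output. The naive DFT, the synthetic test signals, and the 10% floor are all assumptions made for the example; production detectors use trained models over far richer spectral features.

```python
# Toy illustration only: flagging band-limited audio frames, a crude stand-in
# for the spectral-artifact checks used to screen synthesized speech.
import math

def power_spectrum(frame):
    """Naive DFT power spectrum (first half of the bins)."""
    n = len(frame)
    spectrum = []
    for k in range(n // 2):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        spectrum.append(re * re + im * im)
    return spectrum

def looks_band_limited(frame, floor=0.10):
    """Flag frames whose upper-half-band energy share falls below `floor`."""
    spec = power_spectrum(frame)
    half = len(spec) // 2
    high = sum(spec[half:])
    total = sum(spec) or 1.0
    return high / total < floor

n = 64
# "Natural" frame: broadband content (a low tone plus a high-frequency tone).
natural = [math.sin(2 * math.pi * 3 * t / n)
           + 0.8 * math.sin(2 * math.pi * 25 * t / n) for t in range(n)]
# "Synthetic" frame: low tone only, mimicking band-limited vocoder output.
synthetic = [math.sin(2 * math.pi * 3 * t / n) for t in range(n)]

print(looks_band_limited(natural))    # False: plenty of high-band energy
print(looks_band_limited(synthetic))  # True: high band is suspiciously empty
```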
Finally, fingerprint liveness detection uses advanced techniques like texture analysis, which involves examining the fine details and textures of the subject's skin or fingerprint. Genuine skin will exhibit unique features and perspiration patterns that are difficult to replicate with a photo or synthetic material. Of course, requiring multiple biometric inputs, such as any combination of facial images, fingerprints, voice, or iris (aka multimodal biometrics), combined with liveness detection, is one of the most secure ways to use biometric authentication. A highly specialized attacker may be able to fool one biometric authenticator combined with liveness detection, but it's doubtful they will be able to fool two or more in the same attack.
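The multimodal argument can be sketched as simple score-level fusion: each modality yields a match score, and a weighted combination must clear one decision threshold. The modality weights, scores, and 0.75 threshold here are hypothetical values chosen for illustration.

```python
# Illustrative sketch of score-level fusion for multimodal biometrics.
# Scores are assumed to be similarity values in [0, 1]; weights and the
# threshold are invented for the example.
def fuse_scores(scores, weights, threshold=0.75):
    """Accept only if the weighted sum of modality scores clears the bar."""
    total = sum(scores[m] * weights[m] for m in scores)
    return total >= threshold

weights = {"face": 0.5, "voice": 0.3, "fingerprint": 0.2}

# Genuine user: strong scores across all modalities.
genuine = {"face": 0.92, "voice": 0.85, "fingerprint": 0.88}
# Attacker whose deepfake fools the face matcher but nothing else.
spoof = {"face": 0.95, "voice": 0.20, "fingerprint": 0.15}

print(fuse_scores(genuine, weights))  # True  (weighted sum ~0.89)
print(fuse_scores(spoof, weights))    # False (weighted sum ~0.57)
```

Even a perfect face spoof fails here, because the attacker's weak voice and fingerprint scores drag the fused total below the threshold, which is the intuition behind "doubtful they can fool two or more in the same attack."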
Itâs also important to note that biometric technologies and algorithms can be configured for different levels of security to be applied for higher or lower-risk activities. There isnât a âone size fits allâ solution when it comes to defending against fraud and generative AI threats, so choosing the right detection technologies with customized security settings, combined with multimodal biometrics, is essential for the right protection.
The rise of generative AI has introduced new challenges to identity verification, pushing fintechs to rethink their approach to security. As biometric authentication becomes more widely used, both broadly and within fintech specifically, the need for enhanced security measures that don't compromise the customer experience is critical. The reality is that fraudsters and security systems are locked in a constant game of cat and mouse. But the good news? Ongoing advancements in biometric and liveness detection technology are keeping fintechs one step ahead, ensuring continuous improvements and better protection.
Todd Jarvis, Global Head of Partnerships at Aware
Todd Jarvis is the Global Head of Partnerships at Aware. Todd brings 20 years of experience and a proven track record in launching partner programs, building partner ecosystems, and managing high-performing sales teams. Prior to joining Aware, Todd held senior leadership roles at Liferay, Profisee, Oracle, Nomi, and UPS Supply Chain Solutions. In addition, Todd often advises SaaS start-ups in the areas of GTM strategy and international market expansion. Todd received a BS in Business Administration from UNC Chapel Hill and an MBA from Duke University.