FinTech Interview with Todd Jarvis, Global Head of Partnerships at Aware

FTB News Desk · May 27, 2025 · 27 min read

Todd Jarvis of Aware explores deepfakes, biometrics, and the future of fraud prevention in this insightful FinTech interview.

Todd Jarvis, Global Head of Partnerships at Aware

Todd Jarvis is the Global Head of Partnerships at Aware. Todd brings 20 years of experience and a proven track record in launching partner programs, building partner ecosystems, and managing high-performing sales teams. Prior to joining Aware, Todd held senior leadership roles at Liferay, Profisee, Oracle, Nomi, and UPS Supply Chain Solutions. In addition, Todd often advises SaaS start-ups in the areas of GTM strategy and international market expansion. Todd received a BS in Business Administration from UNC Chapel Hill and an MBA from Duke University.

Welcome, Todd. With your extensive experience in building global partnerships and partner ecosystems, how has your background shaped your perspective on biometric security and fraud prevention?
Building a global partner ecosystem for Aware has shown me just how universal and expansive the problem of identity theft and fraud is, across all industries and geographies.
As an example, fraud losses in card payments continue to rise worldwide, with the Nilson Report projecting more than $400 billion in global losses over the next ten years. The U.S. is expected to be particularly hard hit because, to date, American merchants and card issuers haven’t fully adopted the strongest fraud-fighting technologies, such as biometrics.

Deepfake technology and presentation attacks have advanced significantly in recent years, particularly within financial services. How have these threats evolved, and what emerging patterns are most concerning?
It used to be that deepfake technology and presentation attacks were rather primitive in nature – for example, fraudsters presenting printed photos, digital images, or videos of someone else’s face directly to a camera. In recent years, these attacks have grown significantly more sophisticated. One increasingly widespread example is the injection attack, in which a nefarious actor ‘injects’ a deepfake image or video directly into a biometric authentication system, fooling it into believing the fabricated media came through the device’s camera.

In what ways do deepfakes exploit vulnerabilities in biometric authentication systems? Are certain biometric modalities—such as facial recognition or voice authentication—more susceptible than others?
Today, generative AI can create highly realistic deepfakes that imitate facial expressions, voice patterns, and other biometric traits, making it very hard for biometric authentication systems to differentiate between what’s real and what’s not. And when it comes to injection attacks, the injected media bypasses the camera entirely, making it challenging to ensure that biometric data is genuine at the point of capture.

I would say that face and voice recognition are more susceptible to deepfakes than other biometric modalities, due to the ready availability of photos and videos on social media. Virtually anyone can pull down a photo or video of someone online and use it to create a deepfake. “Amateur” deepfakes, known as cheapfakes, can be produced with off-the-shelf editing tools like Adobe’s, and anyone who has taken a basic computer animation class can create them. They are essentially the equivalent of low-budget special effects, easy to produce with limited skills.

What are the most effective technological approaches for detecting and mitigating deepfake-based attacks on biometric security? Are AI-driven detection mechanisms evolving quickly enough to counteract generative AI advancements?
One of the most effective approaches for detecting and mitigating deepfake-based attacks is the implementation of passive liveness detection. Unlike active liveness detection—which requires users to perform specific actions like blinking, turning their head, or following on-screen prompts—passive liveness detection works in the background, analyzing biometric data for signs of authenticity without any user interaction.

This passive approach is particularly well-suited for countering deepfakes and AI-generated spoofing attacks. Sophisticated generative AI models can now simulate facial movements and expressions that easily pass active liveness checks. But passive liveness systems use advanced technology to detect subtle cues invisible to the human eye—such as light reflection patterns on skin, micro-texture anomalies, or inconsistencies in 3D depth—that often reveal synthetic content. These are areas where even the most advanced deepfakes tend to falter.
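
To make the idea concrete, here is a minimal, hypothetical sketch of how a passive liveness check might fuse several such cues into a single decision. The cue functions below are illustrative stand-ins, not production logic; real products use trained models rather than hand-written heuristics.

```python
import numpy as np

# Hypothetical passive liveness scoring: each cue analyzer returns a score
# in [0, 1]; the frame is accepted as live only if the fused score clears
# a threshold. These heuristics are illustrative placeholders only.

def specular_reflection_score(frame: np.ndarray) -> float:
    # Live skin scatters light diffusely; replayed screens and prints often
    # show flat or saturated highlights. Proxy: fraction of near-saturated pixels.
    saturated = float(np.mean(frame > 250))
    return float(np.clip(1.0 - 10.0 * saturated, 0.0, 1.0))

def micro_texture_score(frame: np.ndarray) -> float:
    # Synthetic or re-captured faces tend to be unnaturally smooth.
    # Proxy: overall variance of pixel intensities.
    variance = float(np.var(frame.astype(np.float32)))
    return float(np.clip(variance / 2000.0, 0.0, 1.0))

def passive_liveness(frame: np.ndarray, threshold: float = 0.6) -> bool:
    scores = [specular_reflection_score(frame), micro_texture_score(frame)]
    return sum(scores) / len(scores) >= threshold

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in grayscale frame
print("live" if passive_liveness(frame) else "suspect")
```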

As for whether detection mechanisms are evolving quickly enough to keep pace with generative AI, I’m cautiously optimistic. The best solutions are now powered by machine learning models that are constantly retrained on new types of synthetic media. While the threat landscape is evolving rapidly, so too are the defensive systems that detect the telltale signs of manipulation.

Liveness detection is often regarded as a critical safeguard against presentation attacks. How effective is this technology in preventing sophisticated deepfake attempts, and where do existing gaps remain?
Liveness detection has proven to be a very effective approach for detecting and mitigating deepfake-based attacks. However, the increased “realism” of deepfakes, as well as injection attacks which mimic camera capture with non-live digital imagery, can defeat some liveness detection measures.

That said, liveness detection remains one of the most effective and sophisticated ways to combat such attacks. It’s a constant arms race, that’s for sure!

Given the increasing prevalence of deepfake-enabled fraud, should financial institutions look beyond biometric authentication and adopt multi-factor or continuous authentication models?
When it comes to surefire identity verification, biometrics combined with liveness detection are hard to beat. Requiring multiple biometric inputs in a multi-factor authentication (MFA) approach, such as any combination of face, fingerprint, voice, or iris (i.e., multimodal biometrics), paired with liveness detection is a virtually bulletproof way to authenticate a person. A highly specialized attacker may be able to fool one biometric authenticator combined with liveness detection, but it’s doubtful they’ll be able to fool two or more in the same attack.
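
A common way to combine modalities is score-level fusion: each matcher contributes a weighted similarity score, and liveness acts as a hard gate. The sketch below is a hypothetical illustration; the weights, threshold, and function names are assumptions, not any vendor’s implementation.

```python
# Hypothetical score-level fusion for multimodal biometric MFA. Each matcher
# returns a similarity score in [0, 1]; liveness acts as a hard gate that no
# match score can override. Weights and threshold are illustrative only.

MODALITY_WEIGHTS = {"face": 0.5, "voice": 0.3, "fingerprint": 0.2}
ACCEPT_THRESHOLD = 0.8

def authenticate(scores: dict, liveness_passed: bool) -> bool:
    if not liveness_passed:
        return False  # failed liveness rejects the attempt outright
    fused = sum(MODALITY_WEIGHTS[m] * s for m, s in scores.items())
    return fused >= ACCEPT_THRESHOLD

# A spoof might fool one matcher (e.g., face), but the other weighted
# scores drag the fused total below the acceptance threshold.
print(authenticate({"face": 0.92, "voice": 0.85, "fingerprint": 0.88}, True))  # True
print(authenticate({"face": 0.95, "voice": 0.20, "fingerprint": 0.30}, True))  # False
```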

With synthetic identity fraud on the rise, how are global regulatory bodies responding? What emerging compliance frameworks should financial institutions prioritize?
In response to rising rates of synthetic identity fraud, global regulatory bodies are stepping up to enhance consumer protections. The Federal Trade Commission’s IdentityTheft.gov program is one example, and financial services companies themselves are following suit.

In the U.S., credit card companies typically offer zero-liability policies for fraudulent charges—even though 46% of global credit card fraud occurs here. Like many financial services providers, they prioritize minimizing friction in authentication so highly that they’re willing to accept elevated fraud rates as a trade-off.

Initiatives like zero-liability policies reflect a growing regulatory trend toward holding financial services companies (including FinTechs) accountable for customers’ fraud-related losses. This is indirectly prompting them to step up their overall cybersecurity and identity verification methods, but they need a unique approach that allows them to satisfy the often-conflicting goals of superior convenience and security.

As generative AI capabilities expand, adversarial AI techniques are becoming more sophisticated in circumventing security measures. How can biometric security providers stay ahead in this ongoing technological arms race?
The most sophisticated deepfakes make use of generative adversarial networks (GANs), in which two algorithms compete and evolve: one learns to create deepfakes, while the other learns to detect them. This is where it becomes very difficult to discern between what’s real and what’s not. Biometric authentication providers can stay one step ahead in the arms race by utilizing GANs in their own liveness detection product development.
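
For readers unfamiliar with the mechanics, here is a toy GAN training loop in PyTorch. It runs on synthetic feature vectors and assumes nothing about any vendor’s models; it simply shows the generator/discriminator competition described above. All shapes and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

# Toy GAN sketch: generator G learns to produce fake "feature vectors",
# discriminator D learns to tell them apart from real ones.
DIM, NOISE, BATCH = 64, 16, 32

G = nn.Sequential(nn.Linear(NOISE, 128), nn.ReLU(), nn.Linear(128, DIM))
D = nn.Sequential(nn.Linear(DIM, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(BATCH, DIM) + 2.0  # stand-in for real biometric features
    fake = G(torch.randn(BATCH, NOISE))

    # Discriminator step: learn to label real samples 1 and fakes 0.
    d_loss = (loss_fn(D(real), torch.ones(BATCH, 1))
              + loss_fn(D(fake.detach()), torch.zeros(BATCH, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: produce fresh fakes that the updated D labels as real.
    fake = G(torch.randn(BATCH, NOISE))
    g_loss = loss_fn(D(fake), torch.ones(BATCH, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```
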
Another way is to offer new, emerging biometric modalities that are harder for generative AI to replicate – for example, palm vein recognition. Unlike facial recognition or fingerprint scans, palm vein biometrics rely on subdermal patterns—unique vein structures beneath the skin—captured using near-infrared light. These internal features are not easily photographed or reproduced, making them exceptionally difficult for generative AI to spoof.

Another powerful layer of defense lies in behavioral biometrics. By analyzing unique patterns in how individuals interact with devices—such as typing rhythm, mouse movement, gait, or touchscreen behavior—behavioral biometrics introduce dynamic and continuous authentication. These behaviors are deeply personal and context-specific, posing a significant challenge for AI to convincingly emulate in real time.
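
As a simple illustration of keystroke dynamics, one such behavioral signal, the hypothetical sketch below compares a typing sample’s cadence against an enrolled profile. Real systems model far more features (dwell time, pressure, mouse paths) with trained classifiers; the names and thresholds here are assumptions.

```python
import statistics

# Hypothetical keystroke-dynamics check: compare the inter-key timing of a
# typing sample against an enrolled per-user profile.

def inter_key_intervals(timestamps_ms):
    # Milliseconds elapsed between consecutive keystrokes.
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def matches_profile(sample_ms, profile_mean, profile_stdev, z_limit=2.5):
    intervals = inter_key_intervals(sample_ms)
    sample_mean = statistics.mean(intervals)
    # Accept if the sample's average cadence falls within z_limit standard
    # deviations of the enrolled user's typical rhythm.
    return abs(sample_mean - profile_mean) <= z_limit * profile_stdev

# Enrolled profile (illustrative): ~120 ms between keystrokes, stdev 15 ms.
print(matches_profile([0, 118, 241, 355, 480], 120.0, 15.0))  # True
print(matches_profile([0, 40, 85, 120, 170], 120.0, 15.0))    # False: far too fast
```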

Some industry experts advocate for blockchain-based decentralized identity frameworks as a potential countermeasure against synthetic identity fraud. Do you see this as a viable approach, or are there inherent limitations that financial institutions should consider?
At the highest level, a decentralized identity model challenges the idea that a third party is required to manage the sensitive data used in authentication. In the decentralized identity model, users authenticate themselves to a neutral third party only once, with proof of identity then saved in an identity trust fabric (ITF) that may include blockchain technology. This ITF acts as an intermediary between a user and all of their service providers, handling all identification and access requests. Any data held by the ITF is encrypted and protected by complex mathematical operations, raising security to unprecedented levels.

An immutable record of a person’s data being stored in an ITF or on a blockchain might sound a little scary and risky at first. But this is where the concept of decentralized identifiers, or DIDs, comes in. Traditionally, many digital services have relied on password-based logins, but given how easily passwords are lost, stolen, or hacked, this is a highly insecure approach. Multi-factor authentication schemes can increase security, but they often add friction that reduces user adoption and productivity. DIDs, on the other hand, securely confirm a true, unfalsifiable digital identity without inconveniencing users.

There are multiple ways to create and prove this true identity, with biometrics being one notable example. When one’s DID is linked to a unique, physical attribute, the individual can authenticate securely without having to reveal their name or any other identifying information.

We see decentralized identity using biometrics as a very viable approach. One example is crypto-biometrics, where biometrics are used to unlock access to, say, a bank account, without the biometric data ever leaving the user’s device (i.e., there is no central repository of biometric information). In this scenario, device-based configurations place the biometric functionality onto a person’s device; all biometric matching, template storage, and liveness detection happen on the device.
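
One way such a scheme can work, sketched hypothetically below, is a challenge-response flow: an on-device biometric match unlocks a locally stored private key, which signs a server-issued challenge, so only a public key ever leaves the device. The on_device_biometric_match() placeholder stands in for the local match-plus-liveness step; none of this reflects a specific vendor implementation.

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical device-based crypto-biometric flow: the private key never
# leaves the device, and the server stores only the public key.

def on_device_biometric_match() -> bool:
    return True  # stand-in: assume the local biometric match + liveness check passed

# Enrollment (once): keypair generated on the device; server keeps the public key.
device_key = Ed25519PrivateKey.generate()
server_public_key = device_key.public_key()

# Authentication: the server issues a random challenge; the device signs it
# only after the local biometric match succeeds.
challenge = os.urandom(32)
if on_device_biometric_match():
    signature = device_key.sign(challenge)
    server_public_key.verify(signature, challenge)  # raises InvalidSignature on failure
    print("authenticated; no biometric data ever left the device")
```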

Organizations that understand and capitalize on decentralized identity frameworks will create and benefit from a long-standing competitive advantage. These companies will reduce the often-heavy compliance burden of handling users’ private information. They will also enjoy a higher level of security and information protection themselves, with no central database of client information to hack.

Financial services organizations must ensure robust security while maintaining a seamless user experience. How can they achieve this balance when implementing more stringent biometric protections?
As we’ve noted previously, financial services firms and FinTechs are very focused on minimizing friction during the authentication process – so much so that they’re often willing to tolerate high rates of fraud and billions of dollars in losses each year as a cost of doing business. What’s great about biometric authentication, including multimodal MFA, is that it’s so fast, simple, and convenient that it has minimal impact on the user experience. It is really the ideal solution for organizations striving to achieve the holy grail of an exceptional user experience combined with superior security.

Todd’s parting advice: Data from a wide range of industries (like financial services) indicates a high level of consumer comfort with biometrics. That’s not to say there aren’t challenges ahead in building user trust. Communication will be key to educating users on the comprehensive steps taken to keep their data safe, the convenience of biometric login, and how biometrics will secure online experiences for all.
