Aided by technology, fraudsters attempt to mimic legitimate users in ways that traditional authentication tools were never designed to detect.
As generative technologies evolve, payments providers are confronting a reality in which confirming that a customer is genuine requires more than reviewing documents or matching biometric images.
That shift was the focus of a “What’s Next in Payments” interview with Richard Swales, chief risk and compliance officer at Paysafe. Swales said artificial intelligence presents an operational opportunity and a growing risk because the same tools that help financial institutions automate complex processes are also available to fraud networks.
“AI is a great opportunity for business to modernize and use technology and machine learning to make complex tasks very simple,” he said. “But by the same token, it’s available to everyone. The challenge is that there are always bad actors who will try to utilize those same tools against you.”
Onboarding Becomes Identity Battleground

The most exposed point in the identity life cycle often occurs when a customer or merchant first joins a platform. During onboarding, institutions must decide whether an applicant is legitimate before granting access to financial services. That moment has long attracted fraud attempts, but generative AI has made identity fabrication easier and faster.
Criminals frequently attempt to replicate legitimate merchants or consumers to enter payment systems, Swales said.
“I think the challenge is always going to be around onboarding a customer,” he said. “Bad actors will try to replicate a good merchant or a good customer and get access to services.”
Historically, onboarding relied on document verification and biometric checks. Customers uploaded images of government identification or submitted selfie photographs intended to prove physical presence. While those tools remain useful, they are susceptible to manipulation as synthetic identities improve.
“People have been trying to replicate faces and the verification for a long time,” Swales said. “And it’s getting more and more difficult to spot the difference between real and virtual.”
Identity Shifts to Behavior

As a result, many fraud prevention strategies are evolving toward behavioral intelligence. Instead of focusing solely on documents or biometric signals, institutions analyze how users interact with systems over time.
These signals may include device information, login patterns, geographic indicators and transaction context. By observing how a user typically behaves, systems can establish a baseline of activity and identify anomalies that suggest fraudulent behavior.
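The baseline-and-anomaly approach described above can be sketched in a few lines of code. This is an illustrative toy, not Paysafe's system: the signal names (device, country, hour, amount) and the simple one-point-per-deviation scoring are assumptions chosen to show the idea of comparing a new event against a user's established pattern.

```python
from dataclasses import dataclass

@dataclass
class Event:
    device_id: str   # device the user signed in from
    country: str     # geographic indicator
    hour: int        # local hour of day, 0-23 (login pattern)
    amount: float    # transaction context

def risk_score(history: list[Event], current: Event) -> int:
    """Score a new event against a user's behavioral baseline.

    Each signal that deviates from the user's established pattern
    adds one point; a higher score suggests possible account
    compromise and can trigger step-up verification.
    """
    known_devices = {e.device_id for e in history}
    known_countries = {e.country for e in history}
    typical_hours = {e.hour for e in history}
    avg_amount = sum(e.amount for e in history) / len(history)

    score = 0
    if current.device_id not in known_devices:
        score += 1
    if current.country not in known_countries:
        score += 1
    if current.hour not in typical_hours:
        score += 1
    if current.amount > 3 * avg_amount:  # unusually large transaction
        score += 1
    return score
```

A production system would weight and combine far more signals with machine learning rather than fixed rules, but the principle is the same: no single check decides, the accumulation of deviations does.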
“We’re moving beyond an era where I’m looking to stop ‘one thing’ in its own right,” Swales said. “I’m looking for behaviors. It’s how people interact with you that is becoming more relevant.”
The approach reflects a broader shift in how digital identity is understood. Rather than verifying identity once during onboarding and assuming that status remains fixed, institutions are treating identity as something that must be evaluated continuously.
Behavioral context helps systems determine whether activity aligns with a legitimate user’s history or deviates in ways that suggest account compromise.
At the same time, payments providers must preserve the convenience that digital commerce requires. Systems must detect risk without creating unnecessary barriers for legitimate transactions.
“The vast majority of transactions go through completely unencumbered,” Swales said, adding that the challenge lies in maintaining security while avoiding excessive friction.
Agentic AI Adds New Identity Questions

The emergence of agentic AI introduces another layer of complexity. As autonomous systems begin to carry out tasks on behalf of users, identity verification must account for situations in which a digital agent rather than a human initiates a transaction.
Such technologies may streamline operations across financial institutions and improve internal processes, Swales said. Yet the same capabilities could also be exploited by automated fraud operations designed to mimic legitimate user activity.
“If we thought that one credential, one use was being managed at a time, we would probably be bypassed,” he said, adding that effective defenses require analyzing multiple signals simultaneously.
In this environment, identity systems must evaluate not only who a user is but also how and why a transaction is being initiated.
Tokenization and Industry Cooperation

Looking ahead, Swales pointed to tokenization and stronger cryptographic protections as potential components of the next generation of identity security. By replacing sensitive credentials with secure tokens, payment systems can limit the information available for fraudsters to steal or reproduce.
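The core idea of tokenization can be shown in a minimal sketch. This is a simplified illustration, not any vendor's implementation: the sensitive value (here a card number) is stored once in a secured vault, and every other system handles only a random token that is useless to a thief because it cannot be reversed without the vault.

```python
import secrets

class TokenVault:
    """Minimal tokenization sketch: swap a sensitive credential
    for a random surrogate and keep the real value in one place."""

    def __init__(self) -> None:
        self._store: dict[str, str] = {}

    def tokenize(self, pan: str) -> str:
        # The token is random, so it cannot be mathematically
        # derived back to the card number it replaces.
        token = "tok_" + secrets.token_hex(8)
        self._store[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only the vault, behind strict access controls in a real
        # deployment, can map a token back to the credential.
        return self._store[token]
```

Real payment tokenization (for example, network tokens in card schemes) adds domain restrictions, cryptographic binding, and lifecycle management, but the security property is the one shown here: a breach of downstream systems exposes tokens, not credentials.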
Defending digital identity will require cooperation across the payments ecosystem, he said. Fraud networks operate across platforms and borders, meaning institutions must work with regulators and industry partners to strengthen shared defenses.
“Fraud and bad actors are not a competitive sport,” Swales said. “We should all be together.”