Predictive AI Bridges the Security Response Gap in Automated Attacks

DATE POSTED: January 15, 2026

According to the World Economic Forum’s Cyber Risk in 2026 outlook, artificial intelligence (AI) is expected to be the most consequential factor shaping cybersecurity strategies this year, cited by 94% of surveyed executives as a force multiplier for both defense and offense.

The report, released Monday (Jan. 12), highlights how generative AI technologies are expanding the attack surface, contributing to unintended data exposure and more complex exploitation tactics that outpace the capacity of purely human-led teams.

AI for Cybercrime Prevention

Cyberdefense has long focused on remediation after losses occur. AI is pushing intervention earlier in the attack cycle by identifying coordinated behavior and emerging risk signals before fraud scales.

As PYMNTS reported, companies are ramping up their use of AI to guard against suspicious activity, even as they face rising risk from shadow AI: third-party agents and apps that can expose them to new cyber threats.

Security firms and financial institutions now use machine learning to correlate activity across multiple systems rather than relying on isolated alerts. Group-IB’s Cyber Fraud Intelligence Platform is one example of this approach. The system analyzes behavioral patterns across participating organizations to identify signs of account takeover, authorized push payment scams, and money-mule activity while schemes are still developing. Instead of waiting for confirmed losses, institutions can flag suspicious behavior based on early indicators such as repeated credential reuse or low-value test transactions.
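
As a rough illustration of that shift, the sketch below scores accounts on two of the early indicators named above. It is a simplified, assumption-laden example, not Group-IB's actual platform; the event schema, thresholds, and function names are invented for this sketch.

```python
# Hypothetical early-indicator scoring, not Group-IB's actual system.
# The Event schema and thresholds below are illustrative assumptions.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Event:
    account_id: str
    credential_hash: str   # hash of the credential presented at login
    amount: float          # transaction amount; 0 for login-only events

def early_risk_signals(events: list[Event],
                       reuse_threshold: int = 3,
                       test_txn_ceiling: float = 1.00) -> dict[str, list[str]]:
    """Flag accounts showing early indicators before losses occur:
    the same credential reused across many accounts, or bursts of
    low-value 'test' transactions attackers use to probe stolen cards."""
    cred_accounts: dict[str, set[str]] = {}
    low_value_counts: Counter[str] = Counter()

    for e in events:
        cred_accounts.setdefault(e.credential_hash, set()).add(e.account_id)
        if 0 < e.amount <= test_txn_ceiling:
            low_value_counts[e.account_id] += 1

    flags: dict[str, list[str]] = {}
    for cred, accounts in cred_accounts.items():
        if len(accounts) >= reuse_threshold:
            for acct in accounts:
                flags.setdefault(acct, []).append(
                    f"credential {cred[:8]} reused across {len(accounts)} accounts")
    for acct, n in low_value_counts.items():
        if n >= reuse_threshold:
            flags.setdefault(acct, []).append(f"{n} low-value test transactions")
    return flags
```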

Fraud prevention increasingly relies on shared intelligence and behavioral analysis rather than static rules. By correlating signals across platforms, institutions can detect coordinated activity that would not appear risky inside a single organization.

AI is also expanding into visual risk detection. Truepic’s shared intelligence platform applies machine learning to analyze images and videos submitted as identity or compliance evidence across multiple organizations. By identifying reused or manipulated visual patterns, the system can flag AI-generated or altered media that might otherwise pass manual review.
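
Truepic's system is proprietary, but one widely used building block for spotting reused imagery is perceptual hashing, which stays stable under resizing and recompression. The sketch below assumes the open-source Pillow and imagehash packages; the function and parameter names are illustrative.

```python
# Illustrative only; Truepic's actual platform is proprietary.
# Requires: pip install pillow imagehash
from PIL import Image
import imagehash

def is_likely_reused(candidate_path: str,
                     known_hashes: list[imagehash.ImageHash],
                     max_distance: int = 5) -> bool:
    """Compare a submitted image against hashes of previously seen evidence.
    A small Hamming distance means the image is near-identical even after
    resizing or recompression, a hint it may be recycled across claims."""
    candidate = imagehash.phash(Image.open(candidate_path))
    return any(candidate - seen <= max_distance for seen in known_hashes)
```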

AI is also being applied at the identity and session level, where behavioral analytics focus on how a user interacts with a system rather than what credentials they present. Tools like keystroke dynamics analysis, device fingerprinting, session velocity tracking, and behavioral biometrics measure signals such as typing cadence, mouse movement, touchscreen pressure, IP stability, device configuration, and navigation patterns across a session. These signals help security systems distinguish legitimate users from attackers who may already have valid credentials, a scenario that has become more common as AI-generated phishing and credential harvesting improve.
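
To show what one such signal looks like in practice, the sketch below extracts a typing-cadence feature and compares it against an account's historical profile. Real behavioral-biometrics products fuse many signals with learned models; the feature, thresholds, and names here are assumptions for illustration.

```python
# Hypothetical single-signal check (typing cadence); real products
# combine many session-level features. All names are illustrative.
import statistics

def cadence_features(key_times_ms: list[float]) -> tuple[float, float]:
    """Mean and spread of inter-keystroke intervals (needs >= 3 keystrokes)."""
    gaps = [b - a for a, b in zip(key_times_ms, key_times_ms[1:])]
    return statistics.mean(gaps), statistics.stdev(gaps)

def session_is_anomalous(session_key_times_ms: list[float],
                         profile_mean_ms: float,
                         profile_std_ms: float,
                         z_cutoff: float = 3.0) -> bool:
    """Flag the session if typing cadence deviates sharply from the account's
    historical profile, even when the presented credentials are valid."""
    session_mean, _ = cadence_features(session_key_times_ms)
    z = abs(session_mean - profile_mean_ms) / max(profile_std_ms, 1e-6)
    return z > z_cutoff
```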

Predictive AI models extend this approach by detecting fraud patterns that emerge before transactions or approvals occur. In documented cases cited by Group-IB, financial institutions used predictive AI to identify more than 1,100 attempted loan applications involving AI-generated or manipulated biometric images, where attackers sought to bypass identity verification with deepfake photos.

The systems flagged the activity not through document inspection alone, but by identifying inconsistencies across device reuse, session behavior, application timing, and interaction patterns that diverged from legitimate customer behavior. This allowed institutions to stop the applications before approval rather than discovering fraud after disbursement.
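
A toy version of that cross-signal scoring might look like the following. The features, weights, and cutoff are invented for this sketch and are not drawn from Group-IB's models.

```python
# Simplified pre-approval risk scoring across the signal types described
# above; every feature, weight, and cutoff here is an assumption.
def application_risk(device_seen_on_n_applications: int,
                     session_seconds: float,
                     submitted_at_hour: int,
                     biometric_liveness_score: float) -> float:
    """Combine cross-signal indicators into one score. Any single signal
    may look benign; together they can reveal a scripted application."""
    score = 0.0
    if device_seen_on_n_applications > 2:   # same device across many applications
        score += 0.35
    if session_seconds < 60:                # form completed implausibly fast
        score += 0.25
    if submitted_at_hour in range(1, 5):    # burst timing typical of automation
        score += 0.15
    if biometric_liveness_score < 0.5:      # possible AI-generated photo
        score += 0.25
    return score  # e.g., hold for manual review when score >= 0.5
```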

Using AI to Disrupt Crime

AI-driven defense is no longer confined to private fraud platforms. Governments are integrating AI directly into cybercrime and economic crime enforcement.

The UAE Ministry of Interior has deployed AI and advanced analytics within its Cybercrime Combating Department to support investigations into digital and financial crimes. Officials say AI systems help analyze large volumes of digital evidence, identify links between cases, and trace the origins of cyber incidents more quickly than manual methods.

At the enterprise level, large technology providers are embedding AI into financial crime and security workflows. Oracle, for example, uses AI-based investigation tools to assist analysts by gathering evidence, connecting related cases, and highlighting higher-risk activity.

Smaller companies are also adopting AI defensively. In the U.S. Midwest, cybersecurity firms deploy AI tools that monitor network traffic, email and user behavior to detect phishing attempts and unauthorized access in real time. These systems focus on early anomaly detection to prevent incidents from escalating.
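
A minimal version of that kind of real-time detection is a rolling baseline with a deviation threshold, sketched below on a single metric such as failed logins per minute. This is illustrative only; production tools use far richer models.

```python
# Streaming anomaly sketch: flag a metric when it jumps far above its
# rolling baseline. Window size and threshold are illustrative assumptions.
from collections import deque

class RollingAnomalyDetector:
    def __init__(self, window: int = 60, z_threshold: float = 4.0):
        self.history: deque[float] = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Return True if `value` is anomalous against the rolling baseline."""
        if len(self.history) >= 10:
            mean = sum(self.history) / len(self.history)
            var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
            std = max(var ** 0.5, 1e-6)
            anomalous = (value - mean) / std > self.z_threshold
        else:
            anomalous = False  # not enough baseline data yet
        self.history.append(value)
        return anomalous
```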

The growing reliance on AI reflects a simple constraint: Human analysts cannot keep pace with attack volumes generated by automated tools. National security agencies, including the U.K.’s National Cyber Security Centre, warn that AI will continue to increase the speed and effectiveness of cyber threats through at least 2027, particularly in social engineering and fraud.

Enterprise adoption data already reflects this reality. As PYMNTS has reported, 55% of surveyed chief operating officers say they are relying on generative AI-driven solutions to improve cybersecurity management.
