Imagine this scenario: Suspicious activity begins moving across an enterprise’s network. It is not triggered by a phishing email or a careless user mistake. An automated system is attempting to breach defenses, probing for weaknesses and assessing how much access it can gain before detection.
Seconds later, another system steps in. A defensive AI spots the abnormal pattern, tightens controls and pauses a set of transactions before any money moves or data leaves the company. By the time a human analyst reviews the dashboard, the episode is already over.
This is the new operational reality facing enterprise security teams. The most consequential decisions inside corporate networks are increasingly made not by analysts in a security operations center, but by competing artificial intelligence systems acting autonomously.
Offensive AI agents probe APIs, manipulate retrieval layers and adapt continuously to countermeasures. Defensive agents triage alerts, isolate workflows and remediate vulnerabilities without waiting for human approval. What once required coordinated attackers and days of reconnaissance now unfolds in automated cycles, often before anyone realizes a conflict has begun.
Offensive AI Rewrites the Threat ModelThe World Economic Forum reported that 87% of organizations believe AI-related vulnerabilities are increasing risk across their environments. The threat landscape has shifted from AI as a tool to AI as an operation embedded throughout the attack lifecycle.
Gartner projects that 17% of cyberattacks will employ generative AI by 2027, signaling that AI-driven techniques are moving from experimentation to mainstream threat capability.
The result is compounding scale and variability. Artificial intelligence systems can generate unique attack instances while pursuing the same objective, weakening signature-based detection models that rely on pattern repetition. When each payload or prompt sequence is slightly different, static defenses struggle to keep pace.
The attack surface is also expanding beyond traditional endpoints. Microsoft researchers have highlighted how AI integrations themselves can become entry points, particularly through indirect prompt injection. In these scenarios, malicious instructions are embedded in content that enterprise AI systems later ingest, redirecting agent behavior without breaching hardened infrastructure.
Defense Without Human InterventionIn response, enterprises and investors are shifting toward autonomous remediation. Bain Capital Ventures and Greylock led a $42 million Series A in Cogent Security, betting that AI agents can compress the gap between vulnerability detection and resolution. The scale of the backlog illustrates the urgency. More than 48,000 new common vulnerabilities and exposures were reported in 2025, per TechTarget, a 162% increase from five years earlier, with attackers often probing new disclosures within minutes.
Cogent’s model reflects a broader architectural change. Rather than replacing existing tools, it aggregates signals from scanners, asset inventories and cloud security platforms, then uses AI to prioritize and trigger remediation workflows automatically through ticketing and patching systems.
“Security teams are drowning in coordination work, chasing down system owners, writing tickets, proving fixes happened,” Cogent CEO Vineet Edupuganti told Fortune. The company says customers are resolving their most serious vulnerabilities 97% faster using autonomous workflows.
In optimal scenarios, defensive agents remove the need for human intervention on a specific class of vulnerability. In others, they compress triage and coordination, so engineers focus on higher-order judgment. The common thread is speed. Human-speed remediation is no longer sufficient when AI-driven attackers operate in continuous loops.
Data quality remains a constraint. Behavioral detection and anomaly classification depend on high-fidelity telemetry and clean baselines. Defensive systems trained on incomplete or noisy data risk generating excessive false positives or missing novel attack paths entirely.
At the same time, attackers are increasingly deploying fraudulent AI assistants designed to impersonate legitimate tools and harvest sensitive user information. As PYMNTS reported, these malicious assistants can quietly collect credentials and financial data by exploiting user trust in AI interfaces, reinforcing the need for enterprises to secure not just their networks, but the AI agents themselves.
For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.
The post AI vs AI: Defense Without Humans in the Loop appeared first on PYMNTS.com.