AI Turning Into the New Command‑and‑Control Layer

Check Point Research has released a new analysis focusing on AI assistants as covert command-and-control (C2) channels and on AI-Driven (AID) malware. The findings mark a turning point in modern cyber risk, with implications for every industry accelerating AI adoption.

Check Point Research demonstrated that AI assistants such as Microsoft Copilot and Grok, which support web-browsing or URL-fetch capabilities, can be abused as covert C2 proxies, allowing malware to exchange data with attacker infrastructure while blending seamlessly into normal enterprise AI traffic. Researchers also showed how malware is transitioning from static, hard-coded logic to AI-Driven implants capable of making real-time decisions: triaging victims, prioritizing files, selecting commands, evading sandboxes, and adapting tactics mid-operation.

Together, these findings reveal a future where AI is no longer assisting the attacker—it is part of the attacker’s infrastructure.

Key Highlights

• AI Assistants Can Be Misused as Stealthy C2 Relays
Attackers can prompt AI assistants to fetch attacker-controlled URLs and return embedded commands—without any API keys or user accounts—allowing malware to hide communications inside legitimate AI traffic.

• Anonymous AI Web Access Removes Traditional Kill Switches
Since no accounts or keys are needed, defenders cannot rely on conventional takedown mechanisms; traffic appears identical to everyday AI usage.

• Malware Is Becoming Adaptive and Prompt‑Driven, Using AI as a Remote Brain
Future AID malware can offload decision-making to AI models, adjusting its behaviour per infected host and receiving fresh guidance mid-intrusion, making attacks harder to predict, detect, and analyze.

• AI Will Accelerate Targeting, Data Theft, and Ransomware Operations
Instead of encrypting everything, AI‑driven ransomware may soon identify only high‑value assets and act with minimal observable activity—shrinking detection windows from minutes to seconds.

• AI Traffic Is Becoming a Blind Spot for Enterprises
As organizations integrate AI into everyday workflows, attackers increasingly rely on the same services, knowing this traffic is allowed, trusted, and seldom inspected. 

The Impact: What This Means for Organizations in an AI‑Driven Threat Landscape
As enterprises increase their reliance on AI tools, these same services are rapidly becoming part of the attack surface, blending into legitimate traffic and, in some cases, becoming part of the attack infrastructure itself. AI‑enabled communications are often trusted, widely allowed, and rarely inspected, giving attackers an opportunity to hide inside everyday AI traffic in ways that traditional detection cannot easily distinguish.

For organizations, this means that AI domains must now be treated as high‑value, high‑risk egress points: AI traffic should be inspected and contextualised with the same scrutiny as any other critical communication channel, rather than allowed through as 'safe' by default.
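To make the recommendation concrete, the sketch below shows one way an egress policy could route AI-assistant traffic to content inspection instead of allow-listing it. The domain list, flow-log fields, process names, and thresholds are all hypothetical illustrations, not a definitive implementation; real deployments would draw these from their own proxy or flow-log telemetry.

```python
# Illustrative sketch only: flag egress flows to AI-assistant domains for
# deeper inspection rather than trusting them by default. The domain set,
# record fields, and thresholds below are assumptions for illustration.

AI_ASSISTANT_DOMAINS = {
    "copilot.microsoft.com",
    "grok.com",
    "chat.openai.com",
}

def needs_inspection(record: dict) -> bool:
    """Return True if an egress flow record should be content-inspected.

    `record` is a hypothetical flow-log entry with keys:
      dest_domain - destination hostname
      process     - originating process name
      bytes_out   - bytes sent in the flow
    """
    if record["dest_domain"] not in AI_ASSISTANT_DOMAINS:
        return False
    # Heuristic 1: AI-service traffic originating from a non-interactive
    # process (not a browser or known assistant client) is suspicious.
    interactive = {"chrome.exe", "msedge.exe", "firefox.exe"}
    if record["process"].lower() not in interactive:
        return True
    # Heuristic 2: unusually large uploads to an AI endpoint may indicate
    # data staging or command relay rather than ordinary assistant use.
    return record["bytes_out"] > 512_000

# Example flows (hypothetical data)
flows = [
    {"dest_domain": "copilot.microsoft.com", "process": "svchost.exe", "bytes_out": 2_048},
    {"dest_domain": "copilot.microsoft.com", "process": "msedge.exe", "bytes_out": 1_500},
    {"dest_domain": "example.com", "process": "svchost.exe", "bytes_out": 9_000},
]
flagged = [f for f in flows if needs_inspection(f)]
print(len(flagged))  # only the non-browser flow to an AI domain is flagged
```

Simple rules like these will not catch a careful adversary on their own; the point is that AI-service destinations get routed into the same inspection pipeline as any other sensitive egress, where richer context (user, prompt content, payload entropy) can be applied.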

At the same time, the evolution toward AI‑Driven malware fundamentally changes how defenders must think about cyber threats. Because these implants can rely on AI models to triage hosts, select targets, adjust behaviour, and minimize observable activity, defensive controls built around signatures, volume‑based thresholds, or sandbox triggers become far less effective, especially as malware behaviour grows adaptive and context-aware.

In this new reality, AI security and enterprise security are inseparable, and organizations must ensure that accelerating AI adoption does not inadvertently create blind spots attackers can exploit. 

Eli Smadja, Head of Research, Check Point Research, said, “As AI becomes woven into everyday business workflows, it also becomes woven into attacker workflows. Threat actors no longer need sophisticated infrastructure—just access to widely trusted AI services. To stay safe, organizations must monitor AI traffic with the same scrutiny as any other high-risk channel, enforce tighter controls around AI-powered features, and adopt security measures that understand not only what AI is doing, but why, leveraging agentic AI capabilities to inspect and contextualize traffic to and from AI services and block malicious communication attempts before they can be abused as covert channels.”