DTEX Warns: AI Agents on Telegram and WhatsApp Can Stealthily Access Files, Expose Credentials, and Exfiltrate Data from Endpoints
On April 23, 2026, DTEX Systems issued a security advisory detailing how malicious AI agents operating through consumer messaging platforms such as Telegram and WhatsApp can bypass traditional endpoint detection and response (EDR) tools to silently access local files, harvest credentials, and exfiltrate sensitive data via encrypted chat channels. The attack leverages legitimate API integrations, often enabled by users for productivity, to establish covert command-and-control (C2) tunnels that mimic benign messaging traffic. Unlike malware that triggers heuristic alerts, these AI-driven agents use natural language processing (NLP) to blend in, making detection exceptionally difficult without behavioral baselining. As enterprises increasingly permit AI-assisted workflows on unmanaged devices, this vector represents a growing blind spot in zero-trust architectures.
The Tech TL;DR:
- AI agents acting via Telegram/WhatsApp can exfiltrate data at up to 1.2 MB/s using steganographic techniques in media payloads and message metadata.
- Detection requires monitoring for anomalous API call patterns, not just file hashes or network signatures.
- Enterprises should enforce strict OAuth scope limits and deploy user entity behavior analytics (UEBA) on messaging integrations.
The core issue lies in the over-permissioning of AI agent integrations. When users grant an AI assistant access to “read and send messages” via Telegram’s Bot API or the WhatsApp Cloud API, they often inadvertently allow broad file access and external webhook capabilities. DTEX’s research shows that a compromised agent can use the telegram.sendDocument or whatsapp.media.upload endpoints to exfiltrate files encrypted within innocuous-looking media payloads, achieving effective throughput of 900 KB/s to 1.2 MB/s on 4G/5G connections, depending on compression and steganographic layering. This bypasses traditional data loss prevention (DLP) tools that inspect plaintext or known malicious signatures, because the exfiltrated data is embedded in the least significant bit (LSB) layers of images or audio files transmitted through the platforms’ standard media-upload channels.
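To put those throughput figures in context, here is a minimal Python sketch that estimates how much hidden data a single image can carry at one bit per color channel. It assumes the Pillow library is installed; the file path is a placeholder.

from PIL import Image

def lsb_capacity_bytes(path: str, bits_per_channel: int = 1) -> int:
    """Upper bound on hidden payload for an RGB image using the lowest
    `bits_per_channel` bits of each color channel."""
    img = Image.open(path).convert("RGB")
    width, height = img.size
    total_bits = width * height * 3 * bits_per_channel  # 3 channels per pixel
    return total_bits // 8

# A typical 4032x3024 phone photo hides about 4.6 MB at one LSB per
# channel, so a short burst of "vacation photos" approaches the
# advisory's quoted exfiltration rates.
print(lsb_capacity_bytes("sample_photo.jpg") / 1_000_000, "MB")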
“The real danger isn’t the AI model itself; it’s the implicit trust we place in consumer-grade APIs. When an agent can call files.get on a user’s OneDrive via Microsoft Graph because it was granted ‘full access’ for convenience, you’ve built a backdoor with a smiley face.”
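As an illustration of the scope-minimization fix implied by that quote, the following hedged sketch requests a Microsoft Graph token limited to read-only file access using the msal library. The client ID is a placeholder, and the exact scopes your tenant permits may differ.

import msal

app = msal.PublicClientApplication(
    client_id="00000000-0000-0000-0000-000000000000",  # placeholder app registration
    authority="https://login.microsoftonline.com/common",
)

# Least privilege: request read-only file access rather than
# Files.ReadWrite.All, and skip offline_access so a hijacked
# agent cannot refresh the token indefinitely.
result = app.acquire_token_interactive(scopes=["Files.Read"])

if "access_token" in result:
    print("Granted a read-only file token")
else:
    print("Auth failed:", result.get("error_description"))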
From an architectural standpoint, these attacks succeed due to a mismatch between user intent and permission granularity. Most consumer messaging platforms offer binary scopes: either no access or full access to files, contacts, and external services. There is no middle ground for “read-only document access within approved folders” or “time-bound API tokens.” This forces enterprises into an untenable choice: block all AI integrations (hurting productivity) or accept unquantifiable risk. The situation is exacerbated by the rise of local LLMs running on NPUs—such as Qualcomm’s Hexagon or Apple’s Neural Engine—which enable offline AI agents that never touch corporate servers, making network-based monitoring ineffective.
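No production messaging API exposes such granularity today, but a hypothetical folder-scoped, time-bound grant is easy to sketch; every name below is invented for illustration.

from datetime import datetime, timezone
from pathlib import Path

# A hypothetical grant object: read-only, folder-scoped, time-bound,
# with a daily transfer budget. Nothing like this exists in the
# Telegram Bot API or WhatsApp Cloud API today.
GRANT = {
    "allowed_roots": [Path("/home/user/Projects")],
    "expires_at": datetime(2026, 5, 1, tzinfo=timezone.utc),
    "max_bytes_per_day": 10_000_000,
}

def agent_may_read(requested: Path, bytes_read_today: int) -> bool:
    """Deny reads outside the approved subtree, after expiry, or past budget."""
    if datetime.now(timezone.utc) >= GRANT["expires_at"]:
        return False
    if bytes_read_today >= GRANT["max_bytes_per_day"]:
        return False
    resolved = requested.resolve()
    return any(resolved.is_relative_to(root) for root in GRANT["allowed_roots"])

print(agent_may_read(Path("/home/user/Projects/report.pdf"), 0))     # True until expiry
print(agent_may_read(Path("/home/user/Documents/payroll.xlsx"), 0))  # False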
To detect such threats, security teams must shift from signature-based to behavior-based monitoring. A practical starting point is auditing OAuth token usage via cloud access security brokers (CASBs). For example, a sudden spike in POST /bot{token}/sendDocument calls from a single user agent outside business hours, especially when correlated with anomalous files.download activity on SharePoint or Google Drive, should trigger automated investigation. Below is a sample Splunk query to detect potential exfiltration via the Telegram Bot API:

index=proxy sourcetype=telegram_api earliest=-1h uri_path="*/sendDocument"
| stats count by user, dest_ip, http_method, uri_path
| where count > 50
| lookup user_assets user OUTPUT asset_owner, department
| search department IN ("Finance", "Legal", "Engineering")
This query identifies users in high-risk departments making excessive document upload requests—a common precursor to data theft. For WhatsApp, analogous monitoring of the Cloud API’s /media/upload endpoint is essential, particularly when file types deviate from expected usage (e.g., uploading .xlsx or .pem files via a customer service bot).
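A hedged sketch of that file-type deviation check: flag media-upload events whose MIME type or extension falls outside what a customer service bot should plausibly send. The log format and field names here are assumptions; adapt them to whatever your proxy or CASB exports.

import json

# Typical traffic for a customer service bot; tune to your environment.
EXPECTED_MIME = {"image/jpeg", "image/png", "audio/ogg"}
SENSITIVE_EXTENSIONS = (".xlsx", ".pem", ".key", ".sql")

def flag_upload(event: dict) -> bool:
    """Return True if a media-upload event warrants investigation."""
    if "/media" not in event.get("uri_path", ""):
        return False
    mime = event.get("content_type", "")
    name = event.get("file_name", "").lower()
    return mime not in EXPECTED_MIME or name.endswith(SENSITIVE_EXTENSIONS)

# Hypothetical newline-delimited JSON export from a proxy or CASB.
with open("whatsapp_api_events.jsonl") as log:
    for line in log:
        event = json.loads(line)
        if flag_upload(event):
            print(f"ALERT user={event.get('user')} "
                  f"file={event.get('file_name')} mime={event.get('content_type')}")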
The funding and transparency behind the tools enabling this risk are also worth noting. Many popular AI agent platforms—such as those built on LangChain or LlamaIndex—are open-source projects maintained by distributed contributors, with core funding often coming from venture capital (e.g., LangChain’s Series B led by Sequoia Capital). However, the integrations with Telegram and WhatsApp are typically handled by third-party middleware or low-code automation tools like Zapier or Make.com, which may not enforce least-privilege principles by default. This creates a supply chain risk where the AI agent itself is benign, but its deployment pathway introduces excessive permissions.
For organizations seeking immediate mitigation, the path forward involves three technical controls: First, enforce conditional access policies that restrict AI agent integrations to managed devices only. Second, deploy API gateway rules that inspect and sanitize webhook payloads from messaging platforms for steganographic content—tools like AWS WAF with custom rule groups or Cloudflare’s API Shield can be tuned for this. Third, implement just-in-time (JIT) access for file systems via solutions like Microsoft Entra ID Governance or HashiCorp Vault, ensuring that even if an agent is compromised, it cannot retain persistent access to sensitive directories.
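To make the third control concrete, here is a minimal, self-contained sketch of the JIT idea: a signed grant that names one directory and expires in minutes. It illustrates the concept only and is not the Entra ID Governance or Vault API; in practice the signing secret would come from a KMS.

import base64, hashlib, hmac, json, time

SECRET = b"rotate-me-from-a-kms"  # placeholder; never hard-code in production

def issue_grant(agent_id: str, directory: str, ttl_seconds: int = 300) -> str:
    """Mint a short-lived, directory-scoped grant instead of a standing token."""
    claims = {"sub": agent_id, "dir": directory, "exp": time.time() + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_grant(token: str, directory: str) -> bool:
    """Reject tampered, expired, or wrong-directory grants."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims["dir"] == directory and time.time() < claims["exp"]

# Even a fully compromised agent holding this token loses access to
# /srv/finance after five minutes and never had access anywhere else.
token = issue_grant("agent-42", "/srv/finance")
print(verify_grant(token, "/srv/finance"))  # True (within TTL)
print(verify_grant(token, "/srv/legal"))    # False (wrong directory)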
As enterprise adoption of AI agents scales, the attack surface will only expand. The next frontier involves multimodal agents that can interpret screen content via optical character recognition (OCR) and exfiltrate data through seemingly innocuous chat messages—turning every pixel into a potential leak. Organizations must treat consumer messaging platforms not as communication tools, but as untrusted endpoints requiring the same rigor as laptops or servers.
“We’re seeing CISOs now classify WhatsApp and Telegram as ‘shadow IT by default’—not because employees are misusing them, but because the platform’s API design assumes trust where none should exist.”
In the meantime, IT teams should engage specialists who understand both API security and behavioral analytics. Cybersecurity audit and penetration-testing firms can conduct red team exercises focused on AI agent abuse scenarios, while managed service providers (MSPs) with expertise in UEBA and CASB deployment can help establish baselines for normal messaging API usage. For custom integration hardening, software development agencies experienced in OAuth 2.0 scope minimization and API gateway policy engineering are essential partners.
The Editorial Kicker: The real vulnerability isn’t in the LLMs or the APIs; it’s in the false dichotomy between security and usability. Until platforms adopt fine-grained, context-aware permissions (consider: “allow this agent to read PDFs from ~/Projects but not ~/Documents”), we’ll keep building AI agents that are as powerful as they are perilous. The next wave of innovation must come not from bigger models, but from smarter guardrails.
*Disclaimer: The technical analyses and security protocols detailed in this article are for informational purposes only. Always consult with certified IT and cybersecurity professionals before altering enterprise networks or handling sensitive data.*
