ChatGPT data leak: DNS side channel flaw exposed user data | The Register

March 31, 2026 | Rachel Kim, Technology Editor

OpenAI Patched DNS Smuggling in ChatGPT, But the Sandbox Leaks Remain

OpenAI claims its code execution environment is airtight. Check Point Research proved otherwise. In February 2026, a single malicious prompt bypassed outbound restrictions, exfiltrating sensitive data via DNS queries before the vendor deployed a hotfix. This isn’t just a bug; it’s an architectural blind spot in sandboxed LLM runtimes.

  • The Tech TL;DR:
    • Vulnerability: DNS tunneling allowed data exfiltration from ChatGPT’s sandboxed code interpreter despite outbound HTTP blocks.
    • Status: Patched by OpenAI on February 20, 2026; legacy sessions may remain exposed.
    • Impact: High risk for HIPAA/GDPR compliance when processing PII through AI analysis tools.

Sandboxing is the first line of defense for any cloud-native application handling user code. OpenAI’s documentation explicitly states the ChatGPT code execution environment cannot generate outbound network requests directly. That assertion held water until researchers demonstrated that the Domain Name System (DNS) resolver remained an open conduit. While HTTP/HTTPS egress was blocked, the container still needed to resolve domain names for internal logging and telemetry. Attackers leveraged this necessity to encode data into subdomain queries, effectively tunneling information out of the walled garden.

The mechanics are straightforward but devastating. By forcing the LLM to generate code that triggers a DNS lookup for an attacker-controlled domain, sensitive strings uploaded by the user—like laboratory results or financial logs—are converted into hex-encoded subdomains. The authoritative nameserver for that domain captures the query, reconstructing the leaked data without ever establishing a TCP connection that traditional firewalls would flag. This technique bypasses standard egress filtering since DNS traffic on port 53 is rarely blocked entirely within compute clusters.
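The encoding step can be illustrated with a minimal Python sketch. The zone name and chunk size here are placeholders for illustration, not details from the actual proof of concept:

```python
# Minimal sketch of the encoding technique described above. The zone is a
# placeholder; a real attack points at a domain whose authoritative
# nameserver the attacker controls.
def to_exfil_qname(secret: str, zone: str = "exfil.example.com") -> str:
    hexed = secret.encode().hex()
    # DNS labels max out at 63 bytes, so chunk the hex payload.
    labels = [hexed[i:i + 60] for i in range(0, len(hexed), 60)]
    return ".".join(labels + [zone])

# A sensitive string becomes a resolvable query name; the attacker's
# nameserver logs the query and decodes the hex labels at leisure.
print(to_exfil_qname("patient_id=12345"))
```

Any code in the sandbox that merely *resolves* this name, without opening a single TCP connection, has already leaked the payload.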

Enterprise adoption of generative AI is scaling rapidly, often outpacing security governance. When a corporate AI service leaks data this way, it triggers immediate compliance failures. A HIPAA breach via DNS smuggling isn’t hypothetical; it’s a vector that cybersecurity auditors are now prioritizing in Q2 2026 assessments. Organizations relying on third-party AI APIs must assume the sandbox is permeable until proven otherwise through rigorous penetration testing.

Architectural Blind Spots in LLM Runtimes

The vulnerability highlights a broader issue in container security: the assumption that blocking high-level protocols (HTTP) secures the network layer. DNS is fundamental to infrastructure operation, making it difficult to disable without breaking core functionality. According to the OWASP Top 10 for Large Language Model Applications, improper output handling and indirect prompt injection remain critical risks. OpenAI’s fix involved tightening the resolver logic within the container to prevent arbitrary query formation, but the incident underscores the need for deeper network segmentation.

Security architects need to verify that no side channels exist between the model’s reasoning engine and the underlying OS network stack. This requires more than vendor assurances. It demands independent validation from risk assessment providers who specialize in AI supply chain security. The AI Cyber Authority network notes that federal regulators are expanding scrutiny on these exact intersection points between AI utility and data sovereignty.

“The assumption that a sandboxed environment is isolated by default is dangerous. DNS resolution is often overlooked in egress filtering policies, creating a covert channel that bypasses application-layer firewalls.” — Senior Security Architect, AI Cyber Authority Network

Developers integrating ChatGPT APIs should implement client-side sanitization before data ever reaches the model. Treat the AI endpoint as untrusted. If you must upload sensitive files, strip metadata and encrypt payloads where possible, though this limits the model’s utility. For those managing internal AI deployments, reviewing NIST AI Risk Management Framework guidelines is mandatory to align with emerging federal standards.
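A minimal sketch of that client-side sanitization step might look like the following. The regex patterns are illustrative assumptions, not a complete PII taxonomy; production redaction needs a vetted library and review:

```python
import re

# Hypothetical redaction pass run BEFORE any data reaches the AI endpoint.
# These two patterns are examples only; real PII detection covers far more.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),       # US SSN-like IDs
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
]

def sanitize(text: str) -> str:
    """Replace sensitive substrings with placeholders before upload."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(sanitize("Contact jane.doe@example.com, SSN 123-45-6789"))
```

The trade-off noted above applies: the more aggressively you redact, the less context the model has to work with.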

Mitigation and Detection Strategies

Detecting DNS exfiltration requires monitoring query entropy and length. Normal DNS requests are short and predictable. Exfiltration queries are long, random-looking strings. Security operations centers (SOCs) should deploy DNS firewall rules that flag unusually long subdomain labels or high-frequency queries to unknown domains. Below is sample tcpdump filter logic for identifying suspicious DNS traffic patterns indicative of tunneling:

# Monitor DNS queries for high-entropy subdomains
tcpdump -i eth0 -n 'udp dst port 53' | grep -E '([a-zA-Z0-9]{30,})'

# Example curl command to test DNS resolution latency (benchmarking)
curl -w "@format.txt" -o /dev/null -s "https://api.openai.com/v1/chat/completions"
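The same length-and-entropy heuristic can be sketched in Python for SOC tooling. The thresholds below are illustrative assumptions and should be tuned against baseline traffic; hex-encoded payloads approach 4 bits of entropy per character, while ordinary hostname labels sit well lower:

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Bits per character of a DNS label."""
    counts = Counter(label)
    n = len(label)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_like_tunnel(qname: str, max_label_len: int = 30,
                      entropy_threshold: float = 3.0) -> bool:
    # Flag queries whose leftmost label is both long and high-entropy.
    label = qname.split(".")[0]
    return len(label) > max_label_len and shannon_entropy(label) > entropy_threshold

print(looks_like_tunnel("www.openai.com"))  # short, low-entropy: benign
print(looks_like_tunnel("70617469656e745f69643d3132333435.exfil.example.com"))
```

Length alone catches most hex tunnels; the entropy check reduces false positives on long but natural-language labels.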

While OpenAI patched this specific vector on February 20, 2026, the underlying architectural tension remains. As AI models gain more tool-use capabilities, the attack surface expands. The code execution environment is no longer just a calculator; it’s an agent with potential network reach. This shifts the burden to enterprise IT to enforce zero-trust networking around AI services. Companies should engage AI security specialists to audit their integration points, ensuring that no hidden outbound channels exist in custom GPT configurations.

Compliance and Liability in the Post-Patch Era

Patching the vulnerability does not erase the liability of prior exposure. If data leaked before February 2026, organizations may face regulatory action. The Check Point Research blog details three proof-of-concept attacks, one involving a health analyst GPT. This scenario directly implicates HIPAA regulations. Legal teams must review logs for any unexplained DNS resolutions during the vulnerability window.
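That log review can be sketched as a simple window filter. The log schema and the February 1 window start are assumptions for illustration; only the February 20, 2026 patch date comes from the report:

```python
from datetime import datetime, timezone

# Hypothetical log-review sketch. Assumes DNS query logs as dicts with
# ISO-8601 "timestamp" and "qname" fields; adapt to your resolver's format.
WINDOW_START = datetime(2026, 2, 1, tzinfo=timezone.utc)   # assumed start
WINDOW_END = datetime(2026, 2, 20, tzinfo=timezone.utc)    # reported patch date

def queries_in_window(rows):
    """Yield query names resolved during the exposure window."""
    for row in rows:
        ts = datetime.fromisoformat(row["timestamp"])
        if WINDOW_START <= ts <= WINDOW_END:
            yield row["qname"]

sample = [
    {"timestamp": "2026-02-10T12:00:00+00:00", "qname": "a1b2c3d4.example.net"},
    {"timestamp": "2026-03-01T12:00:00+00:00", "qname": "api.openai.com"},
]
print(list(queries_in_window(sample)))
```

In-window hits to unfamiliar domains would then be triaged with the length-and-entropy heuristics discussed under mitigation.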

Attack Vector      | Protocol   | OpenAI Status      | Enterprise Mitigation
HTTP/HTTPS Egress  | TCP 80/443 | Blocked            | Standard firewall rules
DNS Tunneling      | UDP 53     | Patched (Feb 2026) | DNS firewalling / query monitoring
ICMP Exfiltration  | Protocol 1 | Unknown            | Block ICMP at container level

The table above outlines the common egress vectors in containerized AI environments. While HTTP is commonly blocked, DNS and ICMP often slip through policy gaps. Security teams must validate that their container runtime security policies extend to the network layer, not just the application layer. Reference the RFC 1035 DNS Specification when configuring deep packet inspection tools to ensure legitimate traffic isn’t inadvertently dropped while malicious tunnels are caught.

Reliance on vendor security claims is insufficient for regulated industries. The ChatGPT DNS flaw proves that even “walled gardens” have cracks. CTOs must treat AI integrations as high-risk components requiring continuous monitoring. The directory lists vetted firms capable of performing these specific AI security audits. Don’t wait for the next zero-day to validate your security posture.


The trajectory of AI security is moving from prompt injection defense to infrastructure hardening. As models become more autonomous, the network perimeter dissolves further. Organizations that fail to audit their AI supply chain now will face compounded liabilities later. Secure your endpoints, validate your vendors, and assume breach.

Disclaimer: The technical analyses and security protocols detailed in this article are for informational purposes only. Always consult with certified IT and cybersecurity professionals before altering enterprise networks or handling sensitive data.
