'A hard truth for the AI era: dont assume AI tools are secure by default': OpenAI patches flaw allowing silent data leakage from ChatGPT conversations without users ever knowing
Date:
Tue, 31 Mar 2026 17:25:00 +0000
Description:
What happens when researchers think outside the box? Data gets exfiltrated through DNS.
FULL STORY ======================================================================Copy link Facebook X Whatsapp Reddit Pinterest Flipboard Threads Email Share this article 0 Join the conversation Follow us Add us as a preferred source on Google Newsletter Tech Radar Pro Are you a pro? Subscribe to our newsletter Sign up to the TechRadar Pro newsletter to get all the top news, opinion, features and guidance your business needs to succeed! Become a Member in Seconds Unlock instant access to exclusive member features. Contact me with news and offers from other Future brands Receive email from us on behalf of our trusted partners or sponsors By submitting your information you agree to the Terms & Conditions and Privacy Policy and are aged 16 or over. You are
now subscribed Your newsletter sign-up was successful Join the club Get full access to premium articles, exclusive features and a growing list of member rewards. Explore An account already exists for this email address, please log in. Subscribe to our newsletter Check Point Research found ChatGPT flaw enabling silent data exfiltration via DNS abuse and prompt injection Vulnerability allowed attackers to bypass guardrails and steal sensitive user data through covert domain queries OpenAI patched issue on Feb 20, 2026, marking second major fix that week after Codex command injection flaw OpenAI has addressed a vulnerability in ChatGPT which allowed threat actors to silently exfiltrate sensitive data from their targets.
The vulnerability was discovered by security experts from Check Point
Research (CPR), who warned the bug combined old-fashioned prompt injections with a bypass of built-in guardrails, noting, AI tools should not be assumed secure by default. Nowadays, most people are quick to share highly sensitive data with ChatGPT - medical conditions, contracts, payment slips, screenshots of conversations with partners, spouses, and more. They assume the
information is secure because it cannot be pulled from the tool without their knowledge or consent. Article continues below You may like This 'ZombieAgent' zero click vulnerability allows for silent account takeover - here's what we know Security experts discover critical flaw in OpenAI's Codex able to compromise entire organizations Three high-risk AI vulnerabilities discovered in Claude.ai end-to-end attack chain exfiltrates sensitive info without user knowing DNS traffic is not risky behavior In theory, that is correct. The
data can be exfiltrated either through HTTP or external APIs, and both of these can be spotted, or at least tracked. However, CPR was thinking outside the box and found an entirely new way to pull the info - through DNS .
While direct internet access was blocked as intended, DNS resolution remained available as part of normal system operation, they explained. DNS is
typically treated as harmless infrastructureused to resolve domain names, not to transmit data. However, DNS can be abused as a covert transport mechanism by encoding information into domain queries.
Since DNS activity is not labeled as outbound data sharing, ChatGPT does not prompt any approval dialogs, does not display any warnings, and does not recognize the behavior as inherently risky.
This created a blind spot. The platform assumed the environment was isolated. The model assumed it was operating entirely within ChatGPT. And users assumed their data could not leave without consent, CPR said. All three assumptions were reasonableand all three were incomplete. This is a critical takeaway for security teams: AI guardrails often focus on policy and intent, while attackers exploit infrastructure and behavior." Are you a pro? Subscribe to our newsletter Sign up to the TechRadar Pro newsletter to get all the top news, opinion, features and guidance your business needs to succeed! Contact me with news and offers from other Future brands Receive email from us on behalf of our trusted partners or sponsors By submitting your information you agree to the Terms & Conditions and Privacy Policy and are aged 16 or over.
To kickstart the attack, ChatGPT still needs to be prompted, so the initial trigger still needs to be pulled. That can be done in a myriad of ways, though, by injecting a malicious prompt in an email, a PDF document , or through a website.
Still, there are other methods of abusing this flaw even without GPT accidentally acting on a smuggled prompt, and that it - via custom GPTs.
For example, a hacking group can build a custom GPT to act as a personal doctor. Victims using it would upload lab results with personal information and ask for advice and would get confirmation that their data is not being shared. What to read next 'AI assistants are no longer just productivity tools; they are becoming part of the infrastructure that malware can abuse': Experts warn Copilot and Grok can be hijacked to spread malware 'A human-chosen password doesn't stand a chance': OpenClaw has yet another major security flaw here's what we know about "ClawJacked" This WebUI
vulnerability allows remote code execution - here's how to stay safe
But in reality, a server under the attackers control would be getting all of the uploaded files. To make matters worse, GPT doesnt even need to upload entire documents - it can only exfiltrate the essentials, making the process leaner, faster, and more streamlined.
Luckily for everyone, CPR discovered this vulnerability before it was exploited in the wild. It responsibly disclosed it to OpenAI, which deployed
a full fix on February 20, 2026. Patching ChatGPT and Codex Patching ChatGPT bugs (Image credit: Shutterstock/SomYuZu) This is the second major vulnerability that OpenAI had to address - this week. Earlier today,
TechRadar Pro reported about OpenAIs ChatGPT Codex carrying a critical
command injection vulnerability that allowed threat actors to steal sensitive GitHub authentication tokens.
OpenAI thus also fixed a flaw that stems from the way Codex processes branch names during task creation. The tool allowed a malicious actor to manipulate the branch parameter and inject arbitrary shell commands while setting up the environment. These commands could run any code within the container,
including malicious ones. Researchers Phantom Labs said they were able to
pull GitHub OAuth tokens this way, gaining access to a theoretical
third-party project, and using the tokens to move laterally within GitHub.
The best antivirus for all budgets Our top picks, based on real-world testing and comparisons
Read our full guide to the best antivirus 1. Best overall: Bitdefender Total Security 2. Best for families: Norton 360 with LifeLock 3. Best for mobile: McAfee Mobile Security Follow TechRadar on Google News and add us as a preferred source to get our expert news, reviews, and opinion in your feeds. Make sure to click the Follow button!
And of course you can also follow TechRadar on TikTok for news, reviews, unboxings in video form, and get regular updates from us on WhatsApp too.
======================================================================
Link to news story:
https://www.techradar.com/pro/security/a-hard-truth-for-the-ai-era-dont-assume -ai-tools-are-secure-by-default-openai-patches-flaw-allowing-silent-data-leaka ge-from-chatgpt-conversations-without-users-ever-knowing
--- Mystic BBS v1.12 A49 (Linux/64)
* Origin: tqwNet Technology News (1337:1/100)