HomeCyber SecurityShadowLeak Zero-Click on Flaw Leaks Gmail Knowledge by way of OpenAI ChatGPT...

ShadowLeak Zero-Click on Flaw Leaks Gmail Knowledge by way of OpenAI ChatGPT Deep Analysis Agent


Sep 20, 2025Ravie LakshmananSynthetic Intelligence / Cloud Safety

ShadowLeak Zero-Click on Flaw Leaks Gmail Knowledge by way of OpenAI ChatGPT Deep Analysis Agent

Cybersecurity researchers have disclosed a zero-click flaw in OpenAI ChatGPT’s Deep Analysis agent that might enable an attacker to leak delicate Gmail inbox information with a single crafted electronic mail with none person motion.

The brand new class of assault has been codenamed ShadowLeak by Radware. Following accountable disclosure on June 18, 2025, the difficulty was addressed by OpenAI in early August.

“The assault makes use of an oblique immediate injection that may be hidden in electronic mail HTML (tiny fonts, white-on-white textual content, format tips) so the person by no means notices the instructions, however the agent nonetheless reads and obeys them,” safety researchers Zvika Babo, Gabi Nakibly, and Maor Uziel stated.

“Not like prior analysis that relied on client-side picture rendering to set off the leak, this assault leaks information instantly from OpenAI’s cloud infrastructure, making it invisible to native or enterprise defenses.”

DFIR Retainer Services

Launched by OpenAI in February 2025, Deep Analysis is an agentic functionality constructed into ChatGPT that conducts multi-step analysis on the web to provide detailed studies. Comparable evaluation options have been added to different common synthetic intelligence (AI) chatbots like Google Gemini and Perplexity over the previous 12 months.

Within the assault detailed by Radware, the risk actor sends a seemingly harmless-looking electronic mail to the sufferer, which accommodates invisible directions utilizing white-on-white textual content or CSS trickery that inform the agent to assemble their private data from different messages current within the inbox and exfiltrate it to an exterior server.

Thus, when the sufferer prompts ChatGPT Deep Analysis to research their Gmail emails, the agent proceeds to parse the oblique immediate injection within the malicious electronic mail and transmit the main points in Base64-encoded format to the attacker utilizing the instrument browser.open().

“We crafted a brand new immediate that explicitly instructed the agent to make use of the browser.open() instrument with the malicious URL,” Radware stated. “Our closing and profitable technique was to instruct the agent to encode the extracted PII into Base64 earlier than appending it to the URL. We framed this motion as a crucial safety measure to guard the information throughout transmission.”

The proof-of-concept (PoC) hinges on customers enabling the Gmail integration, however the assault could be prolonged to any connector that ChatGPT helps, together with Field, Dropbox, GitHub, Google Drive, HubSpot, Microsoft Outlook, Notion, or SharePoint, successfully broadening the assault floor.

Not like assaults like AgentFlayer and EchoLeak, which happen on the client-side, the exfiltration noticed within the case of ShadowLeak transpires instantly inside OpenAI’s cloud setting, whereas additionally bypassing conventional safety controls. This lack of visibility is the primary side that distinguishes it from different oblique immediate injection vulnerabilities much like it.

ChatGPT Coaxed Into Fixing CAPTCHAs

The disclosure comes as AI safety platform SPLX demonstrated that cleverly worded prompts, coupled with context poisoning, can be utilized to subvert ChatGPT agent’s built-in guardrails and resolve image-based CAPTCHAs designed to show a person is human.

CIS Build Kits

The assault basically includes opening an everyday ChatGPT-4o chat and convincing the big language mannequin (LLM) to provide you with a plan to unravel what’s described to it as a listing of pretend CAPTCHAs. Within the subsequent step, a brand new ChatGPT agent chat is opened and the sooner dialog with the LLM is pasted, stating this was “our earlier dialogue” – successfully inflicting the mannequin to unravel the CAPTCHAs with none resistance.

“The trick was to reframe the CAPTCHA as “faux” and to create a dialog the place the agent had already agreed to proceed. By inheriting that context, it did not see the same old purple flags,” safety researcher Dorian Schultz stated.

“The agent solved not solely easy CAPTCHAs but additionally image-based ones — even adjusting its cursor to imitate human conduct. Attackers may reframe actual controls as ‘faux’ to bypass them, underscoring the necessity for context integrity, reminiscence hygiene, and steady purple teaming.”

RELATED ARTICLES

LEAVE A REPLY

Please enter your comment!
Please enter your name here

- Advertisment -
Google search engine

Most Popular

Recent Comments