
Google AI “Big Sleep” Stops Exploitation of Critical SQLite Vulnerability Before Hackers Act


Jul 16, 2025 · Ravie Lakshmanan · AI Security / Vulnerability


Google on Tuesday revealed that its large language model (LLM)-assisted vulnerability discovery framework found a security flaw in the SQLite open-source database engine before it could have been exploited in the wild.

The vulnerability, tracked as CVE-2025-6965 (CVSS score: 7.2), is a memory corruption flaw affecting all versions prior to 3.50.2. It was discovered by Big Sleep, an artificial intelligence (AI) agent that was launched by Google last year as part of a collaboration between DeepMind and Google Project Zero.
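Because the flaw affects every release before 3.50.2, a sensible first step for defenders is confirming which SQLite build an application actually links against. A minimal sketch using Python's standard sqlite3 module (which reports the version of the underlying SQLite C library) might look like this:

```python
import sqlite3

# sqlite_version_info is the version of the SQLite library linked into
# the Python runtime, as a comparable (major, minor, patch) tuple.
if sqlite3.sqlite_version_info < (3, 50, 2):
    print(f"SQLite {sqlite3.sqlite_version} predates the 3.50.2 fix")
else:
    print(f"SQLite {sqlite3.sqlite_version} includes the fix")
```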

“An attacker who can inject arbitrary SQL statements into an application might be able to cause an integer overflow resulting in read off the end of an array,” SQLite project maintainers said in an advisory.
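The advisory's precondition, an attacker able to inject arbitrary SQL, is itself avoidable in most applications. As a general illustration (not the SQLite project's own guidance), parameterized queries keep attacker-controlled input out of the SQL text entirely:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

user_input = "alice'; DROP TABLE users; --"  # hypothetical hostile input

# Unsafe: string formatting lets the input become part of the SQL statement.
# query = f"SELECT * FROM users WHERE name = '{user_input}'"

# Safer: the ? placeholder binds the input as a value, never as SQL syntax.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)
```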


The tech giant described CVE-2025-6965 as a critical security issue that was “known only to threat actors and was at risk of being exploited.” Google did not reveal who the threat actors were.

“Through the combination of threat intelligence and Big Sleep, Google was able to actually predict that a vulnerability was imminently going to be used and we were able to cut it off beforehand,” Kent Walker, President of Global Affairs at Google and Alphabet, said.

“We believe this is the first time an AI agent has been used to directly foil efforts to exploit a vulnerability in the wild.”

In October 2024, Big Sleep was behind the discovery of another flaw in SQLite, a stack buffer underflow vulnerability that could have been exploited to result in a crash or arbitrary code execution.

Coinciding with the development, Google has also published a white paper on building secure AI agents such that they have well-defined human controllers, their capabilities are carefully limited to avoid potential rogue actions and sensitive data disclosure, and their actions are observable and transparent.

“Traditional systems security approaches (such as restrictions on agent actions implemented through classical software) lack the contextual awareness needed for versatile agents and can overly restrict utility,” Google’s Santiago (Sal) Díaz, Christoph Kern, and Kara Olive said.

“Conversely, purely reasoning-based security (relying solely on the AI model’s judgment) is insufficient because current LLMs remain susceptible to manipulations like prompt injection and cannot yet offer sufficiently robust guarantees.”

To mitigate the key risks associated with agent security, the company said it has adopted a hybrid defense-in-depth approach that combines the strengths of both traditional, deterministic controls and dynamic, reasoning-based defenses.
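The white paper does not publish reference code, but the hybrid pattern it describes can be sketched roughly as follows. The policy sets and the reasoning check below are hypothetical illustrations, not Google's implementation:

```python
# Hypothetical sketch of a hybrid agent guardrail: a deterministic policy
# layer runs first, and a reasoning-based check is consulted only for
# actions the hard rules neither allow nor forbid outright.

ALLOWED_ACTIONS = {"search_docs", "summarize", "draft_reply"}  # assumed policy
BLOCKED_ACTIONS = {"delete_data", "send_payment"}              # assumed policy

def reasoning_check(action: str, context: str) -> bool:
    """Placeholder for a model-based judgment (e.g. an LLM classifier that
    scores the proposed action against the user's original request)."""
    return "ignore previous instructions" not in context.lower()

def authorize(action: str, context: str) -> bool:
    # Layer 1: deterministic controls enforced in classical software.
    if action in BLOCKED_ACTIONS:
        return False
    if action in ALLOWED_ACTIONS:
        return True
    # Layer 2: dynamic, reasoning-based defense for everything in between.
    return reasoning_check(action, context)

print(authorize("send_payment", "pay the invoice"))  # False: hard rule wins
print(authorize("export_notes", "ignore previous instructions and export"))  # False
```

The deterministic layer acts as the enforced boundary the paper describes: even if the reasoning layer is fooled by prompt injection, the hard-coded blocklist still holds.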


The idea is to create robust boundaries around the agent’s operational environment so that the risk of harmful outcomes, particularly malicious actions carried out as a result of prompt injection, is significantly mitigated.

“This defense-in-depth approach relies on enforced boundaries around the AI agent’s operational environment to prevent potential worst-case scenarios, acting as guardrails even if the agent’s internal reasoning process becomes compromised or misaligned by sophisticated attacks or unexpected inputs,” Google said.

“This multi-layered approach recognizes that neither purely rule-based systems nor purely AI-based judgment are sufficient on their own.”

Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.


