HomeCyber SecurityConsultants Discover AI Browsers Can Be Tricked by PromptFix Exploit to Run...

Consultants Discover AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts


Consultants Discover AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts

Cybersecurity researchers have demonstrated a brand new immediate injection approach referred to as PromptFix that methods a generative synthetic intelligence (GenAI) mannequin into finishing up meant actions by embedding the malicious instruction inside a pretend CAPTCHA test on an internet web page.

Described by Guardio Labs an “AI-era tackle the ClickFix rip-off,” the assault approach demonstrates how AI-driven browsers, reminiscent of Perplexity’s Comet, that promise to automate mundane duties like looking for gadgets on-line or dealing with emails on behalf of customers might be deceived into interacting with phishing touchdown pages or fraudulent lookalike storefronts with out the human consumer’s information or intervention.

“With PromptFix, the strategy is totally different: We do not attempt to glitch the mannequin into obedience,” Guardio researchers Nati Tal and Shaked Chen stated. “As a substitute, we mislead it utilizing methods borrowed from the human social engineering playbook – interesting on to its core design purpose: to assist its human rapidly, utterly, and with out hesitation.”

This results in a brand new actuality that the corporate calls Scamlexity, a portmanteau of the phrases “rip-off” and “complexity,” the place agentic AI – techniques that may autonomously pursue targets, make selections, and take actions with minimal human supervision – takes scams to a complete new degree.

Cybersecurity

With AI-powered coding assistants like Lovable confirmed to be inclined to methods like VibeScamming, an attacker can successfully trick the AI mannequin into handing over delicate data or finishing up purchases on lookalike web sites masquerading as Walmart.

All of this may be completed by issuing an instruction so simple as “Purchase me an Apple Watch” after the human lands on the bogus web site in query by one of many a number of strategies, like social media advertisements, spam messages, or SEO (search engine marketing) poisoning.

Scamlexity is “a fancy new period of scams, the place AI comfort collides with a brand new, invisible rip-off floor and people turn out to be the collateral harm,” Guardio stated.

The cybersecurity firm stated it ran the take a look at a number of instances on Comet, with the browser solely stopping often and asking the human consumer to finish the checkout course of manually. However in a number of cases, the browser went all in, including the product to the cart and auto-filling the consumer’s saved handle and bank card particulars with out asking for his or her affirmation on a pretend buying web site.

Comet AI Browser

In the same vein, it has been discovered that asking Comet to test their e-mail messages for any motion gadgets is sufficient to parse spam emails purporting to be from their financial institution, robotically click on on an embedded hyperlink within the message, and enter the login credentials on the phony login web page.

“The consequence: an ideal belief chain gone rogue. By dealing with all the interplay from e-mail to web site, Comet successfully vouched for the phishing web page,” Guardio stated. “The human by no means noticed the suspicious sender handle, by no means hovered over the hyperlink, and by no means had the possibility to query the area.”

That is not all. As immediate injections proceed to plague AI techniques in methods direct and oblique, AI Browsers will even must cope with hidden prompts hid inside an internet web page that is invisible to the human consumer, however might be parsed by the AI mannequin to set off unintended actions.

This so-called PromptFix assault is designed to persuade the AI mannequin to click on on invisible buttons in an internet web page to bypass CAPTCHA checks and obtain malicious payloads with none involvement on the a part of the human consumer, leading to a drive-by obtain assault.

“PromptFix works solely on Comet (which actually features as an AI Agent) and, for that matter, additionally on ChatGPT’s Agent Mode, the place we efficiently obtained it to click on the button or perform actions as instructed,” Guardio informed The Hacker Information. “The distinction is that in ChatGPT’s case, the downloaded file lands inside its digital surroundings, circuitously in your pc, since the whole lot nonetheless runs in a sandboxed setup.”

The findings present the necessity for AI techniques to transcend reactive defenses to anticipate, detect, and neutralize these assaults by constructing sturdy guardrails for phishing detection, URL repute checks, area spoofing, and malicious information.

The event additionally comes as adversaries are more and more leaning on GenAI platforms like web site builders and writing assistants to craft life like phishing content material, clone trusted manufacturers, and automate large-scale deployment utilizing companies like low-code web site builders, per Palo Alto Networks Unit 42.

What’s extra, AI coding assistants can inadvertently expose proprietary code or delicate mental property, creating potential entry factors for focused assaults, the corporate added.

Identity Security Risk Assessment

Enterprise safety agency Proofpoint stated it has noticed “quite a few campaigns leveraging Lovable companies to distribute multi-factor authentication (MFA) phishing kits like Tycoon, malware reminiscent of cryptocurrency pockets drainers or malware loaders, and phishing kits concentrating on bank card and private data.”

The counterfeit web sites created utilizing Lovable result in CAPTCHA checks that, when solved, redirect to a Microsoft-branded credential phishing web page. Different web sites have been discovered to impersonate delivery and logistics companies like UPS to dupe victims into getting into their private and monetary data, or make them pages that obtain distant entry trojans like zgRAT.

Lovable URLs have additionally been abused for funding scams and banking credential phishing, considerably reducing the barrier to entry for cybercrime. Lovable has since taken down the websites and carried out AI-driven safety protections to stop the creation of malicious web sites.

Different campaigns have capitalized on misleading deepfaked content material distributed on YouTube and social media platforms to redirect customers to fraudulent funding websites. These AI buying and selling scams additionally depend on pretend blogs and evaluation websites, typically hosted on platforms like Medium, Blogger, and Pinterest, to create a false sense of legitimacy.

As soon as customers land on these bogus platforms, they’re requested to enroll in a buying and selling account and instructed by way of e-mail by their “account supervisor” to make a small preliminary deposit wherever between $100 and $250 with a purpose to supposedly activate the accounts. The buying and selling platform additionally urges them to supply proof of id for verification and enter their cryptocurrency pockets, bank card, or web banking particulars as fee strategies.

These campaigns, per Group-IB, have focused customers in a number of nations, together with India, the U.Ok., Germany, France, Spain, Belgium, Mexico, Canada, Australia, the Czech Republic, Argentina, Japan, and Turkey. Nevertheless, the fraudulent platforms are inaccessible from IP addresses originating within the U.S. and Israel.

“GenAI enhances menace actors’ operations moderately than changing present assault methodologies,” CrowdStrike stated in its Menace Looking Report for 2025. “Menace actors of all motivations and ability ranges will virtually definitely improve their use of GenAI instruments for social engineering within the near-to mid-term, significantly as these instruments turn out to be extra obtainable, user-friendly, and complex.”

RELATED ARTICLES

LEAVE A REPLY

Please enter your comment!
Please enter your name here

- Advertisment -
Google search engine

Most Popular

Recent Comments