Threat researchers have discovered the first AI-powered ransomware, named PromptLock, which uses Lua scripts to steal and encrypt data on Windows, macOS, and Linux systems.
The malware uses OpenAI's gpt-oss:20b model via the Ollama API to dynamically generate malicious Lua scripts from hard-coded prompts.
How PromptLock works
According to ESET researchers, PromptLock is written in Golang and uses the Ollama API to access the gpt-oss:20b large language model. The LLM is hosted on a remote server, to which the threat actor connects through a proxy tunnel.
The malware uses hard-coded prompts that instruct the model to dynamically generate malicious Lua scripts for tasks including local filesystem enumeration, target file inspection, data exfiltration, and file encryption.

Source: ESET
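For illustration, below is a minimal Go sketch of the pattern ESET describes: a hard-coded prompt is posted to an Ollama /api/generate endpoint and the model's reply comes back as freshly generated Lua code. The server address and prompt text here are placeholders; ESET has not published PromptLock's actual prompts or infrastructure.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// generateRequest mirrors the JSON body of Ollama's /api/generate endpoint.
type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

// generateResponse captures only the field we need from Ollama's reply.
type generateResponse struct {
	Response string `json:"response"`
}

func main() {
	// Placeholder endpoint; in PromptLock's case this would point at the
	// attacker-controlled server reached through a proxy tunnel.
	const ollamaURL = "http://127.0.0.1:11434/api/generate"

	// Illustrative hard-coded prompt; the real prompts have not been published.
	req := generateRequest{
		Model:  "gpt-oss:20b",
		Prompt: "Write a Lua script that lists every file under the user's home directory.",
		Stream: false,
	}

	body, _ := json.Marshal(req)
	resp, err := http.Post(ollamaURL, "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out generateResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}

	// The returned text is a freshly generated Lua script that the malware
	// would hand to an embedded Lua interpreter.
	fmt.Println(out.Response)
}
```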
The researchers also mention data destruction functionality, but that feature has not been implemented.
For file encryption, PromptLock uses the lightweight SPECK 128-bit algorithm, a rather unusual choice for ransomware that is considered suitable mainly for RFID applications.

Source: ESET
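The report describes the cipher only as "SPECK 128-bit," without specifying the key length. As a rough illustration of why the algorithm is regarded as lightweight rather than ransomware-grade, here is a minimal Go sketch of the SPECK128/128 variant (64-bit words, 32 rounds) based on the cipher's public specification; the key and plaintext are the design paper's test vector, not anything taken from PromptLock.

```go
package main

import (
	"fmt"
	"math/bits"
)

const rounds = 32 // SPECK128/128 uses 32 rounds

// expandKey derives the per-round keys from a 128-bit key (two 64-bit words),
// following the publicly documented SPECK key schedule.
func expandKey(k0, l0 uint64) [rounds]uint64 {
	var rk [rounds]uint64
	rk[0] = k0
	k, l := k0, l0
	for i := uint64(0); i < rounds-1; i++ {
		l = (k + bits.RotateLeft64(l, -8)) ^ i
		k = bits.RotateLeft64(k, 3) ^ l
		rk[i+1] = k
	}
	return rk
}

// encryptBlock applies the SPECK round function to one 128-bit block (x, y).
// Each round is only a rotation, a 64-bit addition, and two XORs, which is
// why the cipher counts as lightweight.
func encryptBlock(x, y uint64, rk [rounds]uint64) (uint64, uint64) {
	for i := 0; i < rounds; i++ {
		x = (bits.RotateLeft64(x, -8) + y) ^ rk[i]
		y = bits.RotateLeft64(y, 3) ^ x
	}
	return x, y
}

func main() {
	// SPECK128/128 test vector from the cipher's design paper.
	rk := expandKey(0x0706050403020100, 0x0f0e0d0c0b0a0908)
	x, y := encryptBlock(0x6c61766975716520, 0x7469206564616d20, rk)
	fmt.Printf("%016x %016x\n", x, y) // expected: a65d985179783265 7860fedf5c570d18
}
```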
Only a demo for now
ESET told BleepingComputer that PromptLock has not appeared in its telemetry; rather, the researchers discovered it on VirusTotal.
The cybersecurity company believes that PromptLock is a proof-of-concept or a work in progress, not an active ransomware strain in the wild.
Moreover, several indicators suggest that this is a concept tool rather than a real threat at present. Clues pointing that way include the use of a weak encryption cipher (SPECK 128-bit), a hard-coded Bitcoin address linked to Satoshi Nakamoto, and the fact that the data destruction capability has not been implemented.
After ESET published details about PromptLock, a security researcher claimed that the malware was their project and had somehow leaked.
Nonetheless, the appearance of PromptLock is significant because it demonstrates that AI can be weaponized in malware workflows, offering cross-platform capabilities, operational flexibility, and evasion while lowering the bar to entry into cybercrime.
This evolution became evident in July, when Ukraine's CERT reported the discovery of the LameHug malware, an LLM-powered tool that uses the Hugging Face API and Alibaba's Qwen-2.5-Coder-32B to generate Windows shell commands on the fly.
LameHug, believed to be deployed by Russian hackers of the APT28 group, relies on API calls instead of PromptLock's proxying. Both implementations achieve the same practical result, though the latter is more complex and riskier.
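The article does not reproduce LameHug's actual requests, but the general pattern, posting a prompt to a hosted model and receiving command text back, can be sketched as follows. The endpoint shown is the standard Hugging Face Inference API; the model ID, token handling, and prompt are assumptions for illustration only.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Assumed model ID and prompt; CERT-UA has not published LameHug's exact
	// requests. The endpoint is the standard Hugging Face Inference API.
	const apiURL = "https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct"

	payload, _ := json.Marshal(map[string]string{
		"inputs": "Produce a Windows cmd.exe one-liner that lists files in the Documents folder.",
	})

	req, _ := http.NewRequest("POST", apiURL, bytes.NewReader(payload))
	req.Header.Set("Authorization", "Bearer "+os.Getenv("HF_TOKEN")) // API token supplied by the operator
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Text-generation models reply with a JSON array of {"generated_text": ...}.
	var out []struct {
		GeneratedText string `json:"generated_text"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	if len(out) > 0 {
		fmt.Println(out[0].GeneratedText) // shell command generated on the fly
	}
}
```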