
Why Cybercriminals Are Using Large Language Models to Automate Global Ransomware


The meeting appeared to be entirely routine. In January 2024, a finance employee at a multinational corporation joined a Zoom call that seemed to include his CFO and several well-known coworkers. The voices were right. The faces were right. Even the background hum of office noise sounded right. Before anyone realized he was the only real person on the call, he had wired $25 million. The CFO was AI-generated. The coworkers were AI-generated. Deepfake models trained on footage of the real people he knew had produced every face, voice, and natural-sounding pause. The funds were gone. The meeting had never taken place.

That story is not a warning about a hypothetical danger. It describes something that happened two years ago. Since then, the technology has kept advancing, and the people using it illegally have not stopped either.

Key facts: AI-powered cybercrime & ransomware (2024–2026)

Key research: "Ransomware 3.0" / "PromptLock", NYU Tandon School of Engineering (Aug 2025)
Cost per AI-powered attack: ~$0.70 using a commercial API; $0 with open-source models
AI phishing success rate: 35% of recipients tricked (Arsen cybersecurity simulation)
AI-generated malware variants: 10,000 variants from one model; 88% evade detection
New malware samples per day: ~450,000 (AV-TEST Institute)
Vishing attack increase (H2 2024): +442% (CrowdStrike 2025 Global Threat Report)
Deepfake attack frequency (2024): one incident every 5 minutes (Entrust/Onfido)
Largest documented deepfake loss: $25 million in a fake Zoom meeting with an AI-generated CFO (Jan 2024)
Criminal AI tools in use: GhostGPT, self-hosted Ollama (no safety filters or guardrails)
Reference / source: NYU Tandon, "LLMs Execute Complete Ransomware Attacks"

Researchers at NYU Tandon School of Engineering released a paper in August 2025 that should, at minimum, have shocked the cybersecurity community. They had built a working proof-of-concept ransomware system, internally dubbed Ransomware 3.0, that used a large language model as its operational brain to carry out every stage of an attack on its own: mapping systems, identifying files, encrypting or exfiltrating data, and generating ransom notes written in the victim's native language and designed to intimidate and coerce.

The system handled everything without a human operator guiding each step. During routine testing, the team uploaded the prototype to VirusTotal. The cybersecurity company ESET discovered it, analyzed it, and declared it active criminal malware, naming it PromptLock. They believed they had found the first AI-powered ransomware in the wild. About what it was, they were right. The only thing they got wrong was who made it.

The researchers also highlighted the cost, and it deserves more attention than it has received. A complete attack cycle, including all phases, consumed roughly 23,000 AI tokens. At commercial API pricing, that works out to about $0.70. Seventy cents. And when attackers run locally hosted open-source models, which is exactly what the more advanced criminal groups are already doing, that figure drops to zero.
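The arithmetic behind that figure is simple enough to check. A minimal sketch, assuming a blended commercial rate of about $30 per million tokens — the 23,000-token count comes from the NYU paper, but the per-token price here is an assumption, since actual API rates vary by provider and model:

```python
# Rough cost estimate for one full AI-driven attack cycle.
# The 23,000-token figure is from the NYU Tandon paper; the
# $30-per-million-token blended rate is a hypothetical commercial price.
TOKENS_PER_ATTACK = 23_000
ASSUMED_USD_PER_MILLION_TOKENS = 30.00  # assumption, not a quoted rate

cost = TOKENS_PER_ATTACK / 1_000_000 * ASSUMED_USD_PER_MILLION_TOKENS
print(f"${cost:.2f} per attack")  # → $0.69 per attack
```

At that rate, a thousand complete attack cycles would cost roughly $690, and the marginal cost falls to zero once the model runs on the attacker's own hardware.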

Ollama, an open-source framework that lets anyone run AI models on their own hardware, has no content filters, no abuse monitoring, and no telemetry that could alert a provider. SentinelOne researchers who have been tracking LLM adoption among ransomware operators report that the more sophisticated criminal organizations are actively moving toward locally hosted, self-modified open-source models, precisely because they know that using commercial APIs leaves a trace.

One criminal tool in particular illustrates how organized this has become. GhostGPT has been circulating in underground forums since at least late 2024. Unlike ChatGPT, Claude, or Gemini, it has no restrictions at all: it will not refuse to write phishing content, generate malware, or hunt for software flaws. In February 2025, the FBI and CISA released a joint advisory about it. It is still out there. It is still in use. And its users may not be the technically proficient attackers that cybersecurity professionals have been trained to spot.

It may not seem like it, but that final point is crucial. Instead of describing LLMs as creators of novel attack techniques, SentinelOne’s researchers refer to them as “operational accelerators”—a term that sounds clinical but has a concerning connotation. Attackers with skill are becoming quicker. Additionally, attackers who previously lacked the technical know-how to put together ransomware-as-a-service infrastructure are now truly dangerous.

Launching a successful ransomware campaign now requires little more than a laptop and some patience, rather than real expertise. AI-generated malware can spin 10,000 distinct variants out of a single model, and 88% of those variants evade conventional detection. The AV-TEST Institute records roughly 450,000 new malware samples every day. Security software that relies on matching known signatures is, in effect, fighting the last war.
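Part of why signature-based detection struggles is mechanical: even a one-byte change to a payload yields a completely different file hash, so every machine-generated variant looks "new" to a scanner matching known hashes. A minimal illustration, using harmless placeholder strings rather than real payloads:

```python
import hashlib

# Two "variants" differing by a single byte - stand-ins for
# machine-generated malware builds, not actual malicious code.
variant_a = b"payload: connect, enumerate, encrypt (build 1)"
variant_b = b"payload: connect, enumerate, encrypt (build 2)"

hash_a = hashlib.sha256(variant_a).hexdigest()
hash_b = hashlib.sha256(variant_b).hexdigest()

# A signature database keyed on hash_a will never match variant_b.
print(hash_a == hash_b)  # → False
```

This is why modern defenses lean on behavioral and heuristic detection rather than hash lists alone; a model that emits thousands of trivially mutated builds defeats exact-match signatures by construction.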

Against this backdrop, it is hard to ignore how the organizational structure of cybercrime itself is changing. The era of big, recognizable ransomware cartels like LockBit, Conti, and REvil, which operated with something approaching corporate structure and brand recognition, is giving way to something messier and harder to track.

Smaller crews such as Termite, Punisher, and Obscura are emerging, attacking, disbanding, and reorganizing under new names. The line between organized crime and geopolitical operations is blurring as state-aligned actors become more involved in criminal affiliate ecosystems, and attribution is becoming genuinely difficult. If anything, this fragmented model is better suited to AI-assisted automation: smaller crews with lower overhead, more attacks against more targets with fewer personnel, and attacks that cost less than a cup of coffee. Whether defenders can close that gap before it widens considerably remains, frankly, an open question.
