LLMs as Weapons: A New Era of Cyber Threats
David Hussain · 3 min read



The rapid development of Artificial Intelligence, particularly Large Language Models (LLMs) like Google Gemini or OpenAI’s ChatGPT, has the potential to revolutionize our world. Unfortunately, these powerful tools have not gone unnoticed by cybercriminals. Threat actors have long since moved beyond mere productivity gains (e.g., crafting better phishing emails) to actively weaponizing AI in malware and attack techniques.

The Google Threat Intelligence Group (GTIG) warns that we are entering a new phase of AI-driven cyber warfare.

1. PROMPTSTEAL: The Malware That Asks AI for Help

One of the most unsettling examples of the new generation of AI malware is PROMPTSTEAL.

What is PROMPTSTEAL?

Unlike traditional malware that executes fixed commands, PROMPTSTEAL uses LLMs as a kind of malicious external brain.

  • The Modus Operandi: The malware is installed on a target system. Instead of executing predefined actions, PROMPTSTEAL sends an API request to a publicly accessible or compromised LLM (e.g., via the Hugging Face API).
  • The Malicious Prompt: This request contains instructions like: “Conduct an inventory of files on the desktop and exfiltrate sensitive documents.”
  • The Response: The LLM, without recognizing the malicious intent, generates the code or command-line syntax needed for data theft on the specific operating system (Windows, Linux, etc.).
  • The Execution: PROMPTSTEAL executes the AI-generated code.

This approach makes it extremely difficult for traditional signature-based security products to identify the malware, as the executed commands can change dynamically and contextually.
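To make this pattern concrete, here is a minimal, defanged Python sketch of the idea (not PROMPTSTEAL’s actual code): the API endpoint, model, and token are placeholders, and the LLM-generated command is only printed rather than executed.

```python
# Minimal, defanged illustration of the pattern described above.
# Endpoint, model name, and token are placeholders (assumptions), and the
# generated command is printed instead of executed.
import platform
import requests

HF_API_URL = "https://api-inference.huggingface.co/models/SOME-CODE-MODEL"  # placeholder
HF_TOKEN = "hf_xxx"  # placeholder

def ask_llm_for_command() -> str:
    # The malware delegates "what to run" to the model: it asks for a
    # one-line shell command tailored to the victim's operating system.
    prompt = (
        f"Return a single {platform.system()} shell command that lists "
        "all files in the user's Documents folder, nothing else."
    )
    resp = requests.post(
        HF_API_URL,
        headers={"Authorization": f"Bearer {HF_TOKEN}"},
        json={"inputs": prompt, "parameters": {"max_new_tokens": 64}},
        timeout=30,
    )
    resp.raise_for_status()
    # Typical text-generation responses are a list of {"generated_text": ...}.
    return resp.json()[0]["generated_text"]

if __name__ == "__main__":
    command = ask_llm_for_command()
    # Real malware would hand this string to a shell; here we only print it,
    # which is exactly why no static payload signature ever exists on disk.
    print("LLM-generated command:", command)
```

The key design point is that the harmful logic never ships with the binary; to a scanner, the sample looks like a generic HTTP client.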

2. PROMPTFLUX: The Dynamic C2 Infrastructure

Another alarming concept being discussed in the threat actor community is PROMPTFLUX – an approach to obfuscating malware command-and-control communication.

What is PROMPTFLUX?

PROMPTFLUX aims to hide the communication between malware on the infected system and the attacker’s Command-and-Control (C2) server behind an LLM API.

  • The Obfuscation: Instead of communicating directly with an attacker-controlled IP address or domain (which can easily be blocked), the malware sends a seemingly harmless request to a public LLM (e.g., “Write a poem about autumn in Normandy.”).
  • The Malicious Prompt: A specific part of the request, or its metadata, carries a hidden instruction that makes the LLM generate a particular public text, which in turn serves as an encoded instruction for the malware.
  • The Decryption: The malware receives the AI response (the poem), extracts and decrypts the hidden message (the instruction for the next step), and knows what to do.

The result is a highly agile and elusive C2 infrastructure, as communication runs through a trusted LLM domain and the content of the communication is constantly altered by LLM generation.
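As a thought experiment, the receiving end could look something like the following Python sketch. The encoding scheme used here (an acrostic, where the first letter of each line spells out the instruction) is purely an illustrative assumption and not PROMPTFLUX’s actual mechanism.

```python
# Simplified illustration of the C2 idea described above: the operator and the
# implant share a trivial steganographic scheme (here: an acrostic, i.e. the
# first letter of each line). This scheme is an assumption for illustration.
llm_response = """Russet leaves drift over quiet lanes,
Under a pale Norman sky they fall,
Nothing stirs but the late-autumn rains."""

def extract_hidden_instruction(text: str) -> str:
    # Collect the first character of every non-empty line of the "poem".
    return "".join(line.lstrip()[0] for line in text.splitlines() if line.strip())

if __name__ == "__main__":
    # The innocuous-looking poem decodes to a short tasking keyword.
    print(extract_hidden_instruction(llm_response))  # -> RUN
```

Because the implant only ever contacts a reputable LLM domain, blocking the channel means blocking a service that legitimate users also depend on.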

Conclusion: How Can We Protect Ourselves?

The cases of PROMPTSTEAL and PROMPTFLUX show that threat actors regard LLMs not just as tools, but as active components of the attack chain.

Defenders must act now:

  1. Enhanced Endpoint Detection and Response (EDR): Systems must be able to detect unusual behavior patterns (e.g., unexpected API calls to LLM services by unknown processes) rather than relying solely on signatures (see the sketch after this list).
  2. Behavioral Analysis: Focus on the actions of the malware (e.g., attempts at data exfiltration or execution of unusual shell commands), not just its origin.
  3. LLM Providers’ Responsibility: Providers like Google and OpenAI must further harden their models to more effectively prevent “prompt jailbreaks” (techniques for bypassing security and ethical guidelines).
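As a rough illustration of point 1, the following Python sketch flags running processes that currently hold connections to well-known LLM API endpoints. The domain list and the simple polling approach with psutil are illustrative assumptions; a real EDR would rely on its own telemetry and richer context.

```python
# Rough sketch of the behavioral idea in point 1: flag processes that talk to
# well-known LLM API endpoints. Domain list and approach are illustrative only.
import socket
import psutil

LLM_API_DOMAINS = [
    "api-inference.huggingface.co",
    "api.openai.com",
    "generativelanguage.googleapis.com",
]

def resolve(domains):
    # Resolve each domain to its current set of IP addresses.
    ips = set()
    for domain in domains:
        try:
            ips.update(socket.gethostbyname_ex(domain)[2])
        except socket.gaierror:
            pass
    return ips

def find_llm_talkers():
    llm_ips = resolve(LLM_API_DOMAINS)
    # Listing all connections usually requires elevated privileges.
    for conn in psutil.net_connections(kind="inet"):
        if conn.raddr and conn.raddr.ip in llm_ips and conn.pid:
            try:
                name = psutil.Process(conn.pid).name()
            except psutil.NoSuchProcess:
                continue
            print(f"Process {name} (pid {conn.pid}) -> {conn.raddr.ip}")

if __name__ == "__main__":
    find_llm_talkers()
```

A hit is not proof of compromise (browsers and IDE plugins talk to these APIs too), but an unknown or unsigned process doing so is exactly the kind of anomaly worth escalating.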

The threat is dynamic – our defense must be too.
