Understanding LLMjacking: A Complete Guide

January 31, 2025

A new security threat, known as LLMjacking (or LLM jacking), has emerged in the cybersecurity landscape. LLMjacking refers to a technique in which threat actors use stolen cloud credentials to infiltrate cloud-hosted large language models (LLMs).

This gives threat actors unauthorized, cost-free access to LLM services offered by major cloud service providers such as AWS, Google Cloud, and Azure. Because LLM services incur significant costs, they have become enticing targets: the attacker consumes the resources while the victim pays the bill.

In essence, this is an act of LLM hijacking. Attackers can use the compromised models for various purposes, from running their own queries to stealing data or poisoning the model.

In this blog post, we’ll dig deeper into LLMjacking: what it is, how it works, the risks it poses, ways to detect and prevent it, and real-world examples.


What is LLMjacking?

Put simply, in LLMjacking, threat actors use stolen cloud credentials to gain unauthorized access to cloud-hosted Large Language Models (LLMs). This can result in significant financial losses since LLMs need considerable computational resources to operate.

For instance, the costs for models like Claude 3 Opus can surpass $100,000 daily, leaving victims to cover the expenses while attackers benefit from free access.

Previously, cybercriminals primarily targeted LLMs that were already accessible through compromised accounts. However, recent patterns indicate that attackers are now adopting a more proactive strategy. 

Nowadays, they use stolen credentials to activate and run LLMs on cloud platforms, such as Amazon Bedrock. This adds to the challenges faced by businesses that are already trying to navigate cloud security risks.


How LLMjacking Works

On a fundamental level, LLMjacking is akin to cryptojacking, where malicious actors covertly use an enterprise’s processing power to mine cryptocurrency. These threats are relatively new, but incidents are growing rapidly.

Here is a detailed overview of the attack vectors and exploitation techniques involved:


Common LLMjacking Attack Vectors

Below is a list of attack vectors:

  • Stolen Cloud Credentials: Attackers often exploit software vulnerabilities that are easy to overlook and use them to steal cloud credentials, which grant unauthorized access to LLM services. For example, in a recent LLMjacking attack, attackers obtained credentials from a system running a vulnerable version of Laravel (CVE-2021-3129). A minimal sketch of how stolen keys are typically validated against an LLM service follows this list.
  • Phishing: Phishing remains one of the most common tricks for stealing credentials. Attackers send deceptive emails that appear to come from genuine sources, prompting users to click a link and submit their sensitive information.
  • API Hijacking: Attackers abuse exposed or stolen API keys to connect directly to LLM endpoints, then wire that access into their own malicious applications.
  • Prompt Injection: Prompt injection is a more sophisticated method in which attackers manipulate an LLM into executing unintended commands, typically by feeding adversarial inputs through APIs or chat interfaces.
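To make the stolen-credential vector concrete, below is a defender-oriented reconstruction in Python of the access check Sysdig has described in the wild: an intentionally invalid InvokeModel request costs nothing, but the error it returns reveals whether the keys actually have access. The credentials and model ID are placeholders, and the exact payloads attackers use vary.

```python
# Hypothetical reconstruction (placeholder credentials and model ID) of how
# stolen AWS keys are validated against Amazon Bedrock.
import boto3
from botocore.exceptions import ClientError

session = boto3.Session(
    aws_access_key_id="AKIA...PLACEHOLDER",      # stolen key (placeholder)
    aws_secret_access_key="...PLACEHOLDER...",   # stolen secret (placeholder)
    region_name="us-east-1",
)
bedrock = session.client("bedrock")
runtime = session.client("bedrock-runtime")

try:
    # Attackers have been observed checking whether invocation logging is
    # enabled -- if it is, their prompts would be recorded by the victim.
    cfg = bedrock.get_model_invocation_logging_configuration()
    print("Invocation logging:", cfg.get("loggingConfig"))

    # An intentionally malformed body costs nothing but still reveals whether
    # the keys can invoke the model: access yields a ValidationException,
    # no access yields an AccessDeniedException.
    runtime.invoke_model(
        modelId="anthropic.claude-3-opus-20240229-v1:0", body="{}"
    )
except ClientError as err:
    code = err.response["Error"]["Code"]
    if code == "ValidationException":
        print("Keys CAN invoke the model")       # access confirmed, for free
    elif code == "AccessDeniedException":
        print("Keys have no Bedrock access")
```

Knowing this pattern is useful defensively: a burst of ValidationException or AccessDeniedException errors on model-invocation APIs is itself a signal worth alerting on.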

How Attackers Exploit LLMs

Attackers can exploit large language models (LLMs) in several ways. Here are some common methods they might use:

  • Unpatched Software: Unpatched software can serve as an entry point for threat actors. They can use it to infiltrate cloud systems and gain unauthorized access to your LLMs.
  • Reverse Proxy Usage: Hackers often use proxy tools, such as OAI Reverse Proxy, to gain unauthorized access to multiple LLM accounts without exposing cloud credentials.
  • Resource Consumption: Once threat actors infiltrate cloud networks, they can use LLMs to generate a high volume of queries, placing a significant financial burden on legitimate account holders.

Risks and Consequences of LLMjacking

LLMjacking can have serious implications extending beyond financial losses: 

Financial Implications

LLMjacking can result in significant financial losses, potentially exceeding $100,000 per day. The extent of the loss depends on the sophistication of the LLM and the scale of the attack. Attackers can execute a high volume of queries, leading to inflated cloud service bills, which can rapidly deplete an organization’s budget.
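To put the daily figure in perspective, here is a rough back-of-the-envelope calculation. The per-token prices match Anthropic’s published Claude 3 Opus pricing at the time of writing ($15 per million input tokens, $75 per million output tokens); the request rate and token counts are purely illustrative assumptions.

```python
# Back-of-the-envelope cost of a sustained LLMjacking run against
# Claude 3 Opus. Prices are per token; volumes are assumptions.
INPUT_PRICE = 15.00 / 1_000_000    # USD per input token
OUTPUT_PRICE = 75.00 / 1_000_000   # USD per output token

requests_per_minute = 60           # assumed steady abuse rate
input_tokens = 2_000               # assumed prompt size per request
output_tokens = 1_000              # assumed completion size per request

cost_per_request = input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE
daily_cost = cost_per_request * requests_per_minute * 60 * 24
print(f"~${cost_per_request:.3f}/request, ~${daily_cost:,.0f}/day")
# ~$0.105/request, ~$9,072/day at this modest rate; pushing the request
# rate toward provider limits is how costs can exceed $100,000/day.
```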

Data Security Risks

LLMjacking poses serious data security threats. Malicious actors can exfiltrate sensitive information, such as proprietary data and customer details, from compromised LLMs. As a result, organizations face costly remediation efforts.

Operational Disruptions

Unauthorized use of large language models can disrupt operations. Cybercriminals may trigger unauthorized actions or overload systems, leading to downtime or degraded performance. Such incidents often lead to prolonged recovery periods, causing significant productivity and revenue losses.

Reputational Damage

LLMjacking instances can damage brand reputation, undermining customer trust in the business. The negative impact on the brand image can persist for years.


How to Detect LLMjacking Attacks

Identifying LLMjacking is not easy, but certain methods can help: 

Implement Strong Access Controls

A weak access control mechanism is one of the primary causes of security breaches, including LLMjacking. Implementing role-based access controls and multi-factor authentication can strengthen overall security. Once these measures are in place, establish a baseline of normal LLM usage; any deviation from this baseline warrants further examination.

Employ Comprehensive Monitoring Tools

Automated monitoring tools can track LLM usage patterns in real time and provide detailed insight into user activity. This proactive approach can minimize LLMjacking risks.
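As a minimal sketch of what such monitoring can look like on AWS, the following Python snippet uses boto3 to pull recent Bedrock InvokeModel events from CloudTrail and count calls per IAM principal. It assumes CloudTrail is capturing these events (Bedrock data-event logging may need to be enabled) and that the caller has cloudtrail:LookupEvents permission.

```python
# Summarize the last 24 hours of Bedrock InvokeModel calls per principal.
import json
from collections import Counter
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
start = datetime.now(timezone.utc) - timedelta(hours=24)

calls_per_principal = Counter()
paginator = cloudtrail.get_paginator("lookup_events")
for page in paginator.paginate(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "InvokeModel"}
    ],
    StartTime=start,
):
    for event in page["Events"]:
        detail = json.loads(event["CloudTrailEvent"])
        principal = detail.get("userIdentity", {}).get("arn", "unknown")
        calls_per_principal[principal] += 1

for principal, count in calls_per_principal.most_common():
    print(f"{count:6d}  {principal}")
```

An unfamiliar principal, or a familiar one suddenly dominating the counts, is exactly the kind of signal worth investigating.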

Utilize Anomaly Detection

Once baseline metrics are established, unusual patterns in LLM usage become much easier to identify, and organizations can quickly spot deviations from normal behavior.
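A toy illustration of the idea, assuming you already collect daily invocation counts for your LLMs: flag any day that sits more than three standard deviations above the historical baseline. The numbers below are made up.

```python
# Flag days whose LLM invocation counts deviate sharply from the baseline.
from statistics import mean, stdev

baseline = [410, 395, 420, 388, 402, 415, 399]  # normal daily call counts
today = 4_350                                   # today's observed count

mu, sigma = mean(baseline), stdev(baseline)
z_score = (today - mu) / sigma

if z_score > 3:
    print(f"ALERT: {today} invocations (z={z_score:.1f}) -- possible LLMjacking")
else:
    print(f"Usage within baseline (z={z_score:.1f})")
```

Real deployments would use richer features (per-principal counts, token volumes, source IPs), but the principle is the same: measure normal first, then alert on deviation.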


How to Prevent LLMjacking Attacks


Here are four key strategies to mitigate LLMjacking security risks:

Implement Strong Authentication

Using robust authentication methods, such as multi-factor authentication (MFA), ensures that only authorized users can access LLMs. By adding an extra verification layer, organizations can mitigate credential theft risks.
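As a small, hedged example of auditing this on AWS, the boto3 snippet below lists IAM users that have no MFA device enrolled. It assumes credentials with iam:ListUsers and iam:ListMFADevices permissions.

```python
# List IAM users that have no MFA device enrolled.
import boto3

iam = boto3.client("iam")

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]
        mfa_devices = iam.list_mfa_devices(UserName=name)["MFADevices"]
        if not mfa_devices:
            print(f"{name}: no MFA device enrolled")
```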

Conduct Regular Security Audits

Security is not a one-and-done task—it’s a continuous process. Conducting regular security audits helps uncover vulnerabilities in the cloud environment that hosts your LLMs. This enables proactive identification of weaknesses before attackers can exploit them and gain access to your network.
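One recurring audit check, sketched below under the assumption of an AWS environment: flag active access keys older than 90 days, since stale long-lived keys are a common source of the leaked credentials that LLMjacking relies on.

```python
# Flag active IAM access keys older than 90 days.
from datetime import datetime, timezone

import boto3

iam = boto3.client("iam")
MAX_AGE_DAYS = 90

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            age = (datetime.now(timezone.utc) - key["CreateDate"]).days
            if key["Status"] == "Active" and age > MAX_AGE_DAYS:
                print(f'{user["UserName"]}: key {key["AccessKeyId"]} is {age} days old')
```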

Establish Strict Access Controls and Permissions

Grant access only to legitimate users, with the role-based permissions they need, within your cloud network. This minimizes the potential attack surface and improves monitoring.
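As an illustrative (not prescriptive) example on AWS, the policy below allows a role to invoke exactly one approved Bedrock model and nothing else. The role name, policy name, and model are placeholders.

```python
# Attach a least-privilege inline policy: one approved Bedrock model only.
import json

import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "bedrock:InvokeModel",
            "Resource": (
                "arn:aws:bedrock:us-east-1::foundation-model/"
                "anthropic.claude-3-haiku-20240307-v1:0"
            ),
        }
    ],
}

iam.put_role_policy(
    RoleName="app-llm-role",                 # placeholder role name
    PolicyName="bedrock-least-privilege",    # placeholder policy name
    PolicyDocument=json.dumps(policy),
)
```

Scoping the Resource to specific model ARNs means stolen credentials for this role cannot be used to spin up or invoke arbitrary, more expensive models.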

Employee Training and Awareness

Conducting regular training sessions can raise awareness about various threats, such as phishing, social engineering tactics, and the importance of avoiding inadvertent sharing of sensitive information with third parties. This helps foster a strong security culture within the organization.


LLMjacking: Real World Examples

Stolen Credentials from Laravel Vulnerabilities

In May 2024, hackers exploited the CVE-2021-3129 vulnerability in Laravel to steal cloud credentials. The breach allowed them to access several LLM services, including Anthropic’s Claude models, and highlighted the true cost of such attacks: charges for the victim could exceed $46,000 per day.

Surge in LLM Requests

Sysdig reported an LLMjacking incident in July 2024 when it detected a 10x increase in LLM requests. Threat actors used stolen AWS credentials to exploit AI runtime services, such as Amazon Bedrock. The incident was discovered when a large number of suspicious requests from unique IP addresses were detected.

These incidents underscore the need for enhanced monitoring and stronger credential protection.


The Future of LLMjacking

With the rise in generative AI use cases, LLMjacking instances are expected to increase. The threat landscape is also evolving as cybercriminals leverage sophisticated methods—such as automation, bots, and open-source tools—to exploit vulnerabilities in cloud-hosted LLMs.


Final Verdict 

On a final note, LLMjacking can expose organizations to significant financial and security risks. Those risks go beyond direct losses, extending to model poisoning, which can lead to malicious content generation and erode public trust in AI. Organizations must therefore strengthen their cloud defense mechanisms to prevent such attacks, and implementing enhanced security protocols will be crucial for quickly identifying and mitigating unauthorized access.

Looking for a reliable partner to protect your cloud network? SecureLayer7 experts can help. Contact us today to learn more. 
