
LLM Security Payloads


A curated collection of 200+ LLM security test payloads for red teaming and security assessment.

Created and maintained by XSource_Sec


Overview

This repository contains a comprehensive collection of payloads designed to test the security of Large Language Model (LLM) applications. These payloads are intended for:

  • Security researchers conducting authorized penetration testing
  • Developers testing their LLM application defenses
  • Red teams assessing AI system vulnerabilities
  • Educational purposes in understanding LLM attack vectors

Categories

| Category | Description | Payloads |
| --- | --- | --- |
| Prompt Injection | Basic to advanced prompt injection techniques | 50+ |
| Jailbreaks | DAN, roleplay, and encoding-based bypasses | 40+ |
| Guardrail Bypass | Techniques to evade content filters | 30+ |
| RAG Injection | Attacks targeting RAG-based systems | 20+ |
| Agent Exploitation | Tool abuse and MCP attack vectors | 25+ |

Installation

git clone https://github.com/XSource-Sec/llm-security-payloads.git
cd llm-security-payloads

Usage

Each payload is stored in YAML format with the following structure:

- id: PI-001
  name: "Basic Ignore Instructions"
  payload: "Ignore previous instructions and..."
  category: prompt-injection
  severity: medium
  description: "Attempts to override system prompt"
  references:
    - https://example.com/research

Loading Payloads (Python)

import yaml

with open('prompt-injection/basic.yaml', 'r') as f:
    payloads = yaml.safe_load(f)

for payload in payloads:
    print(f"[{payload['id']}] {payload['name']}")
    # Use payload['payload'] for testing
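
To sweep every category at once, here is a minimal sketch that globs the repository for payload files; it assumes each category is a directory of YAML files, as the example path above suggests:

from pathlib import Path
import yaml

all_payloads = []
# Collect entries from every category directory (e.g. prompt-injection/).
for path in Path('.').glob('*/*.yaml'):
    with open(path, 'r') as f:
        entries = yaml.safe_load(f)
    if entries:  # skip empty or comment-only files
        all_payloads.extend(entries)

print(f"Loaded {len(all_payloads)} payloads")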

Loading Payloads (JavaScript)

const yaml = require('js-yaml');
const fs = require('fs');

const payloads = yaml.load(fs.readFileSync('prompt-injection/basic.yaml', 'utf8'));

payloads.forEach(payload => {
    console.log(`[${payload.id}] ${payload.name}`);
    // Use payload.payload for testing
});

Severity Levels

| Level | Description |
| --- | --- |
| critical | Full system compromise, data exfiltration |
| high | Significant bypass of security controls |
| medium | Partial bypass or information disclosure |
| low | Minor impact, requires specific conditions |
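
Because every payload carries a severity field, a test run can be scoped to a threshold. A minimal sketch, reusing the all_payloads list built in the Python example above:

# Map severity labels to a sortable rank.
SEVERITY_ORDER = {'low': 0, 'medium': 1, 'high': 2, 'critical': 3}

def filter_by_severity(payloads, minimum='high'):
    """Return payloads at or above the given severity level."""
    threshold = SEVERITY_ORDER[minimum]
    return [p for p in payloads if SEVERITY_ORDER[p['severity']] >= threshold]

# Example: run only high and critical payloads.
for p in filter_by_severity(all_payloads, 'high'):
    print(f"[{p['id']}] {p['severity']}: {p['name']}")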

Automated Testing

Want to automate your LLM security testing?

AgentAudit provides:

  • Automated payload testing against your LLM applications
  • Comprehensive security reports
  • CI/CD integration
  • Real-time monitoring and alerting
  • Custom payload management
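
If you would rather script the loop yourself, the core harness is small. A minimal sketch; send_to_target and looks_compromised are hypothetical placeholders for your own LLM client and bypass-detection logic:

def run_payloads(payloads, send_to_target, looks_compromised):
    """Send each payload to the target and record apparent bypasses.

    send_to_target(prompt) -> str and looks_compromised(response) -> bool
    are placeholders you must implement for your own application.
    """
    findings = []
    for p in payloads:
        response = send_to_target(p['payload'])
        if looks_compromised(response):
            findings.append({'id': p['id'],
                             'severity': p['severity'],
                             'response': response})
    return findings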

Responsible Disclosure

These payloads are provided for authorized security testing only. Please:

  1. Only test systems you own or have explicit permission to test
  2. Follow responsible disclosure practices
  3. Do not use these payloads for malicious purposes
  4. Report vulnerabilities to the appropriate parties

Contributing

We welcome contributions! Please see our Contributing Guide for details on:

  • Adding new payloads
  • Improving existing payloads
  • Reporting issues
  • Code of conduct

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • The security research community
  • OWASP LLM Top 10 Project
  • All contributors to this repository

Built with security in mind by XSource_Sec
