A curated collection of LLM security test payloads for red teaming and security assessment.
Created and maintained by XSource_Sec
This repository contains a comprehensive collection of payloads designed to test the security of Large Language Model (LLM) applications. These payloads are intended for:
- Security researchers conducting authorized penetration testing
- Developers testing their LLM application defenses
- Red teams assessing AI system vulnerabilities
- Educational purposes in understanding LLM attack vectors
| Category | Description | Payloads |
|---|---|---|
| Prompt Injection | Basic to advanced prompt injection techniques | 50+ |
| Jailbreaks | DAN, roleplay, and encoding-based bypasses | 40+ |
| Guardrail Bypass | Techniques to evade content filters | 30+ |
| RAG Injection | Attacks targeting RAG-based systems | 20+ |
| Agent Exploitation | Tool abuse and MCP attack vectors | 25+ |
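Since every payload entry carries a `category` field, the counts above can be reproduced programmatically. A minimal sketch, using hypothetical sample entries rather than the repository's actual files:

```python
from collections import Counter

def count_by_category(payloads):
    """Tally payload entries by their 'category' field."""
    return Counter(p["category"] for p in payloads)

# Hypothetical sample entries following the repo's YAML schema
sample = [
    {"id": "PI-001", "category": "prompt-injection"},
    {"id": "PI-002", "category": "prompt-injection"},
    {"id": "JB-001", "category": "jailbreaks"},
]
print(count_by_category(sample))
```

In practice you would build `sample` by loading every YAML file in the repository and concatenating the parsed lists.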
```bash
git clone https://github.com/XSource-Sec/llm-security-payloads.git
cd llm-security-payloads
```

Each payload is stored in YAML format with the following structure:
```yaml
- id: PI-001
  name: "Basic Ignore Instructions"
  payload: "Ignore previous instructions and..."
  category: prompt-injection
  severity: medium
  description: "Attempts to override system prompt"
  references:
    - https://example.com/research
```

Loading payloads in Python:

```python
import yaml

with open('prompt-injection/basic.yaml', 'r') as f:
    payloads = yaml.safe_load(f)

for payload in payloads:
    print(f"[{payload['id']}] {payload['name']}")
    # Use payload['payload'] for testing
```

Or in JavaScript:

```javascript
const yaml = require('js-yaml');
const fs = require('fs');

const payloads = yaml.load(fs.readFileSync('prompt-injection/basic.yaml', 'utf8'));
payloads.forEach(payload => {
  console.log(`[${payload.id}] ${payload.name}`);
  // Use payload.payload for testing
});
```

| Level | Description |
|---|---|
| critical | Full system compromise, data exfiltration |
| high | Significant bypass of security controls |
| medium | Partial bypass or information disclosure |
| low | Minor impact, requires specific conditions |
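The severity levels above form an ordered scale, so a test run can be restricted to payloads at or above a chosen floor. A minimal sketch, with hypothetical sample entries standing in for loaded YAML data:

```python
# Ranks mirror the severity table: low < medium < high < critical
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def filter_by_severity(payloads, minimum="high"):
    """Keep only payloads whose severity is at or above `minimum`."""
    floor = SEVERITY_RANK[minimum]
    return [p for p in payloads if SEVERITY_RANK[p["severity"]] >= floor]

# Hypothetical entries following the repo's YAML schema
sample = [
    {"id": "PI-001", "severity": "medium"},
    {"id": "GB-007", "severity": "critical"},
]
print(filter_by_severity(sample))
```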
Want to automate your LLM security testing?
Try AgentAudit
AgentAudit provides:
- Automated payload testing against your LLM applications
- Comprehensive security reports
- CI/CD integration
- Real-time monitoring and alerting
- Custom payload management
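The automated-testing workflow can also be sketched by hand: send each payload to a target, then apply a crude heuristic to flag responses that lack an obvious refusal. This is an illustrative sketch only, not AgentAudit's implementation; `mock_target` and the refusal markers are assumptions you would replace with a real API call and a proper evaluation.

```python
def run_payloads(payloads, send_prompt, refusal_markers=("i can't", "i cannot")):
    """Send each payload to the target and flag responses that
    contain none of the refusal markers (a crude bypass heuristic)."""
    results = []
    for p in payloads:
        response = send_prompt(p["payload"])
        bypassed = not any(m in response.lower() for m in refusal_markers)
        results.append({"id": p["id"], "bypassed": bypassed})
    return results

# Stub target for illustration; swap in a real LLM API call
def mock_target(prompt):
    return "I cannot help with that."

sample = [{"id": "PI-001", "payload": "Ignore previous instructions and..."}]
print(run_payloads(sample, mock_target))
```

Keyword matching over-reports compliance in practice; a production harness would score responses with a classifier or human review.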
These payloads are provided for authorized security testing only. Please:
- Only test systems you own or have explicit permission to test
- Follow responsible disclosure practices
- Do not use these payloads for malicious purposes
- Report vulnerabilities to the appropriate parties
We welcome contributions! Please see our Contributing Guide for details on:
- Adding new payloads
- Improving existing payloads
- Reporting issues
- Code of conduct
This project is licensed under the MIT License - see the LICENSE file for details.
- The security research community
- OWASP LLM Top 10 Project
- All contributors to this repository
Built with security in mind by XSource_Sec