LLM | Security | Operations in one github repo with good links and pictures.
-
Updated
Apr 21, 2026 - HTML
LLM | Security | Operations in one github repo with good links and pictures.
The Anti-Virus for AI Artifacts & RAG Firewall. A static analysis tool scanning Models and Notebooks for RCE, Datasets and RAG docs for Data Poisoning, PII, and Prompt Injections. Secure your AI Supply Chain.
RAG/LLM Security Scanner identifies critical vulnerabilities in AI-powered applications, including chatbots, virtual assistants, and knowledge retrieval systems.
Scanner for adversarial hubs in RAG and vector databases
Local RAG system with a built-in governance agent that filters sensitive or restricted information with separated agent logging systems to keep privacy and security
RAG Poisoning Lab — Educational AI Security Exercise
Complete roadmap to become an AI Security Engineer from zero to advanced — covering Python, ML, Deep Learning, LLM Engineering, RAG Security, Intrusion Detection, Anomaly Detection, and a full Master Project (AI-Powered Security Analyst).
The most comprehensive open-source mapping of OWASP GenAI risks to industry frameworks — 37 files, 16 frameworks, 3 source lists: LLM Top 10, Agentic Top 10, DSGAI 2026. OT/ICS, EU AI Act, NIST, ISO 27001, ISO 42001, CIS, SAMM, ENISA, NHI, AIVSS.
LLM Attack Testing Toolkit is a structured methodology and mindset framework for testing Large Language Model (LLM) applications against logic abuse, prompt injection, jailbreaks, and workflow manipulation.
An adversarial evaluation framework for LLM-integrated Security Operations Centers
An AI Security Testing Playbook with labs for prompt injection, RAG poisoning, and tool attacks
Deterministic security testing for RAG pipelines: measure retrieval-induced data leakage with CI-ready metrics.
Security case study: RAG prompt injection on AWS Bedrock Knowledge Bases + layered mitigations (retrieval scoping + output gating).
🤖 Build your own local Retrieval-Augmented Generation system for private, offline AI memory without ongoing costs or data privacy concerns.
Dual-Stage Temporal Poisoning Attack on RAG Systems
Omega Walls — a deterministic runtime security layer for RAG and AI agents that detects prompt injection, tool abuse, and data exfiltration via cumulative risk modeling.
Runtime defense for AI agents. 24 inline defenses, 3 output scanners, MCP server, framework adapters.
Reproducible security benchmarking for the Deconvolute SDK and AI system integrity against adversarial attacks.
AI Operations Security Maturity Model and toolkit to secure AI/ML systems across 11 domains and 5 levels—aligned to NIST AI RMF, SAIF, OWASP LLM Top 10, MITRE ATLAS. Practical AI security maturity model with assessment questions, CI/CD policy gates, LLM/RAG controls, infra/accelerator hardening, monitoring, IR, and red teaming.
Add a description, image, and links to the rag-security topic page so that developers can more easily learn about it.
To associate your repository with the rag-security topic, visit your repo's landing page and select "manage topics."