Internal Safety Collapse: Turning LLMs or AI Agents into Harmful Dataset Generators
Updated Mar 30, 2026 - Python
Research log — tracking the path to AGI through daily paper analysis, replication studies, and architecture experiments
Interactive visualization of the METR AI agent time-horizon benchmark, with exponential projections at 3, 6, 12, 18, 24, and 36 months. Tracks p50/p80 task-completion horizons across 22 frontier models (2019-2026).
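A minimal sketch of the exponential-projection idea behind that visualization, assuming the input is (release date, p50 horizon in minutes) pairs: fit a line to the log of the horizon and extrapolate. The sample points and variable names below are illustrative placeholders, not real METR data.

```python
from datetime import date, timedelta
import numpy as np

# (release date, p50 task-completion horizon in minutes) -- hypothetical values
points = [
    (date(2023, 3, 1), 8.0),
    (date(2024, 3, 1), 30.0),
    (date(2025, 3, 1), 110.0),
]

t0 = points[0][0]
x = np.array([(d - t0).days for d, _ in points], dtype=float)
y = np.log(np.array([h for _, h in points]))

# Exponential trend: horizon(t) = exp(b) * exp(a * t), fit in log space
a, b = np.polyfit(x, y, 1)
print(f"doubling time ~= {np.log(2) / a:.0f} days")

# Project the p50 horizon at 3, 6, 12, 18, 24, and 36 months out
last = points[-1][0]
for months in (3, 6, 12, 18, 24, 36):
    t = (last + timedelta(days=30 * months) - t0).days
    print(f"+{months:2d} mo: ~{np.exp(a * t + b):.0f} min")
```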
Human-as-API for frontier models — compile prompts, deliver via Telegram, inject replies back into Pi
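A minimal sketch of that human-as-API relay, assuming a Telegram bot token and a chat with the human operator; `TOKEN`, `CHAT_ID`, and `ask_human` are hypothetical names, and "inject replies back into Pi" is represented here simply by returning the reply string to the caller. It uses only the documented Telegram Bot API endpoints sendMessage and getUpdates.

```python
import requests

TOKEN = "123456:ABC..."   # hypothetical bot token
CHAT_ID = 987654321       # hypothetical chat id of the human operator
API = f"https://api.telegram.org/bot{TOKEN}"

def ask_human(prompt: str, poll_timeout: int = 50) -> str:
    """Deliver a compiled prompt to the human and block until they reply."""
    requests.post(f"{API}/sendMessage", json={"chat_id": CHAT_ID, "text": prompt})
    offset = None
    while True:
        # Long-poll for updates; offset acknowledges everything seen so far
        resp = requests.get(
            f"{API}/getUpdates",
            params={"timeout": poll_timeout, "offset": offset},
            timeout=poll_timeout + 10,
        ).json()
        for update in resp.get("result", []):
            offset = update["update_id"] + 1
            msg = update.get("message", {})
            if msg.get("chat", {}).get("id") == CHAT_ID and "text" in msg:
                # This is where the reply would be injected back into the agent loop
                return msg["text"]

if __name__ == "__main__":
    print(ask_human("Compiled prompt goes here"))
```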