Analyze text to determine whether it was written by a human, generated by AI, or created through human-AI collaboration. The tool provides detailed reasoning across 8 detection dimensions, local slop detection, color-coded passage highlighting, and cryptographic attestation support.
- Two-stage detection — Local heuristic-based slop detection followed by semantic LLM analysis for improved accuracy
- Local slop detector — Fast pattern-based detection of 50+ known AI writing patterns without API calls
- Detection score and classification — Every analysis returns a 0-100 score (0 = definitely human, 100 = definitely AI) with a three-way classification: Human, AI, or Collab
- Confidence levels — High, medium, or low confidence in the classification based on signal strength
- 8 detection dimensions — Evaluates vocabulary patterns, sentence structure, coherence, stylistic markers, factual patterns, structural patterns, error patterns, and temperature markers
- AI and human indicators — Specific patterns identified in the text that suggest AI or human authorship
- Detailed reasoning — Per-dimension analysis with findings and directional leans
- Highlighted passages — Color-coded text showing AI-like (orange), human-like (green), or collaboration (gray) segments with tooltips explaining each
- attest.ink integration — Generate cryptographic attestations recording the AI/human breakdown directly from analysis results
- Attestation signing — Sign attestations with MetaMask (Ethereum) or password-derived keys for cryptographic proof
- Email header generation — Copy a formatted header showing analysis breakdown for prepending to emails/newsletters
- Email forwarding service — Deploy a companion API to analyze emails by forwarding them (see email service docs)
- Programmatic API — JavaScript functions for programmatic analysis (`window.whodoneitAnalyze`, `window.whodoneitEmailHeader`)
- URL fetching — Paste a blog post or article URL to automatically extract and analyze its main content
- Fetch & Analyze — One-click extraction and immediate analysis from any URL
- Multi-provider support — Puter (free), OpenRouter, Anthropic, OpenAI, Google Gemini, Ollama (local), and any OpenAI-compatible custom endpoint
- Fully serverless — Runs entirely in the browser with no backend required (GitHub Pages compatible)
- Share Content — Share a prefilled URL of your content, with or without auto-analyze, via a share modal
- URL routing — Prefill content or fetch URLs via query parameters and optionally auto-analyze on page load
- Smart loading UX — Preflight connection checks, progressive status updates, and slow-generation hints
- Automatic fallback — When Puter errors occur, a guided Ollama setup walkthrough appears
The local slop detector scans text for known AI writing patterns before sending it to the LLM. This provides:
- Instant feedback — See a preliminary score in ~10ms without any API call
- Improved calibration — LLM receives pre-analysis data to guide its semantic evaluation
- Transparency — Specific patterns are identified with exact quotes from the text
| Priority | Examples |
|---|---|
| Critical | Antithetical constructions ("It's not about X, it's about Y"), sycophantic openings ("Great question!"), colon declarations, AI self-references, excessive lists |
| High | Formulaic transitions (However, Moreover, Furthermore), formulaic openers/closers, em dash overuse, pseudo-profound statements |
| Medium | Intensifier clusters (incredibly, extremely), hedge words, AI vocabulary (landscape, ecosystem, leverage, delve, unpack), balanced perspectives, passive voice |
| Low | Semicolons before transitions, nominalizations |
- Informal language (gonna, wanna, btw, lol, yeah, nah)
- Personal anecdotes ("I remember when", "My friend told me")
- Genuine uncertainty ("I'm not sure but", "Don't quote me")
- Colloquialisms and emotional authenticity (ugh, meh, yikes)
- Self-corrections and tangents ("wait, actually", "sorry I'm rambling")
- Specific concrete details (exact dates, dollar amounts, named people)
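The two pattern families above (prioritized AI slop patterns and human indicators) can be sketched as a weighted regex scan. Every pattern and weight below is illustrative only, not the tool's actual tables:

```javascript
// Minimal slop-detector sketch: AI patterns raise the score,
// human indicators lower it. Weights here are invented for the example.
const AI_PATTERNS = [
  { re: /it'?s not about \w+[,;] it'?s about/i, weight: 25 }, // critical: antithetical construction
  { re: /^great question!/im,                   weight: 25 }, // critical: sycophantic opening
  { re: /\b(however|moreover|furthermore),/gi,  weight: 10 }, // high: formulaic transitions
  { re: /\b(landscape|ecosystem|leverage|delve|unpack)\b/gi, weight: 5 }, // medium: AI vocabulary
];

const HUMAN_PATTERNS = [
  { re: /\b(gonna|wanna|btw|lol|yeah|nah)\b/gi, weight: 10 }, // informal language
  { re: /\b(ugh|meh|yikes)\b/gi,                weight: 10 }, // emotional authenticity
  { re: /wait, actually/gi,                     weight: 10 }, // self-corrections
];

function slopScore(text) {
  let score = 50; // start neutral
  const hits = []; // exact quotes, for transparency
  for (const { re, weight } of AI_PATTERNS) {
    const m = text.match(re);
    if (m) { score += weight * m.length; hits.push(m[0]); }
  }
  for (const { re, weight } of HUMAN_PATTERNS) {
    const m = text.match(re);
    if (m) { score -= weight * m.length; hits.push(m[0]); }
  }
  return { score: Math.max(0, Math.min(100, score)), hits };
}
```

Because this pass is pure pattern matching, it returns in milliseconds with no API call, which is what makes the instant preliminary score possible.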
- Select an API provider (Puter GPT-OSS is free and requires no key)
- Either paste content directly into the text area, or enter a URL and click Fetch or Fetch & Analyze
- Click Analyze Content
- The tool runs local slop detection first, then sends content with pre-analysis to the LLM
- Get back a structured analysis with score, classification, confidence, AI/human indicators, detailed reasoning across 8 dimensions, and highlighted passages
The two-stage approach combines fast heuristic pattern matching with deep semantic analysis for more accurate and explainable results.
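The two-stage flow can be pictured as a small pipeline (function names are hypothetical; the LLM call is injected so the sketch stays provider-agnostic):

```javascript
// Two-stage sketch: run the local heuristic pass first, then hand its
// findings to a semantic LLM pass. `localSlopDetect` and `llmAnalyze`
// stand in for the tool's internals.
async function analyze(text, localSlopDetect, llmAnalyze) {
  const preAnalysis = localSlopDetect(text);            // fast, no API call
  const semantic = await llmAnalyze(text, preAnalysis); // deep semantic pass
  return { ...semantic, preAnalysis };                  // keep both for transparency
}
```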
Every piece of content is evaluated across these 8 dimensions:
| Dimension | What It Examines |
|---|---|
| Vocabulary Patterns | Word choice, lexical diversity, unusual terms, jargon usage, and whether vocabulary feels natural or curated |
| Sentence Structure | Length variation, complexity, rhythm, parallelism, and whether structure feels organic or formulaic |
| Coherence & Flow | Logical transitions, narrative consistency, paragraph connections, and natural progression |
| Stylistic Markers | Personal voice, idioms, cultural references, humor, and authentic stylistic fingerprints |
| Factual Patterns | Hedging language, certainty claims, verifiable statements, and how facts are presented |
| Structural Patterns | Formatting, organization, formulaic elements, lists, and overall document structure |
| Error Patterns | Typos, grammatical quirks, inconsistencies, and whether errors feel human or suspiciously absent |
| Temperature Markers | Predictability, creative variation, unexpected elements, and statistical "temperature" of word choices |
Each dimension reports whether it leans toward AI, human, or is inconclusive, along with specific findings.
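One plausible way to fold per-dimension leans into a classification and confidence level (thresholds here are illustrative, not the tool's actual rules):

```javascript
// Sketch: count decisive dimensions; more decisive signals => higher confidence.
function summarize(dimensions) {
  // dimensions: [{ name, lean: "ai" | "human" | "inconclusive" }, ...]
  const ai = dimensions.filter(d => d.lean === "ai").length;
  const human = dimensions.filter(d => d.lean === "human").length;
  const decisive = ai + human;
  const classification = ai > human ? "AI" : human > ai ? "Human" : "Collab";
  const confidence = decisive >= 6 ? "high" : decisive >= 3 ? "medium" : "low";
  return { classification, confidence };
}
```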
- Detection score (0-100) with Human/AI/Collab classification
- Confidence level (high, medium, low)
- Human/AI probability bar visualization
- AI indicators — specific patterns suggesting AI authorship
- Human indicators — specific patterns suggesting human authorship
- Detailed reasoning — per-dimension analysis with findings and directional leans
- Original text with color-coded passages:
  - Green — Human-like characteristics
  - Orange — AI-like characteristics
  - Gray — Collaboration/mixed characteristics
- Click any passage to see the explanation
- Legend for quick reference
Full structured response from the model, syntax-highlighted and downloadable as JSON, TXT, or Markdown.
| Provider | apiMode | Auth | CORS | Notes |
|---|---|---|---|---|
| Puter GPT-OSS | `puter` | None required | Yes | Free, no API key. Default provider. |
| OpenRouter | `openrouter` | Bearer token | Yes | Hundreds of models. Recommended for browser use with a key. |
| Anthropic | `anthropic` | `x-api-key` | With header | Uses Messages API format. |
| OpenAI | `openai` | Bearer token | No | Requires CORS proxy for browser use. |
| Google Gemini | `google` | API key in URL | Yes | Uses generateContent format. |
| Ollama | `ollama` | None required | Requires `OLLAMA_ORIGINS=*` | Local models. No API key, works from GitHub Pages. |
| Custom | `custom` | Bearer token | Varies | Any OpenAI-compatible endpoint. |
Puter is completely free with no API key required. It uses Puter's user-pays model — you as a developer pay nothing. Each user covers their own AI inference through their Puter account. Users get a free usage allowance; any usage beyond that is billed to their own account, not yours.
Models available:
- `gpt-oss-120b` — 117B parameters, best quality, slower (30-60 seconds)
- `gpt-oss-120b:exacto` — precise variant for structured output
- `gpt-oss-20b` — 21B parameters, faster, slightly lower quality
Access hundreds of models through a single CORS-friendly API. Recommended when you want to use Claude, GPT, Gemini, Llama, Mistral, or other models without CORS issues. Get a key at openrouter.ai/keys.
Run models locally with no API key, no usage limits, and no data leaving your machine. The default model is gpt-oss:20b. Start with OLLAMA_ORIGINS=* for browser access from GitHub Pages.
Quick setup:
```bash
# Install
curl -fsSL https://ollama.com/install.sh | sh

# Pull model
ollama pull gpt-oss:20b

# Start with browser access
OLLAMA_ORIGINS=* ollama serve
```

Windows (PowerShell):

```powershell
ollama pull gpt-oss:20b
Stop-Process -Name ollama -Force
$env:OLLAMA_ORIGINS="*"; ollama serve
```

Connect directly to provider APIs. Get keys at:
- Anthropic: console.anthropic.com
- OpenAI: platform.openai.com
- Google: aistudio.google.com
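For the OpenAI-compatible providers in the table above (`openai`, `openrouter`, `custom`), the request shape looks roughly like this. This is a hedged sketch of the standard chat-completions wire format, not the tool's actual client code; Anthropic instead uses an `x-api-key` header and the Messages API:

```javascript
// Build a request for an OpenAI-compatible chat endpoint.
// Usage: const { url, options } = buildChatRequest(...); await fetch(url, options);
function buildChatRequest(baseUrl, apiKey, model, prompt) {
  return {
    url: `${baseUrl}/chat/completions`,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`, // bearer-token auth per the table
      },
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}
```

Remember the CORS column: calling OpenAI this way from a browser requires a proxy, while OpenRouter accepts direct browser requests.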
Prefill content or fetch from a URL via query parameters:
Content routing:
https://97115104.github.io/whodoneit/?content=Your+content+text+here
URL routing (fetches and extracts main content):
https://97115104.github.io/whodoneit/?url=https://example.com/blog-post
Add &enter to auto-analyze on page load:
https://97115104.github.io/whodoneit/?url=https://example.com/blog-post&enter
The bare `?=` format also works for content:
https://97115104.github.io/whodoneit/?=Your+content+text+here
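Building these links yourself is straightforward; a minimal sketch using the base URL and parameter names from the examples above (`encodeURIComponent` keeps arbitrary content URL-safe, equivalent to the `+` encoding shown):

```javascript
// Build a prefill link with either ?content= or ?url=, optionally
// appending &enter to auto-analyze on page load.
const BASE = "https://97115104.github.io/whodoneit/";

function shareLink({ content, url, autoAnalyze = false }) {
  const qs = content
    ? `?content=${encodeURIComponent(content)}`
    : `?url=${encodeURIComponent(url)}`;
  return BASE + qs + (autoAnalyze ? "&enter" : "");
}
```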
The tool can extract main content from blog posts, articles, and similar pages:
- Enter a URL in the "Fetch Content from URL" field
- Click Fetch to extract content into the text area, or Fetch & Analyze to extract and immediately analyze
- Supported platforms include personal blogs, Substack, Medium-style sites, WordPress, Ghost, and standard CMS platforms
The extractor:
- Uses CORS proxies for cross-origin requests
- Targets common content selectors (`article`, `.post-content`, `.entry-content`, `main`, etc.)
- Removes navigation, sidebars, scripts, ads, and other non-content elements
- Preserves paragraph structure and formatting
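The filtering idea behind the extractor can be shown with a greatly simplified sketch. The real tool queries DOM selectors; this version just strips non-content tags from an HTML string:

```javascript
// Toy extractor: drop non-content blocks, strip remaining tags,
// collapse whitespace. Illustrative only -- not the tool's extractor.
function extractText(html) {
  return html
    .replace(/<(script|style|nav|aside)\b[\s\S]*?<\/\1>/gi, "") // drop non-content blocks wholesale
    .replace(/<[^>]+>/g, " ") // strip remaining tags
    .replace(/\s+/g, " ")     // collapse whitespace
    .trim();
}
```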
The Share Content button opens a modal with two options:
- Copy link — prefills the content for the recipient
- Copy link with auto-analyze — prefills and triggers analysis automatically on load
AI detection cannot be exact. This tool provides probability estimates and pattern analysis, not definitive proof. No AI detector can achieve 100% accuracy because:
- Humans can write in ways that appear AI-like (formal, structured, repetitive)
- AI can be prompted to write in human-like styles with intentional imperfections
- Collaborative content naturally blends both characteristics
- Writing style varies enormously across individuals, languages, and contexts
Use this tool as one input among many when evaluating content authenticity. The analysis is most useful for understanding why content exhibits certain patterns, not for definitive authorship claims.
After analysis, you can create a cryptographic attestation recording the AI/human breakdown:
- Run an analysis on your content
- Enter a name for the content (e.g., "Newsletter Issue #42")
- Authorship type is automatically set based on analysis findings (Human ≤25, AI ≥75, otherwise Collaboration)
- Optionally select a signature method for stronger verification:
- No Signature — Unsigned attestation (default)
- MetaMask (Ethereum) — Sign with your Ethereum wallet via personal_sign
- Password-Derived Key — PBKDF2 + HMAC-SHA256 signature using a password
- Click Generate Attestation to create an attest.ink verification URL
- The attestation records: score, classification, human %, AI %, timestamp, and optional signature
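The threshold rule above (Human ≤25, AI ≥75, otherwise Collaboration) maps directly to code; the function name is illustrative:

```javascript
// Map a 0-100 detection score to an authorship type,
// using the thresholds stated in the step list above.
function authorshipType(score) {
  if (score <= 25) return "Human";
  if (score >= 75) return "AI";
  return "Collaboration";
}
```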
Attestations are compatible with attest.ink and can be independently verified. Signed attestations provide cryptographic proof of who created the attestation.
| Method | Security | Requirements |
|---|---|---|
| None | Basic | None |
| MetaMask | High | MetaMask extension installed |
| Password | Medium | Remember your password for verification |
For newsletters and emails, generate a formatted header showing the analysis breakdown:
- Run an analysis
- Click Copy Email Header in the attestation section
- Paste at the top of your email/newsletter
Example output:
```
=== AI Disclosure ===
Source: whodoneit (97115104.github.io/whodoneit)
Human: 76% | AI: 24%
Classification: Collab (Medium Confidence)
Model: Assisted (Human + AI)
===================
```
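Producing that header from an analysis result is simple string assembly; field names in this sketch are assumptions based on the example output, and the `Model:` line is omitted for brevity:

```javascript
// Format an AI-disclosure header from an analysis result object.
function emailHeader({ human, ai, classification, confidence }) {
  return [
    "=== AI Disclosure ===",
    "Source: whodoneit (97115104.github.io/whodoneit)",
    `Human: ${human}% | AI: ${ai}%`,
    `Classification: ${classification} (${confidence} Confidence)`,
    "===================",
  ].join("\n");
}
```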
Analyze emails by forwarding them to a special address. This requires deploying a separate API service.
- Forward a suspicious email to `analyze@your-domain.com`
- The service extracts and analyzes the forwarded content
- You receive a reply with the AI detection report
- Vercel account (free tier works)
- Email service (Postmark, SendGrid, or Mailgun - all have free tiers)
- Custom domain (optional, but recommended)
User forwards email → Email Service (Postmark) → Vercel API → Analysis → Reply email
- Create a new Vercel project
- Port the slop detector to the API
- Set up Postmark inbound webhook
- Configure environment variables
- Deploy
See `docs/email-service-whodoneit.md` for the complete implementation guide.
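The webhook step of that flow might look roughly like this. Postmark inbound webhooks POST a JSON payload with `From`, `Subject`, and `TextBody` fields; the detector function is injected as a placeholder for the ported slop detector, and the reply shape is an assumption for illustration:

```javascript
// Handle a Postmark-style inbound payload: extract the forwarded text,
// run the detector, and build a reply addressed to the forwarder.
function handleInbound(payload, runSlopDetector) {
  const text = payload.TextBody || payload.HtmlBody || "";
  const { score, classification } = runSlopDetector(text);
  return {
    to: payload.From, // reply to whoever forwarded the email
    subject: "Re: " + (payload.Subject || "your forwarded email"),
    body: `AI detection report\nScore: ${score}/100\nClassification: ${classification}`,
  };
}
```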
Use whodoneit programmatically in your own JavaScript:
```javascript
// Run analysis on text
const result = await window.whodoneitAnalyze("Your text here");
console.log(result.score, result.classification);

// Generate email header from current analysis
const header = window.whodoneitEmailHeader();
console.log(header);
```

The API uses the currently selected provider settings.
- Quality Prompts — Transform ideas into production-ready prompts
- Assess Prompts — Evaluate and score prompt quality
- attest.ink — Cryptographic attestation for content authorship
- Write Like Me — Train AI to write in your style