This repository contains probability calculations for Blood on the Clocktower, a social deduction game. The programs analyze win probabilities for the Good and Evil teams under different game scenarios using various computational approaches including dynamic programming and Monte Carlo simulation (which can be viewed as sampling from a Markov chain).
Dynamic Programming (DP) is an optimization technique that solves complex problems by breaking them down into simpler subproblems and storing the results to avoid redundant calculations. In this repository, dynamic programming is used to calculate exact probabilities recursively.
How it works:
- Memoization: Results of subproblems are cached (stored) using the `@lru_cache` decorator
- Recursive decomposition: The problem is broken into smaller, overlapping subproblems
- Bottom-up or top-down: Solutions are built from base cases upward, or computed recursively with caching
In this codebase:
- `skip_day_1.py` uses DP to calculate win probabilities for each game state `(g, m)`, where:
  - `g` = number of good players alive
  - `m` = number of minions alive
- Each state's probability depends on probabilities of reachable future states
- Memoization ensures each state is calculated only once, making the algorithm efficient
Example: To calculate f(5, 1) (5 goods, 1 minion), we need f(4, 0) and f(3, 1). Those in turn need other states, but with memoization, each unique state is computed exactly once.
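A minimal sketch of this caching pattern, using a simplified floating-point version of the recurrence (the repository's calculations use exact fractions; the full recurrence appears in the algorithm details further below):

```python
from functools import lru_cache

@lru_cache(None)                       # unbounded cache, keyed on (g, m)
def f(g, m):
    """Simplified Evil-win probability: g goods, m minions, 1 demon alive."""
    if g <= 0:
        return 1.0                     # no good players left: Evil wins
    total = g + m + 1
    if total == 3:
        return 2.0 / 3.0               # final-3 execution spares the demon 2/3
    if total == 4:
        return f(g - 1, m)             # skip rule: night kill, then the final 3
    p = (g / total) * f(g - 2, m)      # execute a good; the demon kills another
    if m:
        p += (m / total) * f(g - 1, m - 1)  # execute a minion; night kill follows
    return p                           # executing the demon contributes 0 to Evil

print(f(5, 1))                         # Evil-win probability from (5 goods, 1 minion)
print(f.cache_info())                  # misses = distinct states computed, each once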
A Markov Chain is a stochastic model describing a sequence of possible events where the probability of each event depends only on the state attained in the previous event (the Markov property). In Blood on the Clocktower, each game state transitions probabilistically to the next state based on random executions and kills.
Monte Carlo Simulation is a computational technique that uses random sampling to approximate probabilities. By simulating many games and counting outcomes, we can estimate win probabilities.
How it works:
- State representation: Each game state is defined by which players are alive
- Transition probabilities: From each state, we can transition to new states based on:
  - Random executions (each alive player has equal probability)
  - Demon kills (deterministic: always kills a good player)
- Simulation: Run many independent game simulations with random number generation
- Estimation: Count wins/losses to estimate probabilities
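As a minimal, self-contained illustration of this sample-and-count pattern, the snippet below estimates the final-3 rule used throughout these models: a uniform execution among 3 alive players spares the demon with probability 2/3. (Treating index 0 as the demon is an arbitrary labeling for this sketch.)

```python
import random

def estimate_final3_survival(trials=1_000_000, seed=0):
    """Monte Carlo estimate of P(demon survives the final-3 execution)."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    survived = 0
    for _ in range(trials):
        executed = rng.randrange(3)    # uniform pick among the 3 alive players
        if executed != 0:              # index 0 stands in for the demon
            survived += 1
    return survived / trials

print(estimate_final3_survival())      # converges to 2/3 as trials grows
```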
In this codebase:
- `markov_chain.py` uses Monte Carlo simulation to estimate win probabilities
- Each simulation follows the Markov chain of game states
- With enough simulations (default: 1,000,000), the estimates converge to true probabilities
- This approach is useful for validation and when exact calculation is complex
Advantages:
- Can handle complex scenarios that are difficult to model analytically
- Provides distributions of outcomes, not just probabilities
- Can track multiple statistics simultaneously
Trade-offs:
- Provides estimates, not exact values (though very accurate with many simulations)
- Computationally intensive (requires many simulations)
- Exact methods (like DP) are preferred when possible for precision
`basic_probability.py` calculates the exact probability that the Demon wins the game under a simplified model.
- N players total, with 1 Demon
- No kill on Night 1 (first night is safe)
- Day 1: Random execution among all players
- Starting Night 2: Demon kills exactly 1 good player each night
- Each Day: Random execution among all alive players, EXCEPT:
  - If a day would start with exactly 4 players alive, the town skips execution
  - The game proceeds directly to night, reducing to 3 players
- Final 3: When exactly 3 players remain, a final execution occurs
  - Demon wins if not executed (probability = 2/3)
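Because each day is fully described by the number of players alive, these rules imply a short recurrence. The sketch below is one way to encode the rules as stated; it is illustrative and may not mirror how `basic_probability.py` is actually structured:

```python
from fractions import Fraction

def demon_win(n):
    """P(Demon wins) at the start of a day with n players alive, demon included."""
    if n == 3:
        return Fraction(2, 3)          # final-3 execution spares the demon 2/3
    if n == 4:
        return demon_win(3)            # skip rule: no execution, night kill -> 3
    # The demon survives the execution with probability (n-1)/n, then the
    # night kill removes one good player, so the next day starts with n - 2.
    return Fraction(n - 1, n) * demon_win(n - 2)

for n in range(7, 16):
    p = demon_win(n)
    print(f"{n:2d}  {p}  {float(p):.6f}")
```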
```bash
python basic_probability.py
```

Prints a table showing the Demon's win probability for player counts from 7 to 15, displaying both exact fractions and decimal approximations:
```
N    P(win)    approx
7    2/3       0.666667
8    ...
```
`skip_day_1.py` compares two Day 1 strategies for the Good team: executing on Day 1 versus skipping the Day 1 execution. It calculates how this decision affects win probabilities for both the Good and Evil teams.
- Uses dynamic programming with memoization (`@lru_cache`) to efficiently calculate exact probabilities
- Models the full game state, including:
  - `g`: Number of good players alive
  - `m`: Number of minions alive
  - Demon: Always 1 demon (included in calculations)
- Accounts for evil team sizes based on player count (see the helper sketched after this list):
  - 7-9 players: 2 evil (1 demon + 1 minion)
  - 10-12 players: 3 evil (1 demon + 2 minions)
  - 13-15 players: 4 evil (1 demon + 3 minions)
- Executions: Random among all alive players
- Demon kills: Only good players at night
- Skip rule: If day starts with 4 alive, town skips to night (reduces to 3)
- Evil wins immediately: If no good players remain (g == 0)
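The team sizes above can be encoded as a small helper. This sketch is hypothetical (the name `initial_state` is not from the repository) but reproduces the `(g0, m0)` states shown in the sample output below:

```python
def initial_state(n_players):
    """Initial (g, m) for a game of n_players (valid for 7-15 players)."""
    n_evil = 2 + (n_players - 7) // 3   # 7-9: 2 evil, 10-12: 3 evil, 13-15: 4 evil
    m = n_evil - 1                      # minions: one evil slot is always the demon
    g = n_players - n_evil              # everyone else is good
    return g, m

print(initial_state(7))    # (5, 1)
print(initial_state(13))   # (9, 3)
```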
```bash
python skip_day_1.py
```

1. Console Table: Displays a comparison for player counts 7-15 showing:
   - Initial game state (g0, m0)
   - Evil win probability (execute vs skip)
   - Good win probability (execute vs skip)
   - Difference in Good win probability (skip - execute)
2. CSV Export: Creates `skip_day1_comparison.csv` with detailed data, including:
   - Exact fractional probabilities
   - Decimal approximations
   - All calculated metrics
```
N | (g0,m0) | EvilWin_execute | EvilWin_skipDay1 | GoodWin_execute | GoodWin_skipDay1 | ΔGood(skip-exec)
----------------------------------------------------------------------------------------------------------
7 | ( 5,1)  | 2/3 ≈ 0.666667  | ...              | 0.333333        | ...              | +0.000000
...
```
The core algorithm uses dynamic programming with memoization:
Key function: `f(g, m)`
Recursively calculates the exact probability that Evil wins at the start of a day, given:
- `g`: Number of good players alive
- `m`: Number of minions alive
- Demon is always alive (included in the total count)
DP Structure:
- State space: All pairs `(g, m)` representing game states
- Base cases:
  - If `g == 0`: Evil wins (probability = 1)
  - If total alive == 3: Evil wins with probability 2/3 (the final-3 execution)
  - If total alive == 4: the skip rule applies; the night kill reduces the game to 3 alive
- Recurrence relation:

  `f(g, m) = (m/T) × f(g-1, m-1) + (g/T) × f(g-2, m) + (1/T) × 0`

  where `T = g + m + 1` (total alive), and each term represents:
  - Execute minion: probability `m/T`, transitions to `(g-1, m-1)`
  - Execute good: probability `g/T`, transitions to `(g-2, m)` (the demon kills one good at night)
  - Execute demon: probability `1/T`, Evil loses (contributes 0)
- Memoization: `@lru_cache(None)` ensures each state is computed exactly once
- Time Complexity: O(g × m) states, each computed once, for O(g × m) total work
- Space Complexity: O(g × m) for the memoization cache
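Putting the base cases and recurrence together, a `Fraction`-exact sketch of `f` might look like the following. This is a reconstruction from the description above, not the repository's verbatim code; the `total == 4` branch encodes the skip rule:

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(None)
def f(g, m):
    """Exact P(Evil wins) at the start of a day: g goods, m minions, 1 demon."""
    if g <= 0:
        return Fraction(1)             # no good players remain: Evil wins
    total = g + m + 1
    if total == 3:
        return Fraction(2, 3)          # final-3 execution spares the demon 2/3
    if total == 4:
        return f(g - 1, m)             # skip rule: night kill, then the final 3
    p = Fraction(g, total) * f(g - 2, m)            # execute good; night kill
    if m:
        p += Fraction(m, total) * f(g - 1, m - 1)   # execute minion; night kill
    return p                           # executing the demon contributes 0

print(f(5, 1), "≈", float(f(5, 1)))    # Evil-win probability from (g0, m0) = (5, 1)
```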
`markov_chain.py` uses Monte Carlo simulation to estimate win probabilities by simulating many games and counting outcomes. This approach samples from the Markov chain of game states.
- N players with appropriate evil team sizes (same as `skip_day_1.py`)
- No kill on Night 1
- Day/Night cycle:
  - Day: Random execution (unless 4 players are alive, then skip to night)
  - Night: Demon kills exactly 1 good player (starting Night 2)
- Win conditions:
  - Good wins: Demon is executed
  - Evil wins:
    - All alive players are evil, OR
    - Demon survives the final-3 execution (2/3 probability)
- Simulation: Runs 1,000,000 independent game simulations per player count
- Random seed: Uses deterministic seeds for reproducibility
- State tracking: Tracks which players are alive and their roles
- Outcome collection: Records winner, cause of win, and final state
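A minimal sketch of one simulated game under these rules (a reconstruction for illustration; names like `simulate_game` are not taken from `markov_chain.py`):

```python
import random

def simulate_game(g, m, rng):
    """Play one game: g goods, m minions, 1 demon. Return True if Evil wins."""
    while True:
        total = g + m + 1
        if g <= 0:
            return True                    # all alive players are evil
        if total == 3:
            return rng.randrange(3) != 0   # final-3 execution spares the demon 2/3
        if total == 4:
            g -= 1                         # skip rule: no execution, night kill
            continue
        executed = rng.randrange(total)    # uniform execution; slot 0 is the demon
        if executed == 0:
            return False                   # demon executed: Good wins
        if executed <= m:
            m -= 1                         # a minion was executed
        else:
            g -= 1                         # a good player was executed
        if g > 0:
            g -= 1                         # night: demon kills one good player

rng = random.Random(42)                    # deterministic seed for reproducibility
trials = 100_000
evil_wins = sum(simulate_game(5, 1, rng) for _ in range(trials))
print(evil_wins / trials)                  # estimated Evil-win rate for 7 players
```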
```bash
python markov_chain.py
```

1. Console Summary: For each player count (7-15), displays:
   - Evil team size
   - Evil win rate (estimated probability)
   - Distribution of win causes (demon_executed, final_survive, all_evil_alive)
   - Distribution of how many evil players were alive at game end
2. CSV Export: Creates `markov_chain_results.csv` with:
   - Win rates for both teams
   - Counts for each win cause
   - Distribution of evil players alive at game end
```
N= 7 | evil=2 | evil_win=0.667 | causes={'final_survive': 444444, 'demon_executed': 333333, ...} | evil_alive_end_dist={...}
...
```
- Validation: Can be used to validate the exact calculations from `skip_day_1.py`
- Statistics: Provides additional statistics, like win-cause distributions
- Flexibility: Easy to modify game rules and see effects
- Convergence: With 1M simulations, estimates are accurate to ~0.001 (the standard error of an estimated proportion is √(p(1−p)/n), about 0.0005 near p = 0.5 with n = 10⁶, so ±0.001 is roughly a two-standard-error bound)
While dynamic programming gives exact results, Monte Carlo simulation:
- Provides empirical validation of exact calculations
- Can handle more complex scenarios that are difficult to model analytically
- Gives distributions and additional statistics beyond just win probabilities
- Demonstrates the Markov chain nature of the game (each state transitions probabilistically)
- Python 3.x
- Standard library only (no external dependencies)
| Program | Method | Precision | Use Case |
|---|---|---|---|
| `basic_probability.py` | Direct calculation | Exact | Simple model, quick results |
| `skip_day_1.py` | Dynamic Programming | Exact | Complex state space, exact probabilities needed |
| `markov_chain.py` | Monte Carlo Simulation | Approximate (~0.001) | Validation, additional statistics, complex scenarios |
- Exact calculations (`basic_probability.py`, `skip_day_1.py`): Use exact fractions (`Fraction` from the `fractions` module) for precision, displayed as both fractions and decimal approximations
- Monte Carlo (`markov_chain.py`): Provides estimates that converge to the true probabilities with more simulations (default: 1M simulations per player count)
- All programs assume:
  - Random executions among all alive players
  - Demon kills only good players at night
  - Optimal play within the modeled constraints
- The dynamic programming approach in `skip_day_1.py` is more efficient for exact calculations, while Monte Carlo is useful for validation and for exploring complex scenarios