
BotC Probability

This repository contains probability calculations for Blood on the Clocktower, a social deduction game. The programs analyze win probabilities for the Good and Evil teams under different game scenarios using various computational approaches, including dynamic programming and Monte Carlo simulation (which can be viewed as sampling from a Markov chain).

Computational Methods

Dynamic Programming

Dynamic Programming (DP) is an optimization technique that solves complex problems by breaking them down into simpler subproblems and storing the results to avoid redundant calculations. In this repository, dynamic programming is used to calculate exact probabilities recursively.

How it works:

  1. Memoization: Results of subproblems are cached (stored) using the @lru_cache decorator
  2. Recursive decomposition: The problem is broken into smaller, overlapping subproblems
  3. Bottom-up or top-down: Solutions are built from base cases upward, or computed recursively with caching
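
The caching effect is easy to see in isolation. Here is a tiny standalone example (Fibonacci rather than a game state, purely for illustration): naively, fib(30) makes over a million recursive calls, but with @lru_cache each distinct argument is evaluated exactly once.

from functools import lru_cache

@lru_cache(None)                # unbounded cache, as used in skip_day_1.py
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(30)
print(fib.cache_info())         # misses=31: each n in 0..30 computed exactly once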

In this codebase:

  • skip_day_1.py uses DP to calculate win probabilities for each game state (g, m) where:
    • g = number of good players alive
    • m = number of minions alive
  • Each state's probability depends on probabilities of reachable future states
  • Memoization ensures each state is calculated only once, making the algorithm efficient

Example: To calculate f(5, 1) (5 good players, 1 minion), we need f(4, 0) and f(3, 1). Those in turn need other states, but with memoization, each unique state is computed exactly once.
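
Written out with the transition probabilities (7 players are alive, so a random execution hits the minion with probability 1/7, a good player with probability 5/7, and the Demon with probability 1/7), the example expands to f(5, 1) = (1/7) × f(4, 0) + (5/7) × f(3, 1) + (1/7) × 0. The full recurrence is given under skip_day_1.py below.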

Markov Chains & Monte Carlo Simulation

A Markov Chain is a stochastic model describing a sequence of possible events where the probability of each event depends only on the state attained in the previous event (the Markov property). In Blood on the Clocktower, each game state transitions probabilistically to the next state based on random executions and kills.

Monte Carlo Simulation is a computational technique that uses random sampling to approximate probabilities. By simulating many games and counting outcomes, we can estimate win probabilities.

How it works:

  1. State representation: Each game state is defined by which players are alive
  2. Transition probabilities: From each state, we can transition to new states based on:
    • Random executions (each alive player has equal probability)
    • Demon kills (deterministic: always kills a good player)
  3. Simulation: Run many independent game simulations with random number generation
  4. Estimation: Count wins/losses to estimate probabilities
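
As a minimal illustration of steps 3 and 4, the snippet below estimates a single transition by sampling: a uniformly random execution among 3 players, where the Demon survives with probability 2/3 (the final-3 situation that appears throughout the programs below).

import random

rng = random.Random(0)                 # fixed seed for reproducibility
n_sims = 1_000_000
# A uniform execution among 3 players hits the demon 1 time in 3.
evil_wins = sum(rng.randrange(3) != 0 for _ in range(n_sims))
print(evil_wins / n_sims)              # converges to 2/3 as n_sims grows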

In this codebase:

  • markov_chain.py uses Monte Carlo simulation to estimate win probabilities
  • Each simulation follows the Markov chain of game states
  • With enough simulations (default: 1,000,000), the estimates converge to true probabilities
  • This approach is useful for validation and when exact calculation is complex

Advantages:

  • Can handle complex scenarios that are difficult to model analytically
  • Provides distributions of outcomes, not just probabilities
  • Can track multiple statistics simultaneously

Trade-offs:

  • Provides estimates, not exact values (though very accurate with many simulations)
  • Computationally intensive (requires many simulations)
  • Exact methods (like DP) are preferred when possible for precision

Programs

1. basic_probability.py

Calculates the exact probability that the Demon wins the game under a simplified model.

Game Rules Modeled:

  • N players total, with 1 Demon
  • No kill on Night 1 (first night is safe)
  • Day 1: Random execution among all players
  • Starting Night 2: Demon kills exactly 1 good player each night
  • Each Day: Random execution among all alive players, EXCEPT:
    • If a day would start with exactly 4 players alive, the town skips execution
    • The game proceeds directly to night, reducing to 3 players
  • Final 3: When exactly 3 players remain, a final execution occurs
    • Demon wins if not executed (probability = 2/3)
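
A minimal sketch of computing these rules directly with exact fractions; the function name demon_win and the recursion layout are illustrative assumptions, not necessarily how basic_probability.py itself is written.

from fractions import Fraction

def demon_win(alive):
    """Demon win probability at the start of a day with `alive` players left."""
    if alive == 3:
        return Fraction(2, 3)             # final 3: demon survives execution w.p. 2/3
    if alive == 4:
        return demon_win(3)               # skip rule: straight to night, 3 remain
    # A random execution spares the demon with probability (alive-1)/alive,
    # then the night kill removes one more player before the next day.
    return Fraction(alive - 1, alive) * demon_win(alive - 2)

print("N\tP(win)\tapprox")
for n in range(7, 16):                    # player counts 7 through 15
    p = demon_win(n)
    print(f"{n}\t{p}\t{float(p):.6f}")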

Usage:

python basic_probability.py

Output:

Prints a table showing the Demon's win probability for player counts from 7 to 15, displaying both exact fractions and decimal approximations.

Example Output:

N	P(win)		approx
7	2/3		0.666667
8	...

2. skip_day_1.py

Compares two Day 1 strategies for the Good team: executing on Day 1 versus skipping Day 1 execution. Calculates how this decision affects win probabilities for both Good and Evil teams.

Key Features:

  • Uses dynamic programming with memoization (@lru_cache) to efficiently calculate exact probabilities
  • Models the full game state including:
    • g: Number of good players alive
    • m: Number of minions alive
    • Demon: Always 1 demon (included in calculations)
  • Accounts for evil team sizes based on player count:
    • 7-9 players: 2 evil (1 demon + 1 minion)
    • 10-12 players: 3 evil (1 demon + 2 minions)
    • 13-15 players: 4 evil (1 demon + 3 minions)
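
One compact way to encode this table (a sketch — the arithmetic shortcut is an assumption, and the script may simply hard-code the mapping) is a helper that returns the starting state:

def starting_state(n_players):
    """Starting (g0, m0) for 7-15 players per the table above."""
    n_minions = (n_players - 1) // 3 - 1   # 7-9 -> 1, 10-12 -> 2, 13-15 -> 3
    n_good = n_players - 1 - n_minions     # everyone else, minus the 1 demon
    return n_good, n_minions

# starting_state(7) == (5, 1); starting_state(10) == (7, 2); starting_state(13) == (9, 3)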

Game Mechanics:

  • Executions: Random among all alive players
  • Demon kills: Only good players at night
  • Skip rule: If day starts with 4 alive, town skips to night (reduces to 3)
  • Evil wins immediately: If no good players remain (g == 0)

Usage:

python skip_day_1.py

Output:

  1. Console Table: Displays a comparison for player counts 7-15, showing:

    • Initial game state (g0, m0)
    • Evil win probability (execute vs skip)
    • Good win probability (execute vs skip)
    • Difference in Good win probability (skip - execute)
  2. CSV Export: Creates skip_day1_comparison.csv with detailed data including:

    • Exact fractional probabilities
    • Decimal approximations
    • All calculated metrics

Example Output:

N | (g0,m0) | EvilWin_execute | EvilWin_skipDay1 | GoodWin_execute | GoodWin_skipDay1 | ΔGood(skip-exec)
----------------------------------------------------------------------------------------------------------
 7 | ( 5,1) | 2/3 ≈ 0.666667 | ... | 0.333333 | ... | +0.000000
...

Dynamic Programming Implementation

The core algorithm uses dynamic programming with memoization:

Key Function: f(g, m)

Recursively calculates the exact probability that Evil wins at the start of a day, given:

  • g: Number of good players alive
  • m: Number of minions alive
  • Demon is always alive (included in total count)

DP Structure:

  • State space: All pairs (g, m) representing game states
  • Base cases:
    • If g == 0: Evil wins (probability = 1)
    • If total alive == 3: Evil wins with probability 2/3
  • Recurrence relation (for T ≥ 5):
    f(g, m) = (m/T) × f(g-1, m-1) + (g/T) × f(g-2, m) + (1/T) × 0
    
    Where T = g + m + 1 (total alive), and each term represents:
    • Execute minion: probability m/T, transitions to (g-1, m-1) (demon kills one good at night)
    • Execute good: probability g/T, transitions to (g-2, m) (demon kills one good at night)
    • Execute demon: probability 1/T, Evil loses (contributes 0)
    When T = 4 the skip rule applies instead, giving f(g, m) = f(g-1, m), and when g = 1 the execute-good branch ends the game immediately with an Evil win
  • Memoization: @lru_cache(None) ensures each state is computed exactly once

Time Complexity: O(g × m) states, each computed once, for O(g × m) total work.
Space Complexity: O(g × m) for the memoization cache.
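
A minimal sketch of this recurrence with exact fractions. It folds in the skip rule and the immediate Evil win from Game Mechanics above; skip_day_1.py's actual code may organize the cases differently.

from fractions import Fraction
from functools import lru_cache

@lru_cache(None)
def f(g, m):
    """P(Evil wins) at the start of a day with g goods and m minions alive."""
    if g <= 0:
        return Fraction(1)                     # no good players remain: Evil wins
    T = g + m + 1                              # total alive, including the demon
    if T == 3:
        return Fraction(2, 3)                  # final 3: demon survives w.p. 2/3
    if T == 4:
        return f(g - 1, m)                     # skip rule: no execution, night kill only
    return (Fraction(m, T) * f(g - 1, m - 1)   # execute a minion, then night kill
            + Fraction(g, T) * f(g - 2, m)     # execute a good, then night kill
            + Fraction(1, T) * 0)              # execute the demon: Evil loses

print(f(5, 1))                                 # 7-player start: 5 goods, 1 minion, 1 demon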


3. markov_chain.py

Uses Monte Carlo simulation to estimate win probabilities by simulating many games and counting outcomes. This approach samples from the Markov chain of game states.

Game Rules Modeled:

  • N players with appropriate evil team sizes (same as skip_day_1.py)
  • No kill on Night 1
  • Day/Night cycle:
    • Day: Random execution (unless 4 alive, then skip to night)
    • Night: Demon kills exactly 1 good player (starting Night 2)
  • Win conditions:
    • Good wins: Demon is executed
    • Evil wins:
      • All alive players are evil, OR
      • Demon survives final 3 execution (2/3 probability)

Monte Carlo Method:

  1. Simulation: Runs 1,000,000 independent game simulations per player count
  2. Random seed: Uses deterministic seeds for reproducibility
  3. State tracking: Tracks which players are alive and their roles
  4. Outcome collection: Records winner, cause of win, and final state
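
A minimal end-to-end sketch of this procedure (the function names and structure are illustrative assumptions; markov_chain.py itself also tracks win causes and final-state distributions):

import random

def simulate_game(n_good, n_minions, rng):
    """One simulated game under the rules above; returns True if Evil wins."""
    g, m = n_good, n_minions                 # the lone demon is tracked implicitly
    while True:                              # each pass = one day/night cycle
        alive = g + m + 1
        if g == 0:
            return True                      # all alive players are evil
        if alive != 4:                       # with 4 alive, skip straight to night
            executed = rng.randrange(alive)  # uniform execution among the living
            if executed < m:
                m -= 1                       # a minion was executed
            elif executed < m + g:
                g -= 1                       # a good player was executed
            else:
                return False                 # the demon was executed: Good wins
            if alive == 3:
                return True                  # demon survived the final 3 execution
            if g == 0:
                return True                  # last good player executed: Evil wins
        g -= 1                               # night: the demon kills one good player

def evil_win_rate(n_good, n_minions, n_sims=1_000_000, seed=1):
    rng = random.Random(seed)                # deterministic seed for reproducibility
    wins = sum(simulate_game(n_good, n_minions, rng) for _ in range(n_sims))
    return wins / n_sims

print(evil_win_rate(5, 1))                   # 7 players: 5 good, 1 minion, 1 demon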

Usage:

python markov_chain.py

Output:

  1. Console Summary: For each player count (7-15), displays:

    • Evil team size
    • Evil win rate (estimated probability)
    • Distribution of win causes (demon_executed, final_survive, all_evil_alive)
    • Distribution of how many evil players were alive at game end
  2. CSV Export: Creates markov_chain_results.csv with:

    • Win rates for both teams
    • Counts for each win cause
    • Distribution of evil players alive at game end

Example Output:

N= 7 | evil=2 | evil_win=0.667 | causes={'final_survive': 444444, 'demon_executed': 333333, ...} | evil_alive_end_dist={...}
...

Key Features:

  • Validation: Can be used to validate exact calculations from skip_day_1.py
  • Statistics: Provides additional statistics like win cause distributions
  • Flexibility: Easy to modify game rules and see effects
  • Convergence: With 1M simulations, the standard error of an estimated probability, sqrt(p(1-p)/n), is at most about 0.0005, so estimates are accurate to roughly ±0.001

Why Monte Carlo?

While dynamic programming gives exact results, Monte Carlo simulation:

  • Provides empirical validation of exact calculations
  • Can handle more complex scenarios that are difficult to model analytically
  • Gives distributions and additional statistics beyond just win probabilities
  • Demonstrates the Markov chain nature of the game (each state transitions probabilistically)

Requirements

  • Python 3.x
  • Standard library only (no external dependencies)

Comparison of Methods

Program | Method | Precision | Use Case
------------------------------------------------------------------------------------------------------
basic_probability.py | Direct calculation | Exact | Simple model, quick results
skip_day_1.py | Dynamic programming | Exact | Complex state space, exact probabilities needed
markov_chain.py | Monte Carlo simulation | Approximate (~±0.001) | Validation, additional statistics, complex scenarios

Notes

  • Exact calculations (basic_probability.py, skip_day_1.py): Use exact fractions (Fraction from the fractions module) for precision, displayed as both fractions and decimal approximations
  • Monte Carlo (markov_chain.py): Provides estimates that converge to true probabilities with more simulations (default: 1M simulations per player count)
  • All programs assume:
    • Random executions among all alive players
    • Demon kills only good players at night
    • Optimal play within the modeled constraints
  • The dynamic programming approach in skip_day_1.py is more efficient for exact calculations, while Monte Carlo is useful for validation and exploring complex scenarios
