This repository contains the Python replication code and data for the paper:
**Tacit Algorithmic Collusion: A Replication Study of Calvano et al. (American Economic Review, 2020)**

> Submitted to the Journal of Comments and Replications in Economics
This code reproduces and validates the findings of Calvano et al. (2020) regarding reinforcement learning-based pricing algorithms and tacit collusion. It translates the original Fortran implementation into Python and extends it with asymmetric competition analysis.
```
├── Code/
│   ├── analyze_policy_variance.py    # Analysis of policy convergence variance
│   ├── combined_run_replication.py   # Main script to run all replications
│   ├── multiple_firms.py             # Extension: analysis with >2 firms
│   ├── multiple_sessions.py          # Parallel processing logic for simulations
│   ├── pricing_environment.py        # Core environment logic (demand, profits)
│   ├── Q_Learning_Calvano.py         # Core Q-learning agent implementation
│   ├── replication_figure_1.py       # Generates Figure 1 (heatmap)
│   ├── replication_figure_3.py       # Generates Figure 3 (discount factor)
│   ├── replication_figure_4.py       # Generates Figure 4 (impulse response)
│   ├── replication_figure_10.py      # Generates Figure 10 (price evolution)
│   └── run_asymmetric_analysis.py    # Script for asymmetric cost/quality analysis
├── Data/
│   ├── asymmetry_summary_results.csv    # Results from Table 4 replication
│   ├── fig4_data.npz                    # Pre-computed simulation data for Figure 4
│   ├── list_alphas_betas.csv            # Data for Figure 1 heatmap
│   └── multiple_deltas_5000000_100.csv  # Data for Figure 3
├── Figures/
│   ├── Delta_vs_Cost_Asymmetry_with_CI_5000000.svg  # Asymmetric cost results
│   ├── figure_4_replication_from_saved.svg          # Impulse response graph
│   ├── figure_10_1000.svg                           # Price evolution graph
│   ├── Heatmap_Alphas_Betas.svg                     # Heatmap of learning parameters
│   └── multiple_deltas.svg                          # Discount factor analysis graph
├── .gitignore
├── license
├── README.md
└── requirements.txt
```
- Python 3.8+
- NumPy
- Pandas
- Matplotlib
- Seaborn
- tqdm
Install dependencies using:

```bash
pip install -r requirements.txt
```

The `pricing_environment.py` module implements the Bertrand competition setting from the paper:
- Configurable number of firms (default is 2)
- Logit demand system
- Discrete price space with 15 values
- Extension: Now supports asymmetric firms by accepting lists for costs and qualities.
- Extension: Automatically computes both symmetric and asymmetric Nash and joint-profit-maximizing price benchmarks.
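As an illustration of the logit demand system, here is a minimal sketch using the paper's baseline values (product quality a = 2, outside option a0 = 0, horizontal differentiation μ = 0.25, marginal cost c = 1); function and parameter names are this example's own, not necessarily those used in `pricing_environment.py`:

```python
import numpy as np

def logit_demand(prices, a=2.0, a0=0.0, mu=0.25):
    """Logit demand shares for each firm given a vector of prices.

    a: product quality (scalar or per-firm array), a0: outside option,
    mu: index of horizontal differentiation. Values follow the
    Calvano et al. (2020) baseline but are illustrative here.
    """
    prices = np.asarray(prices, dtype=float)
    utilities = np.exp((a - prices) / mu)
    return utilities / (utilities.sum() + np.exp(a0 / mu))

def profits(prices, cost=1.0, **kw):
    """Per-firm profit: (price - cost) * demand share."""
    return (np.asarray(prices, dtype=float) - cost) * logit_demand(prices, **kw)
```

Passing per-firm arrays for `a` or `cost` gives the asymmetric variants described above.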
`Q_Learning_Calvano.py` implements the reinforcement learning algorithm with the exact parameters from the paper:
- Standard Q-learning update rule
- State space as described in Section II.B
- Epsilon-greedy and Boltzmann exploration strategies.
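The core update can be sketched as follows (a minimal illustration, not the full agent; the time-declining epsilon-greedy schedule ε_t = e^(−βt) follows the paper's description, but variable and function names are this sketch's own):

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 225, 15          # 15 prices, one-period memory of both firms' prices
alpha, delta, beta = 0.15, 0.95, 4e-6  # learning rate, discount factor, exploration decay

Q = np.zeros((n_states, n_actions))

def choose_action(state, t):
    # Time-declining epsilon-greedy exploration: eps_t = exp(-beta * t)
    if rng.random() < np.exp(-beta * t):
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state]))

def q_update(state, action, reward, next_state):
    # Standard Q-learning: move Q(s, a) toward r + delta * max_a' Q(s', a')
    target = reward + delta * Q[next_state].max()
    Q[state, action] += alpha * (target - Q[state, action])
```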
A minimal usage example:

```python
from pricing_environment import PricingEnvironment
from Q_Learning_Calvano import TabularQLearningAgent

# Initialize environment
env = PricingEnvironment(num_firms=2)

# Initialize agents
agents = [TabularQLearningAgent(agent_id=i, env=env) for i in range(2)]

# Training loop
for episode in range(1_500_000):  # As per paper specifications
    state = env.reset()
    done = False
    while not done:
        actions = [agent.get_action(state) for agent in agents]
        next_state, rewards, done = env.step(actions)
        for agent, action, reward in zip(agents, actions, rewards):
            agent.update(state, action, reward, next_state)
        state = next_state
```

To replicate all figures in one run:
```bash
cd Code
python combined_run_replication.py
```

To replicate Figure 1 (profit gain heatmap):

```bash
python replication_figure_1.py
```

To replicate Figure 3 (discount factor analysis):

```bash
python replication_figure_3.py
```

To replicate Figure 4 (impulse response):

```bash
python replication_figure_4.py
```

To replicate Figure 10 (price evolution):

```bash
python replication_figure_10.py
```

The `combined_run_replication.py` script generates all necessary data and figures in the Figures/ and Data/ directories. Below is a map from the figures in the paper to the generated file names:
| Paper Figure/Table | Generating Script | Output File Name |
|---|---|---|
| Figure 1 (Profit Gain Heatmap) | replication_figure_1.py | Figures/Heatmap_Alphas_Betas.svg |
| Figure 3 (Discount Factor) | replication_figure_3.py | Figures/multiple_deltas.svg |
| Figure 4 (Impulse Response) | replication_figure_4.py | Figures/figure_4_replication_from_saved.svg |
| Figure 10 (Price Evolution) | replication_figure_10.py | Figures/figure_10_1000.svg |
| Table A2 (Asymmetric Costs) | run_asymmetric_analysis.py | Data/asymmetry_summary_results.csv |
Because the full simulation (5,000,000 periods per session) takes significant computational time (approx. 2-5 days depending on hardware), this repository includes the final generated data and figures in the Data/ and Figures/ directories.
This allows users to:
- Inspect the results immediately without running the long simulation.
- Verify the plotting scripts (e.g., `replication_figure_4.py`) using the pre-computed `.npz` data.
Note: Running the combined_run_replication.py script will overwrite these files.
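For example, the pre-computed Figure 4 data can be inspected directly with NumPy (a sketch; the array names stored inside the archive are not documented here, so the snippet simply lists them):

```python
import numpy as np
from pathlib import Path

path = Path("Data/fig4_data.npz")
if path.exists():
    # List every array stored in the archive along with its shape.
    data = np.load(path)
    for name in data.files:
        print(name, data[name].shape)
else:
    print("Pre-computed data not found; run combined_run_replication.py first.")
```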
All parameters are set to match those in the paper:
- Number of firms: 2
- Price range: [0.5, 1.5] × monopoly price
- Demand function: Unit mass of consumers with logit demand
- Market size: 1.0
- Consumer preferences: As specified in Section II.A
- Learning rate (α): 0.15
- Discount factor (δ): 0.95
- Exploration decay (β): 4e-6
- Training episodes: 1.5M
- Convergence threshold: As specified in Section III.A
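Collected as a single configuration sketch (the field names are this example's own, not necessarily those used in the scripts):

```python
# Baseline configuration matching the paper; names are illustrative.
CONFIG = {
    "num_firms": 2,
    "num_prices": 15,           # discrete price grid
    "learning_rate": 0.15,      # alpha
    "discount_factor": 0.95,    # delta
    "exploration_decay": 4e-6,  # beta in eps_t = exp(-beta * t)
    "episodes": 1_500_000,
}
```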
The code generates:
- Price trajectories matching Figure 1
- Profit dynamics matching Figure 3
- Impulse-Response Dynamics matching Figure 4
- Convergence statistics as reported in Section III
- Best-response analysis as in Section IV
If you use this replication package, please cite the original paper:
```bibtex
@article{calvano2020artificial,
  title={Artificial intelligence, algorithmic pricing, and collusion},
  author={Calvano, Emilio and Calzolari, Giacomo and Denicol{\`o}, Vincenzo and Pastorello, Sergio},
  journal={American Economic Review},
  volume={110},
  number={10},
  pages={3267--97},
  year={2020}
}
```

This project is released under the MIT License (see the license file).
I acknowledge the use of Claude AI by Anthropic (claude.ai), which helped me understand the authors' original Fortran code and assisted during the programming of this replication.
For questions about the replication package, please open a GitHub issue.