This code accompanies the paper Explainable Neuro-Inspired Representations Improve RL Performance on Visual Navigation Tasks (more details to follow after publication).
You can create the required Python environment from `environment.yml`. You additionally need to install this Miniworld fork locally.
The repository is structured into the following folders:
- `notebooks` contains code to train hSFA and PCA feature extractors, load logged data from wandb, and produce plots and tables for the paper
- `scripts` contains scripts to train and evaluate RL agents
- `transformers` contains pretrained hSFA and PCA transformers used for the paper
You can train the feature extractors with `notebooks/train_feature_extractors.ipynb`.
Make sure to rename the trained hSFA or PCA transformer by appending a suffix such as `starmaze`, `wallgap` or `fourcoloredrooms`.
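As a minimal, hypothetical sketch (the file name below is a placeholder and depends on how the notebook saves the transformer), the renaming step could look like this:

```python
from pathlib import Path

# Hypothetical sketch: append the environment name to a freshly trained
# transformer so the file matches the naming used elsewhere in the repository.
# "hsfa_transformer" is a placeholder; use the file actually produced by the notebook.
src = Path("transformers/hsfa_transformer")
src.rename(src.with_name(src.name + "_starmaze"))
```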
The resulting hSFA, PCA and CNN representations, or those generated by the provided pretrained transformers, can be analysed with `notebooks/generate_sfa_plots.ipynb`, `notebooks/generate_pca_plots.ipynb` and `notebooks/generate_cnn_plots.ipynb`. These notebooks generate the analysis plots used in the paper and can be modified to investigate additional hSFA/PCA/CNN features.
This functionality assumes a wandb account for logging.
You can train agents with the script `scripts/train_rl.py`. Make sure to adjust the entity and project name used for wandb logging.
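As a hedged pointer, wandb runs are typically configured through `wandb.init`; the snippet below only illustrates which values to adjust, and the exact place where they appear in `scripts/train_rl.py` may differ:

```python
import wandb

# Hedged sketch: replace entity and project with your own wandb account and
# project before training. The keyword names match the wandb API, but the
# corresponding settings in scripts/train_rl.py may be named differently.
run = wandb.init(
    entity="your-wandb-entity",    # your wandb user or team name
    project="your-project-name",   # the project that runs should be logged to
)
run.finish()
```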
On a headless machine, you can install xvfb and then train agents with the following command:
```bash
xvfb-run -a -s "-screen 0 1024x768x24 -ac +extension GLX +render -noreset" python train_rl.py
```
This functionality assumes a wandb account from which to load logs.
To evaluate RL agents, we provide `notebooks/generate_rl_plots_and_table.ipynb`, a notebook that creates the plots and values used in the paper. It requires log data in CSV form, which first has to be downloaded using `notebooks/wandb_to_csv.ipynb`.
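That notebook uses the wandb API; a minimal sketch of the kind of export it performs (entity, project and run id below are placeholders) is:

```python
import wandb

# Hedged sketch of exporting logged metrics from wandb to CSV.
# "your-entity/your-project/run_id" is a placeholder run path.
api = wandb.Api()
run = api.run("your-entity/your-project/run_id")
history = run.history()            # pandas DataFrame of logged metrics
history.to_csv(f"{run.id}.csv", index=False)
```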
Additionally, you can evaluate the behaviour of agents by deploying them in environments with `scripts/evaluate_agent.py`. The script expects a wandb run id, but with a small modification you can also point it at any locally saved model.
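If you prefer to fetch a saved model from a wandb run yourself, a hedged sketch using the wandb API is shown below; the file name `model.zip` is an assumption and may not match what `scripts/evaluate_agent.py` actually stores or expects:

```python
import wandb

# Hedged sketch: download a model checkpoint stored with a wandb run so it
# can be evaluated locally. "model.zip" is an assumed file name.
api = wandb.Api()
run = api.run("your-entity/your-project/run_id")
run.file("model.zip").download(replace=True)
```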
To be filled after publication
