# Experiments (Models, Scripts, Datasets)

See also:

## Setup

Install dependencies:

```shell
cd experiments/
pip install virtualenv
virtualenv .venv
. .venv/bin/activate

# install the poetry project (defined by pyproject.toml)
pip install poetry
poetry install

# if your computer has a GPU and you want to run GPU-dependent experiments:
poetry install -E gpu

# copy the template config file for secrets and project config
#   (see experiments/config.py for more info)
cp .env.sample .env
# ^now edit the .env file as needed (e.g. add your OpenAI API key)
```
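The `.env` file holds secrets and per-machine settings read by `experiments/config.py`. The variable names below are illustrative assumptions; check `.env.sample` for the actual names the project expects:

```
# .env — illustrative contents only; .env.sample defines the real variable names
OPENAI_API_KEY=sk-...
```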
**Snellius server specific directions.** Note: for the commands below, if you're not running on a [slurm server](https://slurm.schedmd.com/overview.html), use `bash` in place of `sbatch`.

Disclaimer: the conda environment is now deprecated in favor of poetry (as shown above). The finetuning job, `tuner.job`, uses the poetry approach.

```shell
# create the conda environment
# (if it already exists, the environment is updated to be consistent with ./environment.yml)
sbatch jobs/install_env.yml

# now you can activate the conda environment:
source activate thesis
# or if on slurm:
source activate_env.sh

# not currently working:
# launch a jupyter notebook server (useful on slurm)
sbatch jobs/launch_jupyter.job
```
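The reason `bash` can stand in for `sbatch` off-cluster is that `#SBATCH` directives are plain comments to bash. A minimal sketch of such a job file (the file name and directives here are illustrative, not taken from this repo):

```shell
# write a minimal, hypothetical job file, then run it directly with bash;
# under slurm you would submit the same file with `sbatch` instead
cat > /tmp/example.job <<'EOF'
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --time=00:10:00
echo "job ran"
EOF
bash /tmp/example.job
```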

## Usage

Note: you can replace the `v4` path in the example commands below with a path to a folder containing your own `smart_goals.csv`.

```shell
# generate synthetic smart goals (into the desired directory)
mkdir -p ./data/synthetic_smart/v4
./synthetic_smart.py -o ./data/synthetic_smart/v4/ --sample-size 50 -m gpt-4-0125-preview

# generate feedback on the smart goals
# (creates/modifies feedback.xlsx in the input folder on each call)
#   you can replace 'v4' with 'example' below
./feedback.py -i ./data/synthetic_smart/v4/ -m gpt-4-0125-preview
#   do the same with GPT-3.5
./feedback.py -i ./data/synthetic_smart/v4/ -m gpt-3.5-turbo-0125

# now benchmark the generated feedback.xlsx, comparing models:
./benchmark.py -i data/synthetic_smart/v4/ -m gpt-4-0125-preview
```
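The two feedback calls above differ only in the model name, so they can be driven by a loop. A sketch (a dry run that echoes each command instead of invoking `feedback.py`; the model names and flags are taken from the commands above):

```shell
# dry run: print the feedback.py invocation for each model under comparison
DATA_DIR=./data/synthetic_smart/v4/
for MODEL in gpt-4-0125-preview gpt-3.5-turbo-0125; do
  echo ./feedback.py -i "$DATA_DIR" -m "$MODEL"
done
```

Drop the `echo` to actually run the commands.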

For the other experiments that run models locally, you may first need to run `huggingface-cli login` and enter a token from your Hugging Face account.

## Misc References