This repository contains the data processing and analysis pipeline used to study dynamic social gaze behavior and associated neural activity in primates. It includes tools to detect behavioral events (e.g., fixations, saccades), extract neural responses aligned to these events, compute inter-agent behavioral metrics, and analyze population neural dynamics.
We focus on understanding how social gaze behavior unfolds over time between interacting agents and how neural signals track these interactions. The pipeline supports:
- Detection of fixations and saccades
- Classification of gaze events toward/away from faces or objects
- Extraction of peri-event neural responses (PSTHs)
- Dimensionality reduction (PCA) of neural population activity
- Calculation of cross-correlations and fixation probabilities between agents
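As an illustration of the last item, a cross-correlation between two agents' binary fixation vectors can be sketched with NumPy. The function name and signature here are hypothetical, for illustration only; they are not the pipeline's actual API:

```python
import numpy as np

def fixation_cross_correlation(fix_m1, fix_m2, max_lag):
    """Normalized circular cross-correlation between two binary
    fixation vectors (1 = fixating, 0 = not), over integer lags.

    Hypothetical helper for illustration; not part of socialgaze.
    """
    a = np.asarray(fix_m1, dtype=float)
    b = np.asarray(fix_m2, dtype=float)
    a -= a.mean()
    b -= b.mean()
    lags = np.arange(-max_lag, max_lag + 1)
    denom = a.std() * b.std() * len(a)
    # Positive lag shifts agent 2's vector forward in time.
    corr = np.array([np.sum(a * np.roll(b, lag)) / denom for lag in lags])
    return lags, corr

lags, corr = fixation_cross_correlation([0, 1, 1, 0, 0, 1],
                                        [1, 0, 1, 1, 0, 0], max_lag=2)
```

The peak of `corr` then indicates the lag at which one agent's fixations best predict the other's.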
```bash
git clone git@github.com:gprabaha/issgep.git
cd issgep
conda env create -f environment.yml
conda activate issgep
```

Example: run fixation detection for a single session:

```bash
python scripts/behav_analysis/01_fixation_detection.py --session 20230718 --run 1 --agent m1
```

The repository follows a modular structure:
- `src/socialgaze/`: Core library (functions, classes, utils)
- `scripts/`: Analysis scripts organized by topic (e.g., behavior, neural, modeling)
- `jobs/`: SLURM job scripts and array generators for HPC processing
- `data/`: Raw and processed data files
- `outputs/`: Saved results (e.g., cross-correlations, PCA results)
- Place utility functions in `src/socialgaze/utils/` when they are general-purpose.
- Use clear, testable inputs and outputs; avoid using or modifying global state.
- If function logic is tightly coupled to a module (e.g., fixations), place it in the relevant feature file (e.g., `fixation_utils.py`).
```python
# Example: utils/fixation_utils.py
def get_fixation_duration(fixation):
    return fixation["end_time"] - fixation["start_time"]
```
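A quick usage sketch of this helper (the fixation dict fields follow the example; times in seconds are an assumption):

```python
def get_fixation_duration(fixation):
    # Repeated from utils/fixation_utils.py so the snippet runs standalone.
    return fixation["end_time"] - fixation["start_time"]

# Hypothetical fixation event: a dict with start/end times in seconds.
fixation = {"start_time": 12.30, "end_time": 12.75}
print(round(get_fixation_duration(fixation), 3))  # 0.45
```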
- Add new classes to `src/socialgaze/features/` or `src/socialgaze/data/`, depending on their role.
- Inherit from `BaseConfig` for config-aware tools, or from a relevant parent class like `FixationDetector`.
- Each class should have:
  - `__init__` to load config and inputs
  - a `run()` or `execute()` method to launch its logic
  - optional `save()` and `load()` methods if state needs to be persisted
```python
import pickle

class NewAnalysisTool(BaseConfig):
    def __init__(self, config_path):
        super().__init__(config_path)
        self.results = None

    def run(self):
        self.results = ...  # Perform core logic

    def save(self, path):
        with open(path, "wb") as f:
            pickle.dump(self.results, f)
```

- All config classes live in `src/socialgaze/config/` and inherit from `BaseConfig`.
- Create a new file like `new_feature_config.py` and define your config class there.
- Store default values and control flags in the constructor.
```python
class NewFeatureConfig(BaseConfig):
    def __init__(self, config_path=None):
        super().__init__(config_path)
        self.enable_caching = True
        self.window_size = 300
```
- Scripts go into the relevant subfolder of `scripts/`:
  - Behavioral analysis → `scripts/behav_analysis/`
  - Neural analysis → `scripts/neural_analysis/`
  - Modeling → `scripts/modeling/`
- Scripts should:
  - Load the appropriate config and data
  - Call the appropriate feature class or function
  - Save output in `data/processed/` or `outputs/`
  - Be executable via command-line arguments using `argparse`
```python
import argparse

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--session", type=str)
    parser.add_argument("--run", type=int)
    args = parser.parse_args()

    config = SomeConfig()
    tool = SomeFeatureClass(config)
    tool.run(session=args.session, run=args.run)
```

- All major outputs are stored under `data/processed/` and `outputs/`.
- Binary vectors, spike data, and fixation events are serialized using `pickle`.
- Intermediate outputs (e.g., job-wise temp files) are stored in `data/processed/temp/`.
- To rerun any step, delete its corresponding output file or set `remake=True` in the config.
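The rerun behavior described above follows a load-or-recompute pattern, sketched below. The helper name, paths, and the exact check are illustrative assumptions; only the `remake` flag and pickle serialization come from the conventions above:

```python
import os
import pickle

def load_or_compute(out_path, compute_fn, remake=False):
    """Return a cached pickled result, unless the file is missing or
    remake=True, in which case recompute and re-serialize it.

    Hypothetical helper illustrating the caching convention.
    """
    if not remake and os.path.exists(out_path):
        with open(out_path, "rb") as f:
            return pickle.load(f)
    result = compute_fn()
    os.makedirs(os.path.dirname(out_path) or ".", exist_ok=True)
    with open(out_path, "wb") as f:
        pickle.dump(result, f)
    return result
```

Deleting the output file or passing `remake=True` then forces the step to rerun, matching the behavior described above.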
To run jobs on SLURM, use the job scripts in `jobs/scripts/`. Each analysis stage has a paired job script and job array file. Example:

```bash
sbatch jobs/scripts/run_fixation_job.sh
```