
HRG-SSA

PyTorch implementation for the paper:

Hongru Ji, Xianghua Li, Mingxin Li, Meng Zhao, Chao Gao. Hybrid Relational Graphs with Sentiment-laden Semantic Alignment for Multimodal Emotion Recognition in Conversation. Proceedings of the 34th International Joint Conference on Artificial Intelligence (IJCAI-25), 2025, pp. 2973-2981.

Framework

Illustration of HRG-SSA framework

Requirements

pip install -r requirements.txt

Dataset

The raw data can be found at IEMOCAP and MELD.

In our paper, we use pre-extracted features. The multimodal features are available at IEMOCAP and MELD.
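
The exact layout of the feature files depends on the download above. A minimal sketch for inspecting them before wiring them into the data loader (the file name iemocap_features.pkl is hypothetical; the features are assumed to be pickled, which is common for such releases):

import pickle

# Hypothetical file name; substitute the actual file from the feature download
with open("iemocap_features.pkl", "rb") as f:
    features = pickle.load(f)

# Inspect the top-level structure before building the data loader
print(type(features))
if isinstance(features, dict):
    print(list(features.keys())[:5])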

Results on IEMOCAP and MELD


Pretrained T5 Download

Before training, download the parameters of the T5-base pre-trained model into the pretrained_model directory.
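
If you use Hugging Face Transformers, one way to do this is to fetch the weights from the hub and save them locally. A minimal sketch, assuming the t5-base checkpoint from the Hugging Face hub matches what the code expects:

from transformers import T5ForConditionalGeneration, T5Tokenizer

# Fetch T5-base from the Hugging Face hub (the tokenizer requires the sentencepiece package)
tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Save both into the directory passed to -backbone below
tokenizer.save_pretrained("./pretrained_model")
model.save_pretrained("./pretrained_model")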

Training examples

Results may vary with training machines and random seeds (see the seed-fixing sketch after the commands below).

To train on IEMOCAP:

python main.py -backbone ./pretrained_model -run_type train -dataset iemocap -use_gat -window_size 8 -gat 1 -emotion_first -use_video_mode -use_audio_mode

To train on MELD:

python main.py -backbone ./pretrained_model -run_type train -dataset meld -use_gat -emotion_first -use_video_mode -use_audio_mode
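
For more repeatable runs, the random seeds can be fixed before training starts. A minimal sketch, assuming the repository does not already expose its own seed option:

import random
import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    # Seed Python, NumPy, and PyTorch (CPU and all GPUs)
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade speed for determinism in cuDNN kernels
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(42)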

Predict and Checkpoints

We provide the pre-trained checkpoint for IEMOCAP here, and the checkpoint for MELD here.

To predict on IEMOCAP:

python main.py -run_type predict -ckpt ./iemocap-best-model/ckpt -output predict_real.json -dataset iemocap -test_batch_size=64

To predict on MELD:

python main.py -run_type predict -ckpt ./meld-best-model/ckpt -output predict_real.json -dataset meld -test_batch_size=64
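
The predictions are written to the file named by -output. A minimal sketch for loading the result (the structure of the JSON is an assumption; inspect it before any post-processing):

import json

with open("predict_real.json", "r", encoding="utf-8") as f:
    predictions = json.load(f)

# Inspect the top-level structure first
print(type(predictions))
if isinstance(predictions, list) and predictions:
    print(predictions[0])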
