Description
Hello,
I'm currently using your code from the repository [insert repository name] with my own dataset, but I'm encountering an IndexError during the training phase. Below is the traceback I received:
[ Fri Aug 16 10:18:36 2024 ] Parameters:
{'work_dir': './work_dir/baseline_res18/', 'config': './configs/baseline.yaml', 'random_fix': True, 'device': '3', 'phase': 'train', 'save_interval': 5, 'random_seed': 0, 'eval_interval': 1, 'print_log': True, 'log_interval': 50, 'evaluate_tool': 'sclite', 'feeder': 'dataset.dataloader_video.BaseFeeder', 'dataset': 'QSLR2024', 'dataset_info': {'dataset_root': './dataset/QSLR2024', 'dict_path': './preprocess/QSLR2024/gloss_dict.npy', 'evaluation_dir': './evaluation/slr_eval', 'evaluation_prefix': 'QSLR2024-groundtruth'}, 'num_worker': 10, 'feeder_args': {'mode': 'test', 'datatype': 'video', 'num_gloss': -1, 'drop_ratio': 1.0, 'prefix': './dataset/QSLR2024', 'transform_mode': False}, 'model': 'slr_network.SLRModel', 'model_args': {'num_classes': 65, 'c2d_type': 'resnet18', 'conv_type': 2, 'use_bn': 1, 'share_classifier': False, 'weight_norm': False}, 'load_weights': None, 'load_checkpoints': None, 'decode_mode': 'beam', 'ignore_weights': [], 'batch_size': 2, 'test_batch_size': 8, 'loss_weights': {'SeqCTC': 1.0}, 'optimizer_args': {'optimizer': 'Adam', 'base_lr': 0.0001, 'step': [20, 35], 'learning_ratio': 1, 'weight_decay': 0.0001, 'start_epoch': 0, 'nesterov': False}, 'num_epoch': 30}
0%| | 0/162 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/raid/data/m33221012/VAC_CSLR_QSLR/main.py", line 213, in
processor.start()
File "/raid/data/m33221012/VAC_CSLR_QSLR/main.py", line 44, in start
seq_train(self.data_loader['train'], self.model, self.optimizer,
File "/raid/data/m33221012/VAC_CSLR_QSLR/seq_scripts.py", line 18, in seq_train
for batch_idx, data in enumerate(tqdm(loader)):
File "/home/m33221012/miniconda3/envs/py31012/lib/python3.10/site-packages/tqdm/std.py", line 1181, in iter
for obj in iterable:
File "/home/m33221012/miniconda3/envs/py31012/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 630, in next
data = self._next_data()
File "/home/m33221012/miniconda3/envs/py31012/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1344, in _next_data
return self._process_data(data)
File "/home/m33221012/miniconda3/envs/py31012/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1370, in _process_data
data.reraise()
File "/home/m33221012/miniconda3/envs/py31012/lib/python3.10/site-packages/torch/_utils.py", line 706, in reraise
raise exception
IndexError: Caught IndexError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/m33221012/miniconda3/envs/py31012/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 309, in _worker_loop
data = fetcher.fetch(index) # type: ignore[possibly-undefined]
File "/home/m33221012/miniconda3/envs/py31012/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/m33221012/miniconda3/envs/py31012/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 52, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/raid/data/m33221012/VAC_CSLR_QSLR/dataset/dataloader_video.py", line 48, in getitem
input_data, label = self.normalize(input_data, label)
File "/raid/data/m33221012/VAC_CSLR_QSLR/dataset/dataloader_video.py", line 80, in normalize
video, label = self.data_aug(video, label, file_id)
File "/raid/data/m33221012/VAC_CSLR_QSLR/utils/video_augmentation.py", line 24, in call
image = t(image)
File "/raid/data/m33221012/VAC_CSLR_QSLR/utils/video_augmentation.py", line 119, in call
if isinstance(clip[0], np.ndarray):
IndexError: list index out of range
It seems the failure happens inside video_augmentation.py when clip[0] is accessed, which suggests the clip list is empty for at least one sample, i.e. no frames were loaded for that video. So I suspect the problem lies in my input data layout or in how the data reaches the augmentation pipeline, rather than in the augmentation code itself.
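To narrow it down, I tried indexing the feeder directly in a single process, bypassing the DataLoader workers (a minimal sketch; the keyword arguments mirror the feeder_args printed above and the gloss_dict loading is my assumption, so both may need adjusting to match the actual constructor):

import numpy as np
from dataset.dataloader_video import BaseFeeder

# Assumption: gloss_dict.npy stores a pickled dict, as dict_path in the
# config suggests; adjust if the file is saved differently.
gloss_dict = np.load("./preprocess/QSLR2024/gloss_dict.npy",
                     allow_pickle=True).item()

# Keyword arguments copied from the feeder_args in the printed parameters;
# the names are a guess and may not match the real signature.
feeder = BaseFeeder(prefix="./dataset/QSLR2024",
                    gloss_dict=gloss_dict,
                    mode="test", datatype="video",
                    num_gloss=-1, drop_ratio=1.0,
                    transform_mode=False)

for idx in range(len(feeder)):
    try:
        feeder[idx]  # runs the same normalize/data_aug path as training
    except IndexError:
        print("IndexError at sample index", idx)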
Since I'm using my own dataset, could you please let me know what specific adjustments or preprocessing steps are necessary to make it compatible with your code? I've also sketched a quick directory check below to rule out empty frame folders on my side. Additionally, is there any chance this error is related to hardware settings, such as GPU configuration or memory limitations?
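Here is the directory check mentioned above (the frames root and the .jpg pattern are assumptions about how I've laid out my own data, not the layout your repo requires):

import glob
import os

# Assumed layout: one sub-directory of frame images per video sample
# under the dataset prefix. Adjust FRAMES_ROOT and the glob pattern
# to match the actual dataset structure.
FRAMES_ROOT = "./dataset/QSLR2024/features/test"

empty = []
for sample_dir in sorted(glob.glob(os.path.join(FRAMES_ROOT, "*"))):
    if not glob.glob(os.path.join(sample_dir, "*.jpg")):
        empty.append(sample_dir)

print(len(empty), "sample folder(s) with no readable frames")
for d in empty[:10]:
    print("  ", d)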
Any advice on how to resolve this error and properly integrate my dataset would be greatly appreciated.
Thank you in advance for your help!