Hello, I wanted to try your fork of SadTalker in Colab, but I keep getting this error:
```
--------device----------- cpu
Traceback (most recent call last):
  File "inference.py", line 217, in <module>
    main(args)
  File "inference.py", line 43, in main
    preprocess_model = CropAndExtract(sadtalker_paths, device)
  File "/content/xtalker/src/utils/preprocess.py", line 49, in __init__
    self.propress = Preprocesser(device)
  File "/content/xtalker/src/utils/croper.py", line 22, in __init__
    self.predictor = KeypointExtractor(device)
  File "/content/xtalker/src/face3d/extract_kp_videos_safe.py", line 28, in __init__
    self.detector = init_alignment_model('awing_fan', device=device, model_rootpath=root_path)
  File "/usr/local/lib/python3.8/dist-packages/facexlib/alignment/__init__.py", line 19, in init_alignment_model
    model.load_state_dict(torch.load(model_path)['state_dict'], strict=True)
  File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 713, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 930, in _legacy_load
    result = unpickler.load()
  File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 876, in persistent_load
    wrap_storage=restore_location(obj, location),
  File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 175, in default_restore_location
    result = fn(storage, location)
  File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 152, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 136, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
```
I followed the instructions and tried both the int8 branch and the main branch. I spent hours trying to make it work, with and without a GPU, but had no success. I also tried adding map_location=torch.device('cpu') to every torch.load call in inference.py.
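For what it's worth, the traceback shows the failing torch.load is inside the installed facexlib package (init_alignment_model), not in inference.py, which would explain why editing inference.py alone didn't help. One workaround I considered is replacing torch.load globally before any model is constructed, e.g. at the top of inference.py:

    import torch, functools
    torch.load = functools.partial(torch.load, map_location=torch.device('cpu'))

This is only a sketch of the idea, not something from the repo. Below is the same wrapping pattern demonstrated on a stand-in function (so it runs without torch installed); every later caller, including third-party code, then picks up the forced keyword argument:

```python
import functools

def load(path, map_location=None):
    """Stand-in for torch.load: records which device it was asked to map to.

    In the real case, torch.load raises the CUDA deserialization error
    whenever map_location is left unset on a CPU-only machine.
    """
    return {"path": path, "map_location": map_location or "cuda"}

# Rebind the name so every future call is forced to map to CPU,
# mirroring `torch.load = functools.partial(torch.load, map_location=...)`.
load = functools.partial(load, map_location="cpu")

print(load("checkpoint.pth")["map_location"])  # prints: cpu
```

I haven't verified this against the fork, so treat it as an assumption about where the fix needs to go.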
I changed the installed Python version from 3.10 to 3.8 because SadTalker requires it. I believe I also tried with Python 3.10, but I doubt the problem is the Python version.
Thanks in advance.