Description
```
File "/root/Projects/WHAC/whac/inference.py", line 234, in main
    out = whac.model(inputs_whac, targets, meta_info, 'test')
File "/root/anaconda3/envs/whac/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/root/anaconda3/envs/whac/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
File "/root/anaconda3/envs/whac/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py", line 186, in forward
    return self.gather(outputs, self.output_device)
File "/root/anaconda3/envs/whac/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py", line 203, in gather
    return gather(outputs, output_device, dim=self.dim)
File "/root/anaconda3/envs/whac/lib/python3.10/site-packages/torch/nn/parallel/scatter_gather.py", line 104, in gather
    res = gather_map(outputs)
File "/root/anaconda3/envs/whac/lib/python3.10/site-packages/torch/nn/parallel/scatter_gather.py", line 95, in gather_map
    return type(out)((k, gather_map([d[k] for d in outputs]))
File "/root/anaconda3/envs/whac/lib/python3.10/site-packages/torch/nn/parallel/scatter_gather.py", line 95, in <genexpr>
    return type(out)((k, gather_map([d[k] for d in outputs]))
File "/root/anaconda3/envs/whac/lib/python3.10/site-packages/torch/nn/parallel/scatter_gather.py", line 89, in gather_map
    return Gather.apply(target_device, dim, *outputs)
File "/root/anaconda3/envs/whac/lib/python3.10/site-packages/torch/autograd/function.py", line 598, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
File "/root/anaconda3/envs/whac/lib/python3.10/site-packages/torch/nn/parallel/_functions.py", line 56, in forward
    assert all(i.device.type != 'cpu' for i in inputs), (
AssertionError: Gather function not implemented for CPU tensors
```
How can I fix this problem? Thanks~
My Torch and CUDA version is: 2.3.1+cu118
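The assertion is raised by `nn.DataParallel` when it tries to gather the per-GPU outputs: `Gather` only supports GPU tensors, so the failure suggests the model's output dict contains at least one tensor still on the CPU. A minimal sketch of one workaround is below; the helper name `to_device` is hypothetical (not part of WHAC or PyTorch) and would be called on the output dict at the end of the model's `forward` with the replica's GPU device:

```python
import torch

def to_device(obj, device):
    """Recursively move every tensor in a nested dict/list/tuple to `device`.

    DataParallel.gather asserts that all output tensors live on a GPU,
    so a single CPU tensor hidden in the output dict triggers the
    AssertionError above. (Plain dicts, lists, and tuples only; other
    containers are returned unchanged.)
    """
    if torch.is_tensor(obj):
        return obj.to(device)
    if isinstance(obj, dict):
        return {k: to_device(v, device) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(to_device(v, device) for v in obj)
    return obj
```

Alternatively, if you are running on a single GPU anyway, you can sidestep the gather entirely by not wrapping the model in `nn.DataParallel` (or by setting `CUDA_VISIBLE_DEVICES` to one device), since the gather step is only needed to merge outputs from multiple replicas.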