
Conversation

@SolomidHero commented Nov 15, 2020

Hi! I ran into some trouble with inference caused by the model's internal operations:

It seems that in PyTorch 1.7.0, the lengths argument (src_length) passed to torch.nn.utils.rnn.pack_padded_sequence must be on CPU, even when the model runs on CUDA: pytorch/pytorch#43227

I also tried this in PyTorch 1.6.0 to check backward compatibility, and there it works fine both with and without .cpu().
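For reference, a minimal sketch of the workaround (the tensor shapes and the LSTM here are illustrative, not the project's actual model):

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Illustrative padded batch and its true sequence lengths.
src = torch.randn(4, 10, 32, device=device)            # (batch, time, features)
src_length = torch.tensor([10, 8, 6, 3], device=device)

# Since PyTorch 1.7.0, lengths must live on CPU even when the data is on CUDA.
# Moving them explicitly is also harmless on 1.6.0.
packed = pack_padded_sequence(
    src, src_length.cpu(), batch_first=True, enforce_sorted=True
)

rnn = torch.nn.LSTM(32, 64, batch_first=True).to(device)
output, _ = rnn(packed)
output, _ = pad_packed_sequence(output, batch_first=True)
```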
