Classification loss calculated incorrectly #1

@luisbro

Description

Hello, I am trying to fine-tune alignn to predict a custom property and used the classification example for that. However, even in the example notebook the model doesn't seem to learn anything: it predicts only label zero, and the loss is negative. After looking into it, I think the issue stems from the output of the alignn atomwise model, which is a single sigmoid value. That sigmoid value is then passed to NLLLoss (which expects log-softmax values, one per target class) instead of something like BCELoss.
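To illustrate the bug outside of alignn, here is a minimal sketch of what happens when a single sigmoid output is fed to NLLLoss:

```python
import torch
import torch.nn as nn

# NLLLoss expects log-probabilities (the output of LogSoftmax), one column
# per class. NLLLoss simply returns -input[target], so feeding it a sigmoid
# value in (0, 1) yields a negative loss, and minimizing it just pushes the
# sigmoid toward 0 -- i.e. the model always predicts label zero.
sigmoid_out = torch.sigmoid(torch.tensor([[2.0]]))  # shape (1, 1), ~0.88
targets = torch.tensor([0])
loss = nn.NLLLoss()(sigmoid_out, targets)
print(loss)  # negative, ~-0.88
```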

It seems like changing this

        if self.classification:
            self.fc = nn.Linear(config.hidden_features, 1)
            self.softmax = nn.Sigmoid()
            # self.softmax = nn.LogSoftmax(dim=1)

to something like

        if self.classification:
            self.fc = nn.Linear(config.hidden_features, 2)
            # self.softmax = nn.Sigmoid()
            self.softmax = nn.LogSoftmax(dim=-1)

should work, and is essentially what was changed in this commit. The small difference is dim=-1, which avoids another error.

I'm not sure where else this change could have side effects, so I'll leave it here as a suggestion and as help for anyone else running into this. Or is there a more recent way to fine-tune alignn for property prediction, which would explain why this bug hasn't been a problem until now?
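For reference, the suggested two-logit setup behaves as expected: LogSoftmax followed by NLLLoss is exactly CrossEntropyLoss on the raw logits, and the loss is always non-negative. A minimal sketch (the tensor shapes are illustrative, not taken from alignn):

```python
import torch
import torch.nn as nn

# Fixed setup matching the suggested change: two logits per sample,
# LogSoftmax over the class dimension, then NLLLoss. This is equivalent
# to applying CrossEntropyLoss directly to the raw logits.
logits = torch.randn(4, 2)              # batch of 4, two classes
targets = torch.tensor([0, 1, 1, 0])

log_probs = nn.LogSoftmax(dim=-1)(logits)
nll = nn.NLLLoss()(log_probs, targets)
ce = nn.CrossEntropyLoss()(logits, targets)
assert torch.allclose(nll, ce)          # same value, always >= 0
```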
