This repository was archived by the owner on Apr 8, 2026. It is now read-only.

Different implementation of activation normalization from the one described in the paper. #114

@shuohantao

Description


In the paper, actnorm is guaranteed to be invertible as long as the learned scale vector s contains no zero elements. The code implementation doesn't guarantee invertibility. I wonder why the actual implementation differs from the paper.
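To make the concern concrete, here is a minimal NumPy sketch (hypothetical illustration, not the repository's code) contrasting a direct scale parameterization, which can lose invertibility if any element of s is driven to zero during training, with a log-scale parameterization s = exp(log_s), which is strictly positive and therefore always invertible:

```python
import numpy as np

def actnorm_direct(x, s, b):
    # Paper-style affine transform: y = s * x + b.
    # Invertible only while every element of s is non-zero.
    return s * x + b

def actnorm_logscale(x, log_s, b):
    # Log-scale parameterization: s = exp(log_s) is strictly positive,
    # so the transform is invertible for any finite log_s.
    return np.exp(log_s) * x + b

def inverse_logscale(y, log_s, b):
    # Exact inverse of actnorm_logscale: x = (y - b) / exp(log_s).
    return (y - b) * np.exp(-log_s)

x = np.array([1.0, -2.0, 3.0])
log_s = np.array([0.5, -1.0, 0.0])
b = np.array([0.1, 0.2, 0.3])

y = actnorm_logscale(x, log_s, b)
x_rec = inverse_logscale(y, log_s, b)
print(np.allclose(x, x_rec))  # round trip recovers x
```

With the direct parameterization, nothing prevents an optimizer from pushing some element of s through zero, at which point the inverse (y - b) / s is undefined for that channel; the log-scale form sidesteps this by construction, at the cost of restricting s to positive values.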
