description: An NLP tokenization algorithm that is a trainable layer for neural networks.

Introduction

This is the documentation for a package built around an experimental "tokenization layer": a tokenization algorithm implemented as a neural network layer. It is trained jointly with a model on an NLP task, so that it learns to produce the tokens best suited to that task.
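To make the idea concrete, here is a minimal, hypothetical sketch of such a layer in pure Python: a module with one learnable boundary score per character, trained by gradient descent to decide where token breaks go. The class name, the per-character parameterization, and the toy cross-entropy objective are all illustrative assumptions, not this package's actual API; in the real concept the training signal would come from the downstream task's loss.

```python
import math

class TokenizationLayer:
    """Hypothetical sketch: a tokenizer whose boundaries are learned parameters."""

    def __init__(self, vocab):
        # One learnable boundary score per character, initialized to zero.
        self.weights = {ch: 0.0 for ch in vocab}

    def boundary_probs(self, text):
        # Sigmoid of the learned score: probability of a token break
        # after each character.
        return [1.0 / (1.0 + math.exp(-self.weights[ch])) for ch in text]

    def tokenize(self, text, threshold=0.5):
        # Hard segmentation for inspection; training would use the soft
        # probabilities so gradients can flow through them.
        tokens, current = [], ""
        for ch, p in zip(text, self.boundary_probs(text)):
            current += ch
            if p > threshold:
                tokens.append(current)
                current = ""
        if current:
            tokens.append(current)
        return tokens

    def train_step(self, text, target_breaks, lr=1.0):
        # Toy objective: binary cross-entropy against known break positions.
        # Gradient of BCE w.r.t. the pre-sigmoid score is (p - target).
        probs = self.boundary_probs(text)
        for i, (ch, p) in enumerate(zip(text, probs)):
            target = 1.0 if i in target_breaks else 0.0
            self.weights[ch] -= lr * (p - target)

layer = TokenizationLayer(set("hello world"))
for _ in range(50):
    # Teach the layer to break after the space (index 5).
    layer.train_step("hello world", target_breaks={5})
print(layer.tokenize("hello world"))  # → ['hello ', 'world']
```

The point of the sketch is only that the segmentation is driven by trainable parameters rather than fixed rules, which is what lets a model learn task-specific tokens.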

Note, however, that this is only a proof of concept; in its current state, this layer should not be used for any real tasks.