Semantic parsing with sequence-to-sequence models

Introduction

PyText provides an encoder-decoder framework that is suitable for any task that requires mapping a sequence of input tokens to a sequence of output tokens. The default implementation is based on recurrent neural networks (RNNs), which have proven highly effective at sequence transduction tasks. It comprises three major components, sketched in code after the list:

  1. A bidirectional LSTM sequence encoder
  2. An LSTM sequence decoder
  3. A sequence generator that supports incremental decoding and beam search
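
To make the architecture concrete, here is a minimal, self-contained PyTorch sketch of the three components. It is illustrative only: the class names, hyperparameters, and special-token ids (BOS, EOS) are hypothetical rather than PyText's actual API, and greedy decoding stands in for the full beam-search generator.

    import torch
    import torch.nn as nn

    BOS, EOS = 1, 2  # hypothetical special-token ids


    class BiLSTMEncoder(nn.Module):
        """Component 1: a bidirectional LSTM over the source tokens."""

        def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim,
                                batch_first=True, bidirectional=True)

        def forward(self, src_tokens):
            # src_tokens: (batch, src_len); final states: (2, batch, hidden_dim)
            out, (h, c) = self.lstm(self.embed(src_tokens))
            # Concatenate the two directions so the decoder receives a
            # single state of size 2 * hidden_dim.
            h = torch.cat([h[0], h[1]], dim=-1).unsqueeze(0)
            c = torch.cat([c[0], c[1]], dim=-1).unsqueeze(0)
            return h, c


    class LSTMDecoder(nn.Module):
        """Component 2: a unidirectional LSTM seeded with the encoder state.
        Note: hidden_dim must equal 2x the encoder's hidden_dim."""

        def __init__(self, vocab_size, embed_dim=128, hidden_dim=512):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.out_proj = nn.Linear(hidden_dim, vocab_size)

        def forward(self, prev_tokens, state):
            # Threading `state` through successive calls is what makes the
            # decoding loop below incremental.
            hidden, state = self.lstm(self.embed(prev_tokens), state)
            return self.out_proj(hidden), state


    @torch.no_grad()
    def generate(encoder, decoder, src_tokens, max_len=32):
        """Component 3, simplified: greedy decoding in place of beam search."""
        state = encoder(src_tokens)
        step_input = torch.full((src_tokens.size(0), 1), BOS, dtype=torch.long)
        generated = []
        for _ in range(max_len):
            logits, state = decoder(step_input, state)
            step_input = logits.argmax(dim=-1)  # (batch, 1): best next token
            generated.append(step_input)
            if (step_input == EOS).all():
                break
        return torch.cat(generated, dim=1)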

All of these components are TorchScript-friendly, so a trained model can be exported directly, without modification. Following the general design of PyText, each component can be customized via its config object or replaced entirely by a custom implementation.
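
As a rough illustration of what TorchScript export looks like, any scriptable module can be compiled and serialized with torch.jit. The snippet below uses the toy encoder from the sketch above, not PyText's own export path, and the file name is arbitrary:

    import torch

    encoder = BiLSTMEncoder(vocab_size=100)
    scripted = torch.jit.script(encoder)   # compile the module to TorchScript
    scripted.save("encoder_scripted.pt")   # serialized, Python-free artifact
    loaded = torch.jit.load("encoder_scripted.pt")  # runnable for inference

Serializing with torch.jit.script (rather than tracing) preserves control flow such as the incremental decoding loop, which is what makes the exported generator usable in a non-Python serving environment.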