pytext.models.seq_models package

Submodules

pytext.models.seq_models.contextual_intent_slot module

class pytext.models.seq_models.contextual_intent_slot.ContextualIntentSlotModel(default_doc_loss_weight, default_word_loss_weight, *args, **kwargs)[source]

Bases: pytext.models.joint_model.IntentSlotModel

Joint Model for Intent classification and slot tagging with inputs of contextual information (sequence of utterances) and dictionary feature of the last utterance.

Training data should include:

doc_label (string): intent classification label of either the whole sequence of utterances or just the last utterance
word_label (string): slot tagging label of the last utterance, in the format start_idx:end_idx:slot_label; multiple slots are separated by commas
text (list of string): the sequence of utterances used for training
dict_feat (dict): a dict of features containing the feature of each word in the last utterance

The following is an example of the raw columns in the training data:

doc_label: reply-where
word_label: 10:20:restaurant_name
text: ["dinner at 6?", "wanna try Tomi Sushi?"]
dict_feat: {"tokenFeatList": [{"tokenIdx": 2, "features": {"poi:eatery": 0.66}},
                              {"tokenIdx": 3, "features": {"poi:eatery": 0.66}}]}
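
As a sanity check on the word_label format, here is a minimal sketch (illustrative only, not part of PyText) that parses the slot string from the example above and slices the corresponding span out of the last utterance:

    # Parse "start_idx:end_idx:slot_label" entries; multiple slots are comma-separated.
    last_utterance = "wanna try Tomi Sushi?"
    word_label = "10:20:restaurant_name"

    for slot in word_label.split(","):
        start, end, label = slot.split(":", 2)
        print(label, "->", last_utterance[int(start):int(end)])  # restaurant_name -> Tomi Sushi
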
arrange_model_inputs(tensor_dict)[source]
classmethod create_embedding(config, tensorizers)[source]
get_export_input_names(tensorizers)[source]
vocab_to_export(tensorizers)[source]

pytext.models.seq_models.seqnn module

class pytext.models.seq_models.seqnn.SeqNNModel(embedding: pytext.models.embeddings.embedding_base.EmbeddingBase, representation: pytext.models.representations.representation_base.RepresentationBase, decoder: pytext.models.decoders.decoder_base.DecoderBase, output_layer: pytext.models.output_layers.output_layer_base.OutputLayerBase)[source]

Bases: pytext.models.doc_model.DocModel

Classification model with a sequence of utterances as input. It uses a DocNN model (CNN or LSTM) to generate a vector representation for each utterance in the sequence, and then uses an LSTM or BLSTM over those representations to capture cross-utterance dynamics and produce labels for the sequence.
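
The overall shape of this pipeline can be summarized with a short, self-contained sketch (illustrative PyTorch only; the class and parameter names below are assumptions, not PyText's actual modules): a CNN encodes each utterance into a vector, a BLSTM runs over the sequence of utterance vectors, and a linear decoder produces the label.

    import torch
    import torch.nn as nn

    class SeqOfUtterancesSketch(nn.Module):
        # Illustrative sketch of the idea above, not PyText's implementation:
        # a CNN encodes each utterance, a BLSTM encodes the utterance sequence.
        def __init__(self, vocab_size, embed_dim=64, channels=64, hidden=64, num_classes=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.conv = nn.Conv1d(embed_dim, channels, kernel_size=3, padding=1)
            self.lstm = nn.LSTM(channels, hidden, batch_first=True, bidirectional=True)
            self.decoder = nn.Linear(2 * hidden, num_classes)

        def forward(self, tokens):
            # tokens: (batch, num_utterances, num_tokens) of token ids
            b, s, t = tokens.shape
            x = self.embed(tokens.view(b * s, t))       # (b*s, t, embed_dim)
            x = self.conv(x.transpose(1, 2))            # (b*s, channels, t)
            x = torch.max(x, dim=2).values              # max-pool over tokens
            x = x.view(b, s, -1)                        # one vector per utterance
            out, _ = self.lstm(x)                       # BLSTM over utterances
            return self.decoder(out[:, -1, :])          # logits for the sequence

In PyText itself these pieces correspond roughly to the embedding, representation, decoder, and output_layer arguments of the constructor above.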

arrange_model_inputs(tensor_dict)[source]
class pytext.models.seq_models.seqnn.SeqNNModel_Deprecated(embedding: pytext.models.embeddings.embedding_base.EmbeddingBase, representation: pytext.models.representations.representation_base.RepresentationBase, decoder: pytext.models.decoders.decoder_base.DecoderBase, output_layer: pytext.models.output_layers.output_layer_base.OutputLayerBase)[source]

Bases: pytext.models.model.Model

Classification model with a sequence of utterances as input. It uses a DocNN model (CNN or LSTM) to generate a vector representation for each utterance in the sequence, and then uses an LSTM or BLSTM over those representations to capture cross-utterance dynamics and produce labels for the sequence.

DEPRECATED: Use SeqNNModel instead.

Module contents