pytext.models.decoders package

Submodules

pytext.models.decoders.decoder_base module

class pytext.models.decoders.decoder_base.DecoderBase(config: pytext.config.pytext_config.ConfigBase)[source]

Bases: pytext.models.module.Module

Base class for all decoder modules.

Parameters:config (ConfigBase) – Configuration object.
in_dim

Dimension of input Tensor passed to the decoder.

Type:int
out_dim

Dimension of output Tensor produced by the decoder.

Type:int
forward(*input)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_decoder()[source]

Returns the decoder module.

get_in_dim() → int[source]

Returns the dimension of the input Tensor that the decoder accepts.

get_out_dim() → int[source]

Returns the dimension of the output Tensor that the decoder emits.
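
The DecoderBase contract above amounts to exposing an input and an output dimension alongside the forward computation. The following is a minimal sketch of that contract in plain Python; SketchDecoder is a hypothetical stand-in, not PyText's implementation, and real subclasses would also inherit from pytext.models.module.Module:

```python
class SketchDecoder:
    """Hypothetical illustration of the DecoderBase accessor contract."""

    def __init__(self, in_dim: int, out_dim: int):
        self.in_dim = in_dim    # dimension of the input Tensor the decoder accepts
        self.out_dim = out_dim  # dimension of the output Tensor the decoder emits

    def get_in_dim(self) -> int:
        return self.in_dim

    def get_out_dim(self) -> int:
        return self.out_dim


dec = SketchDecoder(in_dim=256, out_dim=7)
print(dec.get_in_dim(), dec.get_out_dim())  # 256 7
```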

pytext.models.decoders.intent_slot_model_decoder module

class pytext.models.decoders.intent_slot_model_decoder.IntentSlotModelDecoder(config: pytext.models.decoders.intent_slot_model_decoder.IntentSlotModelDecoder.Config, in_dim_doc: int, in_dim_word: int, out_dim_doc: int, out_dim_word: int)[source]

Bases: pytext.models.decoders.decoder_base.DecoderBase

IntentSlotModelDecoder implements the decoder layer for intent-slot models. Intent-slot models jointly predict intent and slots from an utterance. At their core, these models learn to jointly perform document classification and word tagging tasks.

IntentSlotModelDecoder accepts arguments for decoding both the document classification and word tagging tasks, namely in_dim_doc and in_dim_word.

Parameters:
  • config (Config) – Configuration object of type IntentSlotModelDecoder.Config.
  • in_dim_doc (int) – Dimension of input Tensor for projecting document representation.
  • in_dim_word (int) – Dimension of input Tensor for projecting word representation.
  • out_dim_doc (int) – Dimension of projected output Tensor for document classification.
  • out_dim_word (int) – Dimension of projected output Tensor for word tagging.
use_doc_probs_in_word

Whether to use intent probabilities for predicting slots.

Type:bool
doc_decoder

Document/intent decoder module.

Type:type
word_decoder

Word/slot decoder module.

Type:type
forward(x_d: torch.Tensor, x_w: torch.Tensor, dense: Optional[torch.Tensor] = None) → Tuple[torch.Tensor, torch.Tensor][source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_decoder() → List[torch.nn.modules.module.Module][source]

Returns the document and word decoder modules.
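
The decoder dimensions described above can be sketched as a small shape calculation: the document representation is projected to intent logits, and, assuming use_doc_probs_in_word concatenates the intent probabilities onto each word representation before slot projection, the word decoder's input dimension grows by out_dim_doc. The function below is a hypothetical illustration of that dimension flow, not PyText's implementation:

```python
def intent_slot_shapes(in_dim_doc, in_dim_word, out_dim_doc, out_dim_word,
                       use_doc_probs_in_word=False):
    """Return the (input, output) dimensions of each projection layer."""
    # Assumption: doc probabilities are concatenated onto word representations.
    word_in = in_dim_word + (out_dim_doc if use_doc_probs_in_word else 0)
    return {
        "doc_decoder": (in_dim_doc, out_dim_doc),
        "word_decoder": (word_in, out_dim_word),
    }


print(intent_slot_shapes(128, 64, 10, 20, use_doc_probs_in_word=True))
# {'doc_decoder': (128, 10), 'word_decoder': (74, 20)}
```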

pytext.models.decoders.mlp_decoder module

class pytext.models.decoders.mlp_decoder.MLPDecoder(config: pytext.models.decoders.mlp_decoder.MLPDecoder.Config, in_dim: int, out_dim: int = 0)[source]

Bases: pytext.models.decoders.decoder_base.DecoderBase

MLPDecoder implements a fully connected network and uses ReLU as the activation function. The module projects an input tensor to out_dim.

Parameters:
  • config (Config) – Configuration object of type MLPDecoder.Config.
  • in_dim (int) – Dimension of input Tensor passed to MLP.
  • out_dim (int) – Dimension of output Tensor produced by MLP. Defaults to 0.
mlp

Module that implements the MLP.

Type:type
out_dim

Dimension of the output of this module.

Type:type
hidden_dims

Dimensions of the outputs of hidden layers.

Type:List[int]
forward(*input) → torch.Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_decoder() → List[torch.nn.modules.module.Module][source]

Returns the MLP module that is used as a decoder.
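
An MLP of this shape is fully described by in_dim, hidden_dims, and out_dim: each consecutive pair of dimensions becomes one fully connected layer (with ReLU between layers, per the description above). The helper below is a hypothetical sketch of that layer-dimension bookkeeping, not PyText's builder:

```python
def mlp_layer_dims(in_dim, hidden_dims, out_dim):
    """Pair up consecutive dimensions: one (fan_in, fan_out) per linear layer."""
    dims = [in_dim] + list(hidden_dims) + [out_dim]
    return list(zip(dims, dims[1:]))


print(mlp_layer_dims(256, [128, 64], 7))
# [(256, 128), (128, 64), (64, 7)]
```

With an empty hidden_dims this degenerates to a single linear projection from in_dim to out_dim.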

pytext.models.decoders.mlp_decoder_query_response module

class pytext.models.decoders.mlp_decoder_query_response.MLPDecoderQueryResponse(config: pytext.models.decoders.mlp_decoder_query_response.MLPDecoderQueryResponse.Config, from_dim: int, to_dim: int)[source]

Bases: pytext.models.decoders.decoder_base.DecoderBase

Implements a ‘two-tower’ MLP: one tower for the query and one for the response. Used in search pairwise ranking: both pos_response and neg_response use the response MLP.

forward(*x) → List[torch.Tensor][source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_decoder() → List[torch.nn.modules.module.Module][source]

Returns the decoder module.

static get_mlp(from_dim: int, to_dim: int, hidden_dims: List[int])[source]
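
The pairwise-ranking setup described above can be sketched without real MLPs: the query embedding is compared against both response embeddings (positive and negative), which share the response tower, and training pushes the positive score above the negative one. The dot-product score below is a hypothetical stand-in for the learned similarity:

```python
def score(query_vec, response_vec):
    """Dot-product similarity as a stand-in for the learned ranking score."""
    return sum(q * r for q, r in zip(query_vec, response_vec))


query = [0.2, 0.8]
pos_response = [0.1, 0.9]  # both responses would pass through the same
neg_response = [0.9, 0.1]  # response-tower MLP in the real model
print(score(query, pos_response) > score(query, neg_response))  # True
```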

pytext.models.decoders.mlp_decoder_two_tower module

class pytext.models.decoders.mlp_decoder_two_tower.ExportType[source]

Bases: enum.Enum

An enumeration.

LEFT = 'LEFT'
NONE = 'NONE'
RIGHT = 'RIGHT'
class pytext.models.decoders.mlp_decoder_two_tower.MLPDecoderTwoTower(config: pytext.models.decoders.mlp_decoder_two_tower.MLPDecoderTwoTower.Config, right_dim: int, left_dim: int, to_dim: int, export_type=<ExportType.NONE: 'NONE'>)[source]

Bases: pytext.models.decoders.decoder_base.DecoderBase

Implements a ‘two-tower’ MLPDecoder: one tower for the left input and one for the right.

forward(*x) → torch.Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_decoder() → List[torch.nn.modules.module.Module][source]

Returns the decoder module.

static get_mlp(from_dim: int, to_dim: int, hidden_dims: List[int], layer_norm: bool, dropout: float, export_embedding: bool = False)[source]
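
The export_type parameter selects which tower the decoder exposes. The sketch below assumes (this is an assumption, not documented above) that ExportType.NONE runs both towers while LEFT/RIGHT export a single tower; the towers_to_run helper is hypothetical:

```python
from enum import Enum


class ExportType(Enum):
    LEFT = "LEFT"
    NONE = "NONE"
    RIGHT = "RIGHT"


def towers_to_run(export_type):
    """Hypothetical: NONE uses both towers; LEFT/RIGHT export one tower."""
    if export_type is ExportType.NONE:
        return ["left", "right"]
    return [export_type.value.lower()]


print(towers_to_run(ExportType.NONE))  # ['left', 'right']
print(towers_to_run(ExportType.LEFT))  # ['left']
```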

pytext.models.decoders.multilabel_decoder module

class pytext.models.decoders.multilabel_decoder.MultiLabelDecoder(config: pytext.models.decoders.multilabel_decoder.MultiLabelDecoder.Config, in_dim: int, output_dim: Dict[str, int], label_names: List[str])[source]

Bases: pytext.models.decoders.decoder_base.DecoderBase

Implements an ‘n-tower’ MLP: one tower for each of the multiple labels. Used in USM/EA: user satisfaction modeling, pTSR prediction, and error attribution are three label sets that all need predicting.

forward(*input)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_decoder() → List[torch.nn.modules.module.Module][source]

Returns the decoder module.

static get_mlp(in_dim: int, out_dim: int, hidden_dims: List[int])[source]
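
Given the signature above, each label set gets its own MLP tower: the shared in_dim feeds one projection per entry in output_dim, keyed by label name. The helper below is a hypothetical sketch of that per-label dimension layout (the label names are illustrative, taken from the USM/EA description):

```python
def build_label_towers(in_dim, output_dim, label_names):
    """One (in_dim, out_dim) projection per label set, keyed by label name."""
    return {name: (in_dim, output_dim[name]) for name in label_names}


towers = build_label_towers(
    64,
    {"satisfaction": 3, "ptsr": 2, "error_attribution": 5},
    ["satisfaction", "ptsr", "error_attribution"],
)
print(towers)
# {'satisfaction': (64, 3), 'ptsr': (64, 2), 'error_attribution': (64, 5)}
```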

Module contents

class pytext.models.decoders.DecoderBase(config: pytext.config.pytext_config.ConfigBase)[source]

Bases: pytext.models.module.Module

Base class for all decoder modules.

Parameters:config (ConfigBase) – Configuration object.
in_dim

Dimension of input Tensor passed to the decoder.

Type:int
out_dim

Dimension of output Tensor produced by the decoder.

Type:int
forward(*input)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_decoder()[source]

Returns the decoder module.

get_in_dim() → int[source]

Returns the dimension of the input Tensor that the decoder accepts.

get_out_dim() → int[source]

Returns the dimension of the output Tensor that the decoder emits.

class pytext.models.decoders.MLPDecoder(config: pytext.models.decoders.mlp_decoder.MLPDecoder.Config, in_dim: int, out_dim: int = 0)[source]

Bases: pytext.models.decoders.decoder_base.DecoderBase

MLPDecoder implements a fully connected network and uses ReLU as the activation function. The module projects an input tensor to out_dim.

Parameters:
  • config (Config) – Configuration object of type MLPDecoder.Config.
  • in_dim (int) – Dimension of input Tensor passed to MLP.
  • out_dim (int) – Dimension of output Tensor produced by MLP. Defaults to 0.
mlp

Module that implements the MLP.

Type:type
out_dim

Dimension of the output of this module.

Type:type
hidden_dims

Dimensions of the outputs of hidden layers.

Type:List[int]
forward(*input) → torch.Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_decoder() → List[torch.nn.modules.module.Module][source]

Returns the MLP module that is used as a decoder.

class pytext.models.decoders.IntentSlotModelDecoder(config: pytext.models.decoders.intent_slot_model_decoder.IntentSlotModelDecoder.Config, in_dim_doc: int, in_dim_word: int, out_dim_doc: int, out_dim_word: int)[source]

Bases: pytext.models.decoders.decoder_base.DecoderBase

IntentSlotModelDecoder implements the decoder layer for intent-slot models. Intent-slot models jointly predict intent and slots from an utterance. At their core, these models learn to jointly perform document classification and word tagging tasks.

IntentSlotModelDecoder accepts arguments for decoding both the document classification and word tagging tasks, namely in_dim_doc and in_dim_word.

Parameters:
  • config (Config) – Configuration object of type IntentSlotModelDecoder.Config.
  • in_dim_doc (int) – Dimension of input Tensor for projecting document representation.
  • in_dim_word (int) – Dimension of input Tensor for projecting word representation.
  • out_dim_doc (int) – Dimension of projected output Tensor for document classification.
  • out_dim_word (int) – Dimension of projected output Tensor for word tagging.
use_doc_probs_in_word

Whether to use intent probabilities for predicting slots.

Type:bool
doc_decoder

Document/intent decoder module.

Type:type
word_decoder

Word/slot decoder module.

Type:type
forward(x_d: torch.Tensor, x_w: torch.Tensor, dense: Optional[torch.Tensor] = None) → Tuple[torch.Tensor, torch.Tensor][source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_decoder() → List[torch.nn.modules.module.Module][source]

Returns the document and word decoder modules.