pytext.torchscript.seq2seq package

Submodules

pytext.torchscript.seq2seq.beam_decode module
class pytext.torchscript.seq2seq.beam_decode.BeamDecode(eos_token_id, length_penalty, nbest, beam_size, stop_at_eos)
    Bases: torch.nn.modules.module.Module

    Decodes the output of beam search to get the top hypotheses.
    forward(beam_tokens: torch.Tensor, beam_scores: torch.Tensor, token_weights: torch.Tensor, beam_prev_indices: torch.Tensor, num_steps: int) → List[Tuple[torch.Tensor, float, List[float], torch.Tensor, torch.Tensor]]
        Defines the computation performed at every call.
        Should be overridden by all subclasses.
        Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
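A minimal usage sketch for BeamDecode; the constructor values and tensor shapes below are illustrative assumptions, not documented requirements beyond the signature above.

    import torch
    from pytext.torchscript.seq2seq.beam_decode import BeamDecode

    num_steps, beam_size = 5, 2
    # Hypothetical beam-search outputs: one row per step (plus the start step),
    # one column per beam hypothesis.
    beam_tokens = torch.randint(0, 100, (num_steps + 1, beam_size))
    beam_scores = torch.rand(num_steps + 1, beam_size)
    token_weights = torch.rand(num_steps + 1, beam_size)
    beam_prev_indices = torch.zeros(num_steps + 1, beam_size, dtype=torch.long)

    decode = BeamDecode(eos_token_id=2, length_penalty=0.25, nbest=2,
                        beam_size=beam_size, stop_at_eos=True)
    # Call the module itself rather than .forward so registered hooks run.
    hypos = decode(beam_tokens, beam_scores, token_weights,
                   beam_prev_indices, num_steps)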
pytext.torchscript.seq2seq.beam_search module

class pytext.torchscript.seq2seq.beam_search.BeamSearch(model_list, tgt_dict_eos, beam_size: int = 2, quantize: bool = False, record_attention: bool = False)
    Bases: torch.nn.modules.module.Module
    forward(src_tokens: torch.Tensor, src_lengths: torch.Tensor, num_steps: int, dict_feat: Optional[Tuple[torch.Tensor, torch.Tensor, torch.Tensor]] = None, contextual_token_embedding: Optional[torch.Tensor] = None)
        Defines the computation performed at every call.
        Should be overridden by all subclasses.
        Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
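A minimal sketch of driving BeamSearch directly; the model list, EOS index, and tensor shapes here are assumptions for illustration only.

    import torch
    from pytext.torchscript.seq2seq.beam_search import BeamSearch

    # `trained_models` is assumed to be a list of compatible seq2seq models
    # produced by a PyText training run.
    search = BeamSearch(trained_models, tgt_dict_eos=2, beam_size=2)

    src_tokens = torch.tensor([[4, 15, 9, 2]])  # assumed (batch, src_len)
    src_lengths = torch.tensor([4])
    beam_outputs = search(src_tokens, src_lengths, num_steps=10)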
pytext.torchscript.seq2seq.decoder module

class pytext.torchscript.seq2seq.decoder.DecoderBatchedStepEnsemble(models, beam_size, record_attention=False)
    Bases: torch.nn.modules.module.Module

    This module exposes a common interface so that it can be called both after the encoder and after each decoder step.
    beam_search_aggregate_topk(log_probs_per_model: List[torch.Tensor], attn_weights_per_model: List[torch.Tensor], prev_scores: torch.Tensor, beam_size: int, record_attention: bool)
    forward(prev_tokens: torch.Tensor, prev_scores: torch.Tensor, timestep: int, decoder_ips: List[Dict[str, torch.Tensor]]) → Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, List[Dict[str, torch.Tensor]]]
        Decoder step inputs correspond one-to-one to encoder outputs. However, after the first step, the encoder outputs (i.e., the first len(self.models) elements of the inputs) must be tiled k (beam size) times on the batch dimension (axis 1).
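A short sketch of the tiling requirement described above; the hidden size and sequence length are made-up values.

    import torch

    # Hypothetical encoder output for one model: (src_len, batch, hidden).
    encoder_out = torch.rand(7, 1, 512)
    beam_size = 2
    # After the first decoder step, tile the encoder output beam_size times
    # along the batch dimension (axis 1) so that every beam hypothesis sees
    # its own copy of the source representation.
    tiled = encoder_out.repeat(1, beam_size, 1)  # -> (7, beam_size, 512)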
pytext.torchscript.seq2seq.encoder module

class pytext.torchscript.seq2seq.encoder.EncoderEnsemble(models, beam_size)
    Bases: torch.nn.modules.module.Module

    This class calls the encoders of all the models in the ensemble and processes their outputs to prepare the input for each decoder step.
    forward(src_tokens: torch.Tensor, src_lengths: torch.Tensor, dict_feat: Optional[Tuple[torch.Tensor, torch.Tensor, torch.Tensor]] = None, contextual_token_embedding: Optional[torch.Tensor] = None) → List[Dict[str, torch.Tensor]]
        Defines the computation performed at every call.
        Should be overridden by all subclasses.
        Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
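A sketch of running EncoderEnsemble to produce the per-model decoder step inputs; the model list and shapes are assumptions.

    import torch
    from pytext.torchscript.seq2seq.encoder import EncoderEnsemble

    # `trained_models` is assumed to be the same list of seq2seq models
    # that would be passed to DecoderBatchedStepEnsemble.
    encoder = EncoderEnsemble(trained_models, beam_size=2)
    src_tokens = torch.tensor([[4, 15, 9, 2]])  # assumed (batch, src_len)
    src_lengths = torch.tensor([4])
    # Returns one Dict[str, Tensor] of prepared decoder step inputs
    # per ensemble member.
    decoder_ips = encoder(src_tokens, src_lengths)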
pytext.torchscript.seq2seq.export_model module

class pytext.torchscript.seq2seq.export_model.Seq2SeqJIT(src_dict, tgt_dict, sequence_generator, filter_eos_bos, copy_unk_token=False, dictfeat_dict=None)
    Bases: torch.nn.modules.module.Module
    forward(src_tokens: List[str], dict_feat: Optional[Tuple[List[str], List[float], List[int]]] = None, contextual_token_embedding: Optional[List[float]] = None) → List[Tuple[List[str], float, List[float]]]
        Defines the computation performed at every call.
        Should be overridden by all subclasses.
        Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
    prepare_generator_inputs(word_ids: List[int], dict_feat: Optional[Tuple[List[str], List[float], List[int]]] = None, contextual_token_embedding: Optional[List[float]] = None) → Tuple[torch.Tensor, Optional[Tuple[torch.Tensor, torch.Tensor, torch.Tensor]], Optional[torch.Tensor], torch.Tensor]
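A minimal end-to-end sketch for Seq2SeqJIT, whose forward takes raw string tokens rather than tensors; src_dict, tgt_dict, and generator are placeholders assumed to come from a PyText training and export run.

    from pytext.torchscript.seq2seq.export_model import Seq2SeqJIT

    model = Seq2SeqJIT(src_dict, tgt_dict, generator, filter_eos_bos=True)
    # Each returned tuple is (hypothesis tokens, score, per-token scores).
    for tokens, score, token_scores in model(["order", "a", "pizza"]):
        print(tokens, score)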
pytext.torchscript.seq2seq.scripted_seq2seq_generator module

class pytext.torchscript.seq2seq.scripted_seq2seq_generator.ScriptedSequenceGenerator(models, trg_dict_eos, config)
    Bases: pytext.models.module.Module
    forward(src_tokens: torch.Tensor, dict_feat: Optional[Tuple[torch.Tensor, torch.Tensor, torch.Tensor]], contextual_token_embedding: Optional[torch.Tensor], src_lengths: torch.Tensor) → List[Tuple[torch.Tensor, float, List[float], torch.Tensor, torch.Tensor]]
        Defines the computation performed at every call.
        Should be overridden by all subclasses.
        Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
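A usage sketch for ScriptedSequenceGenerator; the model list, EOS index, config object, and tensor shapes are all illustrative assumptions.

    import torch
    from pytext.torchscript.seq2seq.scripted_seq2seq_generator import ScriptedSequenceGenerator

    # `trained_models` and `gen_config` are assumed to come from a PyText
    # training run and the generator's Config, respectively.
    generator = ScriptedSequenceGenerator(trained_models, trg_dict_eos=2,
                                          config=gen_config)
    src_tokens = torch.tensor([[4, 15, 9, 2]])  # assumed (batch, src_len)
    src_lengths = torch.tensor([4])
    # dict_feat and contextual_token_embedding are Optional; pass None here.
    hypos = generator(src_tokens, None, None, src_lengths)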