WordSeqEmbedding.Config
Component: WordSeqEmbedding
class WordSeqEmbedding.Config
Bases: EmbeddingBase.Config

All Attributes (including base classes)
- load_path: Optional[str] = None
- save_path: Optional[str] = None
- freeze: bool = False
- shared_module_key: Optional[str] = None
- word_embed_dim: int = 100
- embedding_init_strategy: EmbedInitStrategy = <EmbedInitStrategy.RANDOM: 'random'>
- embedding_init_range: Optional[list[float]] = None
- embeddding_init_std: Optional[float] = 0.02
- padding_idx: Optional[int] = None
- lstm: BiLSTM.Config = BiLSTM.Config()
- pretrained_embeddings_path: str = ''
- vocab_size: int = 0
  If pretrained_embeddings_path and vocab_from_pretrained_embeddings are set, only the first vocab_size tokens in the file will be added to the vocab.
- lowercase_tokens: bool = True
- skip_header: bool = True
- delimiter: str = ' '
Default JSON
{
    "load_path": null,
    "save_path": null,
    "freeze": false,
    "shared_module_key": null,
    "word_embed_dim": 100,
    "embedding_init_strategy": "random",
    "embedding_init_range": null,
    "embeddding_init_std": 0.02,
    "padding_idx": null,
    "lstm": {
        "load_path": null,
        "save_path": null,
        "freeze": false,
        "shared_module_key": null,
        "dropout": 0.4,
        "lstm_dim": 32,
        "num_layers": 1,
        "bidirectional": true,
        "pack_sequence": true,
        "disable_sort_in_jit": false
    },
    "pretrained_embeddings_path": "",
    "vocab_size": 0,
    "lowercase_tokens": true,
    "skip_header": true,
    "delimiter": " "
}
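In practice you typically start from the default JSON above and override only the fields you need, including nested fields such as the BiLSTM sub-config. A minimal sketch of that pattern using only the standard library (the `merge` helper below is illustrative, not part of the PyText API):

```python
import json

# Default values copied from the "Default JSON" block above (abbreviated
# to the fields exercised here; the full block works the same way).
default_config = json.loads("""
{
    "freeze": false,
    "word_embed_dim": 100,
    "embedding_init_strategy": "random",
    "lstm": {"dropout": 0.4, "lstm_dim": 32, "num_layers": 1},
    "lowercase_tokens": true
}
""")

# Override only what differs from the defaults; nested dicts override
# nested sub-configs field by field.
overrides = {"word_embed_dim": 300, "lstm": {"lstm_dim": 64}}

def merge(base, override):
    """Recursively merge override values into a copy of base."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge(merged[key], value)
        else:
            merged[key] = value
    return merged

config = merge(default_config, overrides)
```

After the merge, `config["word_embed_dim"]` is 300 and `config["lstm"]["lstm_dim"]` is 64, while untouched fields such as `config["lstm"]["dropout"]` keep their defaults.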