RoBERTaEncoder.Config
Component: RoBERTaEncoder
class RoBERTaEncoder.Config
Bases: RoBERTaEncoderBase.Config
All Attributes (including base classes)
- load_path: Optional[str] = None
- save_path: Optional[str] = None
- freeze: bool = False
- shared_module_key: Optional[str] = None
- output_dropout: float = 0.4
- embedding_dim: int = 768
- pooling: PoolingMethod = <PoolingMethod.CLS_TOKEN: 'cls_token'>
- export: bool = False
- vocab_size: int = 50265
- num_encoder_layers: int = 12
- num_attention_heads: int = 12
- model_path: str = 'manifold://pytext_training/tree/static/models/roberta_base_torch.pt'
- is_finetuned: bool = False
Default JSON
{
"load_path": null,
"save_path": null,
"freeze": false,
"shared_module_key": null,
"output_dropout": 0.4,
"embedding_dim": 768,
"pooling": "cls_token",
"export": false,
"vocab_size": 50265,
"num_encoder_layers": 12,
"num_attention_heads": 12,
"model_path": "manifold://pytext_training/tree/static/models/roberta_base_torch.pt",
"is_finetuned": false
}
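The Default JSON above maps one-to-one onto the attribute list. As an illustrative sketch only (not the PyText API itself), a plain dataclass mirroring these fields round-trips that JSON; the `RobertaEncoderConfig` name is hypothetical, and `pooling` is simplified to its string value rather than PyText's `PoolingMethod` enum:

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

# Hypothetical mirror of RoBERTaEncoder.Config; defaults copied from the docs above.
@dataclass
class RobertaEncoderConfig:
    load_path: Optional[str] = None
    save_path: Optional[str] = None
    freeze: bool = False
    shared_module_key: Optional[str] = None
    output_dropout: float = 0.4
    embedding_dim: int = 768
    pooling: str = "cls_token"  # PoolingMethod.CLS_TOKEN in PyText
    export: bool = False
    vocab_size: int = 50265
    num_encoder_layers: int = 12
    num_attention_heads: int = 12
    model_path: str = (
        "manifold://pytext_training/tree/static/models/roberta_base_torch.pt"
    )
    is_finetuned: bool = False

# Round-trip check: the defaults serialize to the "Default JSON" shown above,
# and that JSON reconstructs an equal config.
default_json = json.dumps(asdict(RobertaEncoderConfig()))
cfg = RobertaEncoderConfig(**json.loads(default_json))
print(cfg.embedding_dim)  # 768
```

Overriding a field in the JSON (e.g. `"freeze": true` to keep encoder weights fixed during fine-tuning) flows through the same constructor unchanged.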