TaskTrainer.Config

Component: TaskTrainer

class TaskTrainer.Config[source]

Bases: Trainer.Config

Configuration for TaskTrainer. Inherits all attributes from Trainer.Config.

All Attributes (including base classes)

epochs: int = 10
early_stop_after: int = 0
max_clip_norm: Optional[float] = None
report_train_metrics: bool = True
target_time_limit_seconds: Optional[int] = None
do_eval: bool = True
load_best_model_after_train: bool = True
num_samples_to_log_progress: int = 1000
num_accumulated_batches: int = 1
num_batches_per_epoch: Optional[int] = None
optimizer: Optimizer.Config = Adam.Config()
scheduler: Optional[Scheduler.Config] = None
sparsifier: Optional[Sparsifier.Config] = None
fp16_args: FP16Optimizer.Config = FP16OptimizerFairseq.Config()
privacy_engine: Optional[PrivacyEngine.Config] = None
use_tensorboard: bool = False
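The attribute list above can be approximated as plain Python dataclasses — a minimal sketch for illustration only (the library's actual Config classes are not stdlib dataclasses, and the nested `AdamConfig`/`FP16OptimizerFairseqConfig` stand-ins here are assumptions modeled on the Default JSON below):

```python
from dataclasses import dataclass, field
from typing import Optional

# Stand-in for Adam.Config, with defaults taken from the Default JSON.
@dataclass
class AdamConfig:
    lr: float = 0.001
    weight_decay: float = 1e-05
    eps: float = 1e-08

# Stand-in for FP16OptimizerFairseq.Config, defaults from the Default JSON.
@dataclass
class FP16OptimizerFairseqConfig:
    init_loss_scale: int = 128
    scale_window: Optional[int] = None
    scale_tolerance: float = 0.0
    threshold_loss_scale: Optional[float] = None
    min_loss_scale: float = 0.0001

# Sketch of TaskTrainer.Config mirroring the attributes above.
@dataclass
class TaskTrainerConfig:
    epochs: int = 10
    early_stop_after: int = 0
    max_clip_norm: Optional[float] = None
    report_train_metrics: bool = True
    target_time_limit_seconds: Optional[int] = None
    do_eval: bool = True
    load_best_model_after_train: bool = True
    num_samples_to_log_progress: int = 1000
    num_accumulated_batches: int = 1
    num_batches_per_epoch: Optional[int] = None
    optimizer: AdamConfig = field(default_factory=AdamConfig)
    scheduler: Optional[object] = None
    sparsifier: Optional[object] = None
    fp16_args: FP16OptimizerFairseqConfig = field(
        default_factory=FP16OptimizerFairseqConfig
    )
    privacy_engine: Optional[object] = None
    use_tensorboard: bool = False

# Override a single field; everything else keeps its default.
cfg = TaskTrainerConfig(epochs=20)
```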

Default JSON

{
    "epochs": 10,
    "early_stop_after": 0,
    "max_clip_norm": null,
    "report_train_metrics": true,
    "target_time_limit_seconds": null,
    "do_eval": true,
    "load_best_model_after_train": true,
    "num_samples_to_log_progress": 1000,
    "num_accumulated_batches": 1,
    "num_batches_per_epoch": null,
    "optimizer": {
        "Adam": {
            "lr": 0.001,
            "weight_decay": 1e-05,
            "eps": 1e-08
        }
    },
    "scheduler": null,
    "sparsifier": null,
    "fp16_args": {
        "FP16OptimizerFairseq": {
            "init_loss_scale": 128,
            "scale_window": null,
            "scale_tolerance": 0.0,
            "threshold_loss_scale": null,
            "min_loss_scale": 0.0001
        }
    },
    "privacy_engine": null,
    "use_tensorboard": false
}
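Because the defaults are plain JSON, a partial override can be merged over them with nothing but the stdlib — a minimal sketch (the `deep_merge` helper is hypothetical, not part of the library, and the defaults are abbreviated to the fields touched here):

```python
import json

# Subset of the Default JSON above, limited to the fields we override or check.
defaults = json.loads("""
{
    "epochs": 10,
    "num_accumulated_batches": 1,
    "optimizer": {"Adam": {"lr": 0.001, "weight_decay": 1e-05, "eps": 1e-08}}
}
""")

def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge `override` into a copy of `base`.

    Hypothetical helper for illustration; nested dicts are merged key by
    key, scalars in `override` win.
    """
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Override only the epoch count and the Adam learning rate;
# untouched defaults (e.g. eps) survive the merge.
config = deep_merge(defaults, {"epochs": 20, "optimizer": {"Adam": {"lr": 1e-4}}})
```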