pytext.loss package
Submodules
pytext.loss.loss module
class pytext.loss.loss.AUCPRHingeLoss(config, weights=None, *args, **kwargs)
Bases: torch.nn.modules.module.Module, pytext.loss.loss.Loss

Area under the precision-recall curve loss. Reference: "Scalable Learning of Non-Decomposable Objectives", Section 5. TensorFlow implementation: https://github.com/tensorflow/models/tree/master/research/global_objectives
forward(logits, targets, reduce=True, size_average=True, weights=None)

Parameters:
- logits – Variable (N, C), where C = number of classes
- targets – Variable (N), where each value satisfies 0 <= targets[i] <= C-1
- weights – Coefficients for the loss. Must be a Tensor of shape [N] or [N, C], where N = batch size and C = number of classes.
- size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. If size_average is set to False, the losses are instead summed for each minibatch. Default: True
- reduce (bool, optional) – By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per input/target element instead and ignores size_average. Default: True
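A minimal usage sketch follows; default-constructing the component's Config is an assumption for illustration, not necessarily the canonical construction path in PyText:

    import torch
    from pytext.loss import AUCPRHingeLoss

    # Assumed setup: PyText components are built from a nested Config;
    # AUCPRHingeLoss.Config() with defaults is a hypothetical shortcut.
    loss_fn = AUCPRHingeLoss(AUCPRHingeLoss.Config())

    logits = torch.randn(8, 3)           # (N, C): batch of 8, 3 classes
    targets = torch.randint(0, 3, (8,))  # (N): class indices in [0, C-1]

    loss = loss_fn(logits, targets)                       # averaged over the minibatch
    per_element = loss_fn(logits, targets, reduce=False)  # unreduced, per-element loss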
class pytext.loss.loss.BinaryCrossEntropyLoss(config=None, *args, **kwargs)
Bases: pytext.loss.loss.Loss

class pytext.loss.loss.BinaryCrossEntropyWithLogitsLoss(config=None, *args, **kwargs)
Bases: pytext.loss.loss.Loss

class pytext.loss.loss.CosineEmbeddingLoss(config, *args, **kwargs)
Bases: pytext.loss.loss.Loss

class pytext.loss.loss.CrossEntropyLoss(config, ignore_index=-100, weight=None, *args, **kwargs)
Bases: pytext.loss.loss.Loss
class pytext.loss.loss.HingeLoss(config, ignore_index=-100, weight=None, *args, **kwargs)
Bases: pytext.loss.loss.Loss

class pytext.loss.loss.KLDivergenceBCELoss(config, ignore_index=-100, weight=None, *args, **kwargs)
Bases: pytext.loss.loss.Loss

class pytext.loss.loss.KLDivergenceCELoss(config, ignore_index=-100, weight=None, *args, **kwargs)
Bases: pytext.loss.loss.Loss

class pytext.loss.loss.LabelSmoothedCrossEntropyLoss(config, ignore_index=-100, weight=None, *args, **kwargs)
Bases: pytext.loss.loss.Loss
class pytext.loss.loss.Loss(config=None, *args, **kwargs)
Bases: pytext.config.component.Component

Base class for loss functions.
class pytext.loss.loss.MAELoss(config=None, *args, **kwargs)
Bases: pytext.loss.loss.Loss

Mean absolute error (L1) loss, for regression tasks.

class pytext.loss.loss.MSELoss(config=None, *args, **kwargs)
Bases: pytext.loss.loss.Loss

Mean squared error (L2) loss, for regression tasks.

class pytext.loss.loss.MultiLabelSoftMarginLoss(config=None, *args, **kwargs)
Bases: pytext.loss.loss.Loss

class pytext.loss.loss.NLLLoss(config, ignore_index=-100, weight=None, *args, **kwargs)
Bases: pytext.loss.loss.Loss

class pytext.loss.loss.PairwiseRankingLoss(config=None, *args, **kwargs)
Bases: pytext.loss.loss.Loss

Given embeddings for a query, a positive response, and a negative response, computes a pairwise ranking hinge loss.
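As a rough sketch of what such a loss computes (the cosine similarity and the margin value are assumptions; the class's actual configuration may differ):

    import torch
    import torch.nn.functional as F

    # Hypothetical pairwise ranking hinge: the positive response should
    # outscore the negative one by at least `margin` in similarity to the query.
    def pairwise_ranking_hinge(query_emb, pos_emb, neg_emb, margin=1.0):
        pos_sim = F.cosine_similarity(query_emb, pos_emb)  # (N,)
        neg_sim = F.cosine_similarity(query_emb, neg_emb)  # (N,)
        return torch.clamp(margin - pos_sim + neg_sim, min=0).mean()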
pytext.loss.regularized_loss module
class pytext.loss.regularized_loss.LabelSmoothingLoss(config, ignore_index=1)
Bases: pytext.loss.loss.Loss

Label loss with an optional regularizer for smoothing.
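A conceptual sketch of the label-loss-plus-regularizer pattern (the beta interpolation and the choice of uniform-KL regularizer are assumptions, not necessarily this class's exact formula):

    import torch.nn.functional as F

    # Hypothetical combination of a data term (NLL) with a smoothing
    # regularizer (the uniform-KL term from pytext.loss.regularizer),
    # mixed by an assumed weight beta.
    def smoothed_label_loss(logits, targets, beta=0.1):
        log_probs = F.log_softmax(logits, dim=-1)
        nll = F.nll_loss(log_probs, targets)
        uniform_kl = -log_probs.mean()  # -(1/n) * sum_i log P(Y_i|X), constant dropped
        return (1 - beta) * nll + beta * uniform_kl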
class pytext.loss.regularized_loss.NARSamplewiseSequenceLoss(config, ignore_index=1)
Bases: pytext.loss.regularized_loss.NARSequenceLoss

Non-autoregressive sequence loss with sample-wise logging.

class pytext.loss.regularized_loss.NARSequenceLoss(config, ignore_index=1)
Bases: pytext.loss.loss.Loss

Joint loss over labels and length of sequences for non-autoregressive modeling.

class pytext.loss.regularized_loss.SamplewiseLabelSmoothingLoss(config, ignore_index=-1)
Bases: pytext.loss.regularized_loss.LabelSmoothingLoss

Label smoothing loss with sample-wise logging.
pytext.loss.regularizer module
class pytext.loss.regularizer.AdaptiveRegularizer(config, ignore_index=1)
Bases: pytext.loss.regularizer.Regularizer

Adaptive variant of UniformRegularizer which learns the mix-in noise distribution.

Reference: "Learning Better Structured Representations Using Low-Rank Adaptive Label Smoothing" (Ghoshal et al., 2021; https://openreview.net/pdf?id=5NsEIflpbSv)
class pytext.loss.regularizer.EntropyRegularizer(config, ignore_index=1)
Bases: pytext.loss.regularizer.Regularizer

Entropy of the predicted distribution, defined as:

    H[P(Y|X)] = - sum_i P(Y_i|X) * log P(Y_i|X)
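A minimal sketch of this quantity computed from logits (the per-example reduction is an assumption):

    import torch.nn.functional as F

    # H[P(Y|X)] per example: negative sum of P * log P over classes.
    def prediction_entropy(logits):
        log_probs = F.log_softmax(logits, dim=-1)
        return -(log_probs.exp() * log_probs).sum(dim=-1)  # shape (N,)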
class pytext.loss.regularizer.Regularizer(config, ignore_index=1)
Bases: pytext.loss.loss.Loss

Generic regularization function to be added to a surrogate loss (e.g., cross-entropy).
class pytext.loss.regularizer.UniformRegularizer(config, ignore_index=1)
Bases: pytext.loss.regularizer.Regularizer

KL divergence between the uniform and the predicted distribution, defined as:

    KL(U || P(Y|X)) = - sum_i U_i * log(P(Y_i|X) / U_i)
                    = - sum_i U_i * log P(Y_i|X) - H[U]
                    = - (1/n) * sum_i log P(Y_i|X) - H[U]

H[U] does not depend on X, so it is omitted during optimization.
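A minimal sketch of the quantity above with the constant H[U] dropped, as noted:

    import torch.nn.functional as F

    # KL(U || P(Y|X)) up to the constant H[U]: the mean negative
    # log-probability over the n classes, i.e. -(1/n) * sum_i log P(Y_i|X).
    def uniform_kl(logits):
        log_probs = F.log_softmax(logits, dim=-1)
        return -log_probs.mean(dim=-1)  # shape (N,)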
pytext.loss.structured_loss module
class pytext.loss.structured_loss.CostFunctionType
Bases: enum.Enum

An enumeration.

HAMMING = 'hamming'

class pytext.loss.structured_loss.StructuredLoss(config, ignore_index=1)
Bases: pytext.loss.loss.Loss

Generic loss function applied to structured outputs.
class pytext.loss.structured_loss.StructuredMarginLoss(config, ignore_index=1, *args, **kwargs)
Bases: pytext.loss.structured_loss.StructuredLoss

Margin-based loss which requires a gold structure Y to score at least cost(Y, Y') above a hypothesis structure Y'. The cost function is configurable, but should reflect the underlying semantics of the task (e.g., BLEU in machine translation).
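For background, the standard cost-augmented hinge formulation of such a loss (stated as context, not necessarily this class's exact objective) is:

    L(Y) = max(0, max_{Y'} [score(Y') + cost(Y, Y')] - score(Y))

which is zero exactly when the gold structure Y outscores every hypothesis Y' by at least cost(Y, Y').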
pytext.loss.structured_loss.get_cost_fn(cost_fn_type: pytext.loss.structured_loss.CostFunctionType)

Retrieves a cost function corresponding to cost_fn_type.
pytext.loss.structured_loss.hamming_distance(logits, targets, cost_scale=1.0)

Computes the Hamming distance (https://en.wikipedia.org/wiki/Hamming_distance), defined as the number of positions at which two sequences of equal length differ. We apply the Hamming distance locally, incrementing non-gold token scores by cost_scale.

Example: given targets = [0, 1] and cost_scale = 1.0:

    logits (before) = [[-1.0, 1.0, 2.0], [-2.0, -1.0, 1.0]]
    logits (after)  = [[-1.0, 2.0, 3.0], [-1.0, -1.0, 2.0]]
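A sketch reproducing the example above (not necessarily the library's exact implementation): add cost_scale to every non-gold token score and leave gold positions untouched.

    import torch

    def hamming_augment(logits, targets, cost_scale=1.0):
        costs = torch.full_like(logits, cost_scale)   # cost at every position
        costs.scatter_(1, targets.unsqueeze(1), 0.0)  # zero cost for gold tokens
        return logits + costs

    logits = torch.tensor([[-1.0, 1.0, 2.0], [-2.0, -1.0, 1.0]])
    targets = torch.tensor([0, 1])
    print(hamming_augment(logits, targets))
    # tensor([[-1., 2., 3.], [-1., -1., 2.]])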
Module contents
class pytext.loss.AUCPRHingeLoss(config, weights=None, *args, **kwargs)
Bases: torch.nn.modules.module.Module, pytext.loss.loss.Loss

Area under the precision-recall curve loss. Reference: "Scalable Learning of Non-Decomposable Objectives", Section 5. TensorFlow implementation: https://github.com/tensorflow/models/tree/master/research/global_objectives
forward(logits, targets, reduce=True, size_average=True, weights=None)

Parameters:
- logits – Variable (N, C), where C = number of classes
- targets – Variable (N), where each value satisfies 0 <= targets[i] <= C-1
- weights – Coefficients for the loss. Must be a Tensor of shape [N] or [N, C], where N = batch size and C = number of classes.
- size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. If size_average is set to False, the losses are instead summed for each minibatch. Default: True
- reduce (bool, optional) – By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per input/target element instead and ignores size_average. Default: True
class pytext.loss.Loss(config=None, *args, **kwargs)
Bases: pytext.config.component.Component

Base class for loss functions.
class pytext.loss.CrossEntropyLoss(config, ignore_index=-100, weight=None, *args, **kwargs)
Bases: pytext.loss.loss.Loss

class pytext.loss.CosineEmbeddingLoss(config, *args, **kwargs)
Bases: pytext.loss.loss.Loss

class pytext.loss.BinaryCrossEntropyLoss(config=None, *args, **kwargs)
Bases: pytext.loss.loss.Loss

class pytext.loss.BinaryCrossEntropyWithLogitsLoss(config=None, *args, **kwargs)
Bases: pytext.loss.loss.Loss
class pytext.loss.HingeLoss(config, ignore_index=-100, weight=None, *args, **kwargs)
Bases: pytext.loss.loss.Loss

class pytext.loss.MultiLabelSoftMarginLoss(config=None, *args, **kwargs)
Bases: pytext.loss.loss.Loss

class pytext.loss.KLDivergenceBCELoss(config, ignore_index=-100, weight=None, *args, **kwargs)
Bases: pytext.loss.loss.Loss

class pytext.loss.KLDivergenceCELoss(config, ignore_index=-100, weight=None, *args, **kwargs)
Bases: pytext.loss.loss.Loss
class pytext.loss.MAELoss(config=None, *args, **kwargs)
Bases: pytext.loss.loss.Loss

Mean absolute error (L1) loss, for regression tasks.

class pytext.loss.MSELoss(config=None, *args, **kwargs)
Bases: pytext.loss.loss.Loss

Mean squared error (L2) loss, for regression tasks.

class pytext.loss.NLLLoss(config, ignore_index=-100, weight=None, *args, **kwargs)
Bases: pytext.loss.loss.Loss

class pytext.loss.PairwiseRankingLoss(config=None, *args, **kwargs)
Bases: pytext.loss.loss.Loss

Given embeddings for a query, a positive response, and a negative response, computes a pairwise ranking hinge loss.
class pytext.loss.LabelSmoothedCrossEntropyLoss(config, ignore_index=-100, weight=None, *args, **kwargs)
Bases: pytext.loss.loss.Loss

class pytext.loss.SourceType
Bases: enum.Enum

An enumeration.

LOGITS = 'logits'
LOG_PROBS = 'log_probs'
PROBS = 'probs'
class pytext.loss.StructuredLoss(config, ignore_index=1)
Bases: pytext.loss.loss.Loss

Generic loss function applied to structured outputs.

class pytext.loss.StructuredMarginLoss(config, ignore_index=1, *args, **kwargs)
Bases: pytext.loss.structured_loss.StructuredLoss

Margin-based loss which requires a gold structure Y to score at least cost(Y, Y') above a hypothesis structure Y'. The cost function is configurable, but should reflect the underlying semantics of the task (e.g., BLEU in machine translation).

class pytext.loss.LabelSmoothingLoss(config, ignore_index=1)
Bases: pytext.loss.loss.Loss

Label loss with an optional regularizer for smoothing.
class pytext.loss.SamplewiseLabelSmoothingLoss(config, ignore_index=-1)
Bases: pytext.loss.regularized_loss.LabelSmoothingLoss

Label smoothing loss with sample-wise logging.

class pytext.loss.NARSequenceLoss(config, ignore_index=1)
Bases: pytext.loss.loss.Loss

Joint loss over labels and length of sequences for non-autoregressive modeling.

class pytext.loss.NARSamplewiseSequenceLoss(config, ignore_index=1)
Bases: pytext.loss.regularized_loss.NARSequenceLoss

Non-autoregressive sequence loss with sample-wise logging.
class pytext.loss.UniformRegularizer(config, ignore_index=1)
Bases: pytext.loss.regularizer.Regularizer

KL divergence between the uniform and the predicted distribution, defined as:

    KL(U || P(Y|X)) = - sum_i U_i * log(P(Y_i|X) / U_i)
                    = - sum_i U_i * log P(Y_i|X) - H[U]
                    = - (1/n) * sum_i log P(Y_i|X) - H[U]

H[U] does not depend on X, so it is omitted during optimization.
class pytext.loss.EntropyRegularizer(config, ignore_index=1)
Bases: pytext.loss.regularizer.Regularizer

Entropy of the predicted distribution, defined as:

    H[P(Y|X)] = - sum_i P(Y_i|X) * log P(Y_i|X)
class pytext.loss.AdaptiveRegularizer(config, ignore_index=1)
Bases: pytext.loss.regularizer.Regularizer

Adaptive variant of UniformRegularizer which learns the mix-in noise distribution.

Reference: "Learning Better Structured Representations Using Low-Rank Adaptive Label Smoothing" (Ghoshal et al., 2021; https://openreview.net/pdf?id=5NsEIflpbSv)