pytext.loss package
Submodules
pytext.loss.loss module

class pytext.loss.loss.AUCPRHingeLoss(config, weights=None, *args, **kwargs)[source]
Bases: torch.nn.modules.module.Module, pytext.loss.loss.Loss

Area under the precision-recall curve loss. Reference: "Scalable Learning of Non-Decomposable Objectives", Section 5. TensorFlow implementation: https://github.com/tensorflow/models/tree/master/research/global_objectives

forward(logits, targets, reduce=True, size_average=True, weights=None)[source]

Parameters:
- logits – Variable \((N, C)\), where C = number of classes
- targets – Variable \((N)\), where each value satisfies 0 <= targets[i] <= C-1
- weights – Coefficients for the loss. Must be a Tensor of shape [N] or [N, C], where N = batch size and C = number of classes.
- size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. If size_average is set to False, the losses are instead summed for each minibatch. Default: True
- reduce (bool, optional) – By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per input/target element instead and ignores size_average. Default: True
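The reduce/size_average semantics described above are shared by several PyTorch-style losses. A minimal pure-Python sketch of that contract (illustrative only, not pytext's implementation):

```python
def reduce_losses(per_example, reduce=True, size_average=True):
    """Apply the documented reduce/size_average contract to a list of
    per-example loss values."""
    if not reduce:
        # Return one loss per input/target element; size_average is ignored.
        return per_example
    total = sum(per_example)
    # Average over the minibatch by default, otherwise sum.
    return total / len(per_example) if size_average else total
```

For example, `reduce_losses([1.0, 2.0, 3.0])` averages to 2.0, while passing `size_average=False` yields the sum 6.0.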

class pytext.loss.loss.BinaryCrossEntropyLoss(config=None, *args, **kwargs)[source]
Bases: pytext.loss.loss.Loss
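BinaryCrossEntropyLoss carries no docstring here; the standard binary cross entropy it is named after can be written out in plain Python (an illustration of the textbook formula, not pytext's actual implementation):

```python
import math

def binary_cross_entropy(probs, targets):
    """Mean binary cross entropy: -[y*log(p) + (1-y)*log(1-p)],
    averaged over the batch. probs are probabilities in (0, 1)."""
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for p, y in zip(probs, targets)) / len(probs)
```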

class pytext.loss.loss.CosineEmbeddingLoss(config, *args, **kwargs)[source]
Bases: pytext.loss.loss.Loss
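CosineEmbeddingLoss is undocumented here; it presumably follows the standard cosine embedding formulation (as in torch.nn.CosineEmbeddingLoss, which is an assumption on my part). A pure-Python sketch of that formula for a single pair:

```python
def cosine_embedding_loss(x1, x2, y, margin=0.0):
    """Standard cosine embedding loss for one pair: for y = 1 the loss is
    1 - cos(x1, x2); for y = -1 it is max(0, cos(x1, x2) - margin)."""
    dot = sum(a * b for a, b in zip(x1, x2))
    norm1 = sum(a * a for a in x1) ** 0.5
    norm2 = sum(b * b for b in x2) ** 0.5
    cos = dot / (norm1 * norm2)
    return 1.0 - cos if y == 1 else max(0.0, cos - margin)
```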

class pytext.loss.loss.CrossEntropyLoss(config, ignore_index=-100, weight=None, *args, **kwargs)[source]
Bases: pytext.loss.loss.Loss
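The signature above exposes ignore_index=-100 and an optional class weight, mirroring PyTorch's cross entropy. An illustrative pure-Python version showing how ignore_index skips examples (a sketch of the standard formula, not pytext's implementation):

```python
import math

def cross_entropy(logits, targets, ignore_index=-100):
    """Mean of -log softmax(logits)[target], skipping examples whose
    target equals ignore_index."""
    losses = []
    for row, t in zip(logits, targets):
        if t == ignore_index:
            continue  # ignored targets contribute nothing to the mean
        log_z = math.log(sum(math.exp(x) for x in row))  # log-sum-exp
        losses.append(log_z - row[t])
    return sum(losses) / len(losses)
```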

class pytext.loss.loss.KLDivergenceBCELoss(config, ignore_index=-100, weight=None, *args, **kwargs)[source]
Bases: pytext.loss.loss.Loss

class pytext.loss.loss.KLDivergenceCELoss(config, ignore_index=-100, weight=None, *args, **kwargs)[source]
Bases: pytext.loss.loss.Loss
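Neither KL-divergence loss is documented here; the names suggest a KL-divergence term against soft (e.g. teacher-provided) target distributions. The core quantity, sketched in plain Python as an assumption about intent rather than pytext's actual computation:

```python
import math

def kl_divergence(p, q):
    """KL(p || q) = sum_i p_i * log(p_i / q_i), for probability
    distributions p (target) and q (model); zero p_i terms contribute 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```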

class pytext.loss.loss.LabelSmoothedCrossEntropyLoss(config, ignore_index=-100, weight=None, *args, **kwargs)[source]
Bases: pytext.loss.loss.Loss
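LabelSmoothedCrossEntropyLoss is undocumented here; label smoothing conventionally mixes the one-hot target with a uniform distribution before taking the expected negative log-likelihood. A pure-Python sketch of that convention (the smoothing parameter name and default are assumptions, not pytext's config fields):

```python
import math

def label_smoothed_nll(logits, target, epsilon=0.1):
    """Cross entropy against a smoothed target: (1 - epsilon) on the true
    class, epsilon spread uniformly over all classes (hypothetical epsilon)."""
    c = len(logits)
    log_z = math.log(sum(math.exp(x) for x in logits))
    log_probs = [x - log_z for x in logits]
    smooth = [epsilon / c] * c
    smooth[target] += 1.0 - epsilon
    return -sum(q * lp for q, lp in zip(smooth, log_probs))
```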

class pytext.loss.loss.Loss(config=None, *args, **kwargs)[source]
Bases: pytext.config.component.Component

Base class for loss functions.

class pytext.loss.loss.MAELoss(config=None, *args, **kwargs)[source]
Bases: pytext.loss.loss.Loss

Mean absolute error or L1 loss, for regression tasks.
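The L1 computation behind MAELoss can be written out directly (illustrative, not pytext's implementation):

```python
def mean_absolute_error(preds, targets):
    """Mean of |prediction - target| over the batch (L1 loss)."""
    return sum(abs(p - t) for p, t in zip(preds, targets)) / len(preds)
```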

class pytext.loss.loss.MSELoss(config=None, *args, **kwargs)[source]
Bases: pytext.loss.loss.Loss

Mean squared error or L2 loss, for regression tasks.
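Likewise, the L2 computation behind MSELoss (illustrative, not pytext's implementation):

```python
def mean_squared_error(preds, targets):
    """Mean of (prediction - target)**2 over the batch (L2 loss)."""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)
```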

class pytext.loss.loss.MultiLabelSoftMarginLoss(config=None, *args, **kwargs)[source]
Bases: pytext.loss.loss.Loss

class pytext.loss.loss.NLLLoss(config, ignore_index=-100, weight=None, *args, **kwargs)[source]
Bases: pytext.loss.loss.Loss

class pytext.loss.loss.PairwiseRankingLoss(config=None, *args, **kwargs)[source]
Bases: pytext.loss.loss.Loss

Given embeddings for a query, a positive response, and a negative response, computes the pairwise ranking hinge loss.
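The query/positive/negative formulation in the docstring is the classic triplet-style hinge; sketched here with a plain dot-product similarity (the similarity function and margin default are assumptions, not pytext's actual choices):

```python
def pairwise_ranking_hinge(query, pos, neg, margin=1.0):
    """Penalize the query unless the positive response scores higher than
    the negative response by at least `margin` (hypothetical margin)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return max(0.0, margin - dot(query, pos) + dot(query, neg))
```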
Module contents

All of the classes documented above under pytext.loss.loss are re-exported at the package level: for example, pytext.loss.CrossEntropyLoss is the same class as pytext.loss.loss.CrossEntropyLoss.