sorix.nn.loss ¶
MSELoss ¶
Computes the Mean Squared Error loss between the prediction and the target.
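sorix's implementation isn't shown here, so as an illustration only, the Mean Squared Error can be sketched in plain NumPy:

```python
import numpy as np

def mse_loss(y_pred, y_true):
    # Mean of the squared element-wise differences.
    return np.mean((y_pred - y_true) ** 2)

print(mse_loss(np.array([0.0, 2.0]), np.array([1.0, 0.0])))  # -> 2.5
```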
BCEWithLogitsLoss ¶
This loss combines a Sigmoid layer and the BCELoss in one single class. Computing both in one step is more numerically stable than applying a Sigmoid followed by a separate BCELoss, because the combined form avoids evaluating `exp` on large-magnitude logits.
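The standard way such a combined loss achieves stability is the identity BCE(σ(x), t) = max(x, 0) − x·t + log(1 + exp(−|x|)). This NumPy sketch illustrates that trick; it is not sorix's actual code:

```python
import numpy as np

def bce_with_logits(logits, targets):
    # Stable rewrite of -[t*log(sigmoid(x)) + (1-t)*log(1-sigmoid(x))]:
    # max(x, 0) - x*t + log1p(exp(-|x|)) never exponentiates a large
    # positive value, so it cannot overflow.
    return np.mean(np.maximum(logits, 0.0)
                   - logits * targets
                   + np.log1p(np.exp(-np.abs(logits))))
```

For moderate logits this agrees with the naive sigmoid-then-BCE computation to machine precision; for very large logits the naive form returns `inf` or `nan` while this one stays finite.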
CrossEntropyLoss ¶
Computes the cross entropy loss between input and target.
This criterion is useful when training a classification problem with C classes.
If provided, the optional argument weight should be a 1D Tensor assigning
weight to each of the classes. This is particularly useful for unbalanced
training sets.
The input y_pred is expected to contain raw, unnormalized scores (logits) for each class and has to be a Tensor of size (minibatch, C).
The targets are expected to be class indices in the range [0, C-1] or one-hot encoded values.
The loss can be described as:

L = -(1 / Σ_i w_{y_i}) * Σ_i w_{y_i} * log( exp(x_{i, y_i}) / Σ_j exp(x_{i, j}) )

where x_{i, j} is the score for class j of sample i, y_i is sample i's target class, and w_{y_i} is the weight assigned to that class.
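As a concrete reading of this formula (a NumPy sketch, not sorix's implementation), the per-sample log-softmax terms are gathered at the target indices and combined with a weighted mean:

```python
import numpy as np

def cross_entropy(y_pred, target, weight=None):
    # y_pred: (minibatch, C) raw scores; target: (minibatch,) class indices.
    n, C = y_pred.shape
    weight = np.ones(C) if weight is None else weight
    # Log-softmax with max-subtraction for numerical stability.
    shifted = y_pred - y_pred.max(axis=1, keepdims=True)
    log_softmax = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Per-sample class weights w_{y_i}; the loss is their weighted mean
    # of -log_softmax at the target class, normalized by sum(w_{y_i}).
    w = weight[target]
    return -(w * log_softmax[np.arange(n), target]).sum() / w.sum()
```

With uniform scores over C classes the loss reduces to log(C), independent of the class weights, which is a quick sanity check.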
Attributes:
- weight (Optional[Tensor]) – A manual rescaling weight given to each class. If given, has to be a Tensor of size C.
- one_hot (bool) – Whether the target labels are one-hot encoded.
Initializes the CrossEntropyLoss.
Parameters:
- weight (Optional[Tensor], default: None) – A manual rescaling weight given to each class. If given, has to be a Tensor of size C. Defaults to None.
- one_hot (bool, default: False) – Whether the target is one-hot encoded. Defaults to False.
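Since the two documented target formats carry the same information for hard labels, a one-hot target reduces to class indices via argmax. A small NumPy sketch of that equivalence (illustrative only, not tied to sorix's internals):

```python
import numpy as np

# Hard one-hot labels for a batch of 2 samples over C = 3 classes.
one_hot_target = np.array([[0, 0, 1],
                           [1, 0, 0]])
# Equivalent class-index form, in the range [0, C-1].
indices = one_hot_target.argmax(axis=1)  # -> array([2, 0])
```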