A collection of evaluation metrics for assessing the performance of a trained model on the Cerebras Wafer-Scale Cluster.
AccuracyMetric | PerplexityMetric | DiceCoefficientMetric | MeanIOUMetric | FBetaScoreMetric
Metric(*args, **kwargs)
registry
property num_updates: int
Returns the number of times the metric was updated.
reset()
update(*args, **kwargs)
compute()
register_state(name, tensor, persistent=False)
forward(*args, **kwargs)
Updates and computes the metric value.
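As a rough illustration of the interface, here is a minimal sketch of a custom metric built on the methods listed above. The import path cerebras.pytorch.metrics is an assumption (it may differ between releases), as is the detail that a state registered via register_state is readable back as an attribute of the same name; the ErrorCountMetric class itself is hypothetical.

```python
import torch
from cerebras.pytorch.metrics import Metric  # assumed import path


class ErrorCountMetric(Metric):
    """Hypothetical metric that counts mispredicted samples."""

    def reset(self):
        # register_state(name, tensor, persistent=False) keeps the running
        # total as part of the metric's state; assumed to be readable back
        # as self.errors.
        self.register_state("errors", torch.tensor(0, dtype=torch.int64))

    def update(self, labels, predictions):
        # Accumulate the number of label/prediction mismatches seen so far.
        self.errors = self.errors + (labels != predictions).sum()

    def compute(self):
        # Return the accumulated count; num_updates separately tracks how
        # many times update() has been called.
        return self.errors


metric = ErrorCountMetric(name="errors")
```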
AccuracyMetric(*args, **kwargs)
Computes the accuracy of the model's predictions. The constructor builds a Metric instance.
Parameters: name – The name of the metric. This is used to reference the metric and does not have to be unique.
reset()
update(labels, predictions, weights=None, dtype=None)
compute()
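A brief usage sketch against the signatures above; the import path is an assumption and the example values are illustrative only.

```python
import torch
from cerebras.pytorch.metrics import AccuracyMetric  # assumed import path

accuracy = AccuracyMetric(name="eval/accuracy")

# One evaluation step: integer class predictions compared against labels.
labels = torch.tensor([1, 0, 1, 1])
predictions = torch.tensor([1, 0, 0, 1])
accuracy.update(labels, predictions, weights=None)  # weights=None counts every sample equally

print(accuracy.compute())  # 3 of 4 correct -> 0.75
```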
PerplexityMetric(*args, **kwargs)
Computes the perplexity of the model's predictions. The constructor builds a Metric instance.
Parameters: name – The name of the metric. This is used to reference the metric and does not have to be unique.
reset()
update(labels, loss, weights=None, dtype=None)
compute()
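Since update() consumes per-token losses and optional weights, the value being tracked is, in effect, the exponential of the weighted mean loss. A standalone sketch of that arithmetic (this is the standard definition of perplexity, not the class internals):

```python
import torch


def perplexity_from_losses(token_losses: torch.Tensor, weights: torch.Tensor) -> float:
    """Perplexity as exp of the weighted mean per-token loss."""
    total_loss = (token_losses * weights).sum()
    total_weight = weights.sum()
    return torch.exp(total_loss / total_weight).item()


# Three tokens, the last one masked out with a zero weight.
losses = torch.tensor([2.1, 1.7, 5.0])
weights = torch.tensor([1.0, 1.0, 0.0])
print(perplexity_from_losses(losses, weights))  # exp(1.9) ~ 6.69
```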
DiceCoefficientMetric(*args, **kwargs)
Dice Coefficient is a common evaluation metric for semantic image segmentation. It is defined as
Dice = 2 * true_positive / (2 * true_positive + false_positive + false_negative).
The predictions are accumulated in a confusion matrix, weighted by weights, and the dice coefficient is then calculated from it.
Parameters:
reset()
update(labels, predictions, weights=None, dtype=None)
Updates the dice coefficient metric.
Parameters:
compute()
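To make the confusion-matrix formulation concrete, the per-class Dice score can be read off an accumulated num_classes x num_classes matrix as follows. This is a standalone sketch of the formula above, not the class internals:

```python
import torch


def dice_from_confusion_matrix(cm: torch.Tensor) -> torch.Tensor:
    """Per-class Dice from a confusion matrix where cm[i, j] counts
    samples of true class i predicted as class j."""
    true_positive = torch.diag(cm)
    false_positive = cm.sum(dim=0) - true_positive  # predicted as c, but truly another class
    false_negative = cm.sum(dim=1) - true_positive  # truly c, predicted as another class
    return 2 * true_positive / (2 * true_positive + false_positive + false_negative)


# Two-class example accumulated over an evaluation run.
cm = torch.tensor([[90.0, 1.0], [1.0, 8.0]])
print(dice_from_confusion_matrix(cm))  # tensor([0.9890, 0.8889])
```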
MeanIOUMetric(*args, **kwargs)
Mean Intersection-Over-Union (mIOU) is a common evaluation metric for semantic image segmentation: it first computes the IOU for each semantic class and then averages over the classes. IOU is defined as
IOU = true_positive / (true_positive + false_positive + false_negative).
The predictions are accumulated in a confusion matrix, weighted by weights, and mIOU is then calculated from it.
For estimation over a stream of data, the metric maintains these accumulator variables, updates them on each update() call, and computes the mean IOU from them on demand.
If weights is None, weights default to 1. Use weights of 0 to mask values.
Parameters:
reset()
update(labels, predictions, weights=None, dtype=None)
Updates the mean IOU metric.
Parameters:
compute()
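The same confusion-matrix bookkeeping yields mIOU by averaging the per-class IOU values. Again, a standalone sketch of the formula rather than the class internals:

```python
import torch


def mean_iou_from_confusion_matrix(cm: torch.Tensor) -> float:
    """Mean IOU over classes from an accumulated confusion matrix."""
    true_positive = torch.diag(cm)
    false_positive = cm.sum(dim=0) - true_positive
    false_negative = cm.sum(dim=1) - true_positive
    iou = true_positive / (true_positive + false_positive + false_negative)
    return iou.mean().item()


cm = torch.tensor([[90.0, 1.0], [1.0, 8.0]])
print(mean_iou_from_confusion_matrix(cm))  # (90/92 + 8/10) / 2 ~ 0.889
```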
FBetaScoreMetric(*args, **kwargs)
Calculates the F-beta score from labels and predictions:
fbeta = (1 + beta^2) * (precision * recall) / ((beta^2 * precision) + recall)
where beta is some positive real factor.
Parameters:
num_classes – Number of classes.
beta – Beta coefficient in the F measure.
average_type – Defines the reduction that is applied. Should be one of the following:
'micro' [default]: Calculate the metric globally, across all samples and classes.
reset()
update(labels, predictions, dtype=None)
compute()
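A worked instance of the formula, showing how beta shifts the balance between precision and recall (beta = 1 reduces to the familiar F1 score):

```python
def fbeta_score(precision: float, recall: float, beta: float = 1.0) -> float:
    """F-beta from precision and recall; beta > 1 weights recall more heavily."""
    return (1 + beta**2) * (precision * recall) / ((beta**2 * precision) + recall)


print(fbeta_score(precision=0.8, recall=0.5, beta=1.0))  # ~ 0.615 (plain F1)
print(fbeta_score(precision=0.8, recall=0.5, beta=2.0))  # ~ 0.541 (recall-weighted)
```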
get_all_metrics()
Get all registered metrics.
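A sketch of how this might be used to collect every metric's value at the end of an evaluation run. Both the import path and the assumption that the function returns a name-to-metric mapping are not confirmed by this page.

```python
from cerebras.pytorch.metrics import get_all_metrics  # assumed import path

# Assumes get_all_metrics() returns a mapping of metric name -> metric instance.
results = {name: metric.compute() for name, metric in get_all_metrics().items()}
for name, value in results.items():
    print(f"{name}: {value}")
```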