Metrics

Submodules

panoptica.metrics.assd module

panoptica.metrics.assd.__surface_distances(reference, prediction, voxelspacing=None, connectivity=1)

Computes the distances between the surface voxels of the binary objects in the prediction and their nearest partner surface voxels of the binary object in the reference.

panoptica.metrics.assd._average_surface_distance(reference, prediction, voxelspacing=None, connectivity=1)
panoptica.metrics.assd._average_symmetric_surface_distance(reference, prediction, voxelspacing=None, connectivity=1, *args) float

The ASSD is the average of the two directed average surface distances (ASD), computed bidirectionally between reference and prediction.
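As an illustrative sketch of this bidirectional average (not the library's exact implementation; `asd` and `assd` are hypothetical helper names), using SciPy's distance transform:

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def asd(a, b, voxelspacing=None):
    # Directed average surface distance: mean distance from the surface
    # voxels of `a` to the nearest surface voxel of `b`.
    surf_a = a & ~binary_erosion(a)
    surf_b = b & ~binary_erosion(b)
    dt_b = distance_transform_edt(~surf_b, sampling=voxelspacing)
    return dt_b[surf_a].mean()

def assd(reference, prediction, voxelspacing=None):
    # ASSD: average of the two directed average surface distances
    return (asd(prediction, reference, voxelspacing)
            + asd(reference, prediction, voxelspacing)) / 2.0
```

Identical masks yield an ASSD of 0; the value grows with the spatial disagreement between the two surfaces.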

panoptica.metrics.assd._compute_instance_average_symmetric_surface_distance(ref_labels: ndarray, pred_labels: ndarray, ref_instance_idx: int | None = None, pred_instance_idx: int | None = None, voxelspacing=None, connectivity=1)
panoptica.metrics.assd._distance_transform_edt(input_array: ndarray, sampling=None, return_distances=True, return_indices=False)

Computes the Euclidean distance transform and/or feature transform of a binary array.

This function calculates the Euclidean distance transform (EDT) of a binary array, which gives the distance from each non-zero point to the nearest zero point. It can also return the feature transform, which provides indices to the nearest non-zero point.

Parameters:
  • input_array (np.ndarray) – The input binary array where non-zero values are considered foreground.

  • sampling (optional) – A sequence or array that specifies the spacing along each dimension. If provided, scales the distances by the sampling value along each axis.

  • return_distances (bool, optional) – If True, returns the distance transform. Default is True.

  • return_indices (bool, optional) – If True, returns the feature transform with indices to the nearest foreground points. Default is False.

Returns:

If return_distances is True, returns the distance transform as an array. If return_indices is True, returns the feature transform. If both are True, returns a tuple with the distance and feature transforms.

Return type:

np.ndarray or tuple[np.ndarray, …]

Raises:

ValueError – If the input array is empty or has unsupported dimensions.
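The distance semantics described above match `scipy.ndimage.distance_transform_edt`; a brief illustration of the distance and `sampling` behavior:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

arr = np.array([[0, 1, 1],
                [0, 1, 1],
                [0, 0, 0]])
# Each non-zero voxel holds its Euclidean distance to the nearest zero voxel:
dist = distance_transform_edt(arr)
# [[0. 1. 2.]
#  [0. 1. 1.]
#  [0. 0. 0.]]

# `sampling` scales distances per axis, e.g. for anisotropic voxel spacing:
dist_aniso = distance_transform_edt(arr, sampling=[2.0, 1.0])
```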

panoptica.metrics.cldice module

panoptica.metrics.cldice._compute_centerline_dice(ref_labels: ndarray, pred_labels: ndarray, ref_instance_idx: int | None = None, pred_instance_idx: int | None = None) float

Compute the centerline Dice (clDice) coefficient between a specific pair of instances.

Parameters:
  • ref_labels (np.ndarray) – Reference instance labels.

  • pred_labels (np.ndarray) – Prediction instance labels.

  • ref_instance_idx (int) – Index of the reference instance.

  • pred_instance_idx (int) – Index of the prediction instance.

Returns:

clDice coefficient

Return type:

float

panoptica.metrics.cldice._compute_centerline_dice_coefficient(reference: ndarray, prediction: ndarray, *args) float
panoptica.metrics.cldice.cl_score(volume: ndarray, skeleton: ndarray)

Computes the skeleton-to-volume overlap, i.e. the fraction of skeleton voxels that lie inside the volume.

Parameters:
  • volume (np.ndarray) – Binary volume mask.

  • skeleton (np.ndarray) – Binary skeleton mask.

Returns:

Skeleton overlap, a fraction between 0 and 1.

Return type:

float
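A minimal sketch of the overlap computation, assuming binary NumPy arrays (an illustration, not the package's exact code):

```python
import numpy as np

def cl_score(volume, skeleton):
    # Fraction of skeleton voxels that fall inside the volume
    return np.sum(volume * skeleton) / np.sum(skeleton)

volume = np.zeros((5, 5), dtype=np.uint8)
volume[1:4, 1:4] = 1                # 3x3 foreground block
skeleton = np.zeros_like(volume)
skeleton[2, 0:4] = 1                # 4-voxel centerline, one voxel outside
print(cl_score(volume, skeleton))   # 0.75: 3 of 4 skeleton voxels inside
```

clDice itself is the harmonic mean of two such scores: the skeleton of the prediction scored against the reference volume (topology precision) and the skeleton of the reference scored against the prediction volume (topology sensitivity).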

panoptica.metrics.dice module

panoptica.metrics.dice._compute_dice_coefficient(reference: ndarray, prediction: ndarray, *args) float

Compute the Dice coefficient between two binary masks.

The Dice coefficient measures the similarity or overlap between two binary masks. It is defined as:

Dice = (2 * intersection) / (area_mask1 + area_mask2)

Parameters:
  • reference (np.ndarray) – Reference binary mask.

  • prediction (np.ndarray) – Prediction binary mask.

Returns:

Dice coefficient between the two binary masks. A value between 0 and 1, where higher values indicate better overlap and similarity between masks.

Return type:

float
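The formula above can be written directly in NumPy (an illustration, not the package's implementation):

```python
import numpy as np

def dice(reference, prediction):
    # Dice = 2 * |A ∩ B| / (|A| + |B|)
    intersection = np.logical_and(reference, prediction).sum()
    return 2.0 * intersection / (reference.sum() + prediction.sum())

a = np.array([1, 1, 1, 0], dtype=bool)
b = np.array([0, 1, 1, 1], dtype=bool)
print(dice(a, b))  # 2*2/(3+3) ≈ 0.667
```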

panoptica.metrics.dice._compute_instance_volumetric_dice(ref_labels: ndarray, pred_labels: ndarray, ref_instance_idx: int | None = None, pred_instance_idx: int | None = None) float

Compute the Dice coefficient between a specific pair of instances.

The Dice coefficient measures the similarity or overlap between two binary masks representing instances. It is defined as:

Dice = (2 * intersection) / (ref_area + pred_area)

Parameters:
  • ref_labels (np.ndarray) – Reference instance labels.

  • pred_labels (np.ndarray) – Prediction instance labels.

  • ref_instance_idx (int) – Index of the reference instance.

  • pred_instance_idx (int) – Index of the prediction instance.

Returns:

Dice coefficient between the specified instances. A value between 0 and 1, where higher values indicate better overlap and similarity between instances.

Return type:

float
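The instance variant amounts to selecting one label from each array as a binary mask and then applying the binary formula; a hedged sketch (`instance_dice` is a hypothetical name):

```python
import numpy as np

def instance_dice(ref_labels, pred_labels, ref_idx, pred_idx):
    # Select the requested instances as binary masks, then apply Dice
    ref_mask = ref_labels == ref_idx
    pred_mask = pred_labels == pred_idx
    intersection = np.logical_and(ref_mask, pred_mask).sum()
    return 2.0 * intersection / (ref_mask.sum() + pred_mask.sum())

ref = np.array([0, 1, 1, 2, 2])
pred = np.array([0, 1, 2, 2, 2])
print(instance_dice(ref, pred, ref_idx=1, pred_idx=1))  # 2*1/(2+1) ≈ 0.667
```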

panoptica.metrics.iou module

panoptica.metrics.iou._compute_instance_iou(reference_arr: ndarray, prediction_arr: ndarray, ref_instance_idx: int | None = None, pred_instance_idx: int | None = None) float

Compute Intersection over Union (IoU) between a specific pair of reference and prediction instances.

Parameters:
  • reference_arr (np.ndarray) – Reference instance labels.

  • prediction_arr (np.ndarray) – Prediction instance labels.

  • ref_instance_idx (int) – Index of the reference instance.

  • pred_instance_idx (int) – Index of the prediction instance.

Returns:

IoU between the specified instances.

Return type:

float

panoptica.metrics.iou._compute_iou(reference_arr: ndarray, prediction_arr: ndarray, *args) float

Compute Intersection over Union (IoU) between two masks.

Parameters:
  • reference_arr (np.ndarray) – Reference mask.

  • prediction_arr (np.ndarray) – Prediction mask.

Returns:

IoU between the two masks. A value between 0 and 1, where higher values indicate better overlap and similarity between masks.

Return type:

float
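A NumPy sketch of the IoU computation (an illustration, not the package's implementation):

```python
import numpy as np

def iou(reference, prediction):
    # IoU = |A ∩ B| / |A ∪ B|
    intersection = np.logical_and(reference, prediction).sum()
    union = np.logical_or(reference, prediction).sum()
    return intersection / union

a = np.array([1, 1, 1, 0], dtype=bool)
b = np.array([0, 1, 1, 1], dtype=bool)
print(iou(a, b))  # 2 / 4 = 0.5
```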

panoptica.metrics.metrics module

class panoptica.metrics.metrics.DirectValueMeta(cls, bases, classdict, **kwds)

Bases: EnumMeta

Metaclass that allows for directly getting an enum attribute

class panoptica.metrics.metrics.Evaluation_List_Metric(name_id: Metric, empty_list_std: float | None, value_list: list[float] | None, is_edge_case: bool = False, edge_case_result: float | None = None)

Bases: object

class panoptica.metrics.metrics.Evaluation_Metric(name_id: str, metric_type: MetricType, calc_func: Callable | None, long_name: str | None = None, was_calculated: bool = False, error: bool = False)

Bases: object

This represents a metric in the evaluation that is derived from other metrics or list metrics (no circular dependencies!)

Parameters:
  • name_id (str) – code-name of this metric, must be same as the member variable of PanopticResult

  • calc_func (Callable) – the function to calculate this metric based on the PanopticResult object

  • long_name (str | None, optional) – A longer descriptive name for printing/logging purposes. Defaults to None.

  • was_calculated (bool, optional) – Whether this metric has been calculated or not. Defaults to False.

  • error (bool, optional) – If True, the metric could not be calculated (because dependencies do not exist or themselves have this flag set to True). Defaults to False.

class panoptica.metrics.metrics.Metric(value)

Bases: _Enum_Compare

Enum containing the important metrics that can be calculated in the evaluator and used for thresholding in matching and evaluation. Never access the .value member directly; use the properties instead.

ASSD = _Metric.ASSD
DSC = _Metric.DSC
IOU = _Metric.IOU
RVD = _Metric.RVD
clDSC = _Metric.clDSC
property decreasing
property increasing
property name
score_beats_threshold(matching_score: float, matching_threshold: float) bool

Calculates whether a score beats a specified threshold

Parameters:
  • matching_score (float) – Metric score

  • matching_threshold (float) – Threshold to compare against

Returns:

True if the matching_score beats the threshold, False otherwise.

Return type:

bool
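The direction-aware comparison can be pictured as a standalone function; the exact comparison strictness (<= vs <) is an assumption in this sketch:

```python
def score_beats_threshold(matching_score, matching_threshold, decreasing):
    # For decreasing metrics (e.g. ASSD) lower is better, so the score must
    # be at or below the threshold; for increasing ones (e.g. IoU), at or above.
    if decreasing:
        return matching_score <= matching_threshold
    return matching_score >= matching_threshold

print(score_beats_threshold(0.8, 0.5, decreasing=False))  # True  (IoU-like)
print(score_beats_threshold(0.8, 0.5, decreasing=True))   # False (ASSD-like)
```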

exception panoptica.metrics.metrics.MetricCouldNotBeComputedException(*args: object)

Bases: Exception

Exception for when a Metric cannot be computed

class panoptica.metrics.metrics.MetricMode(value)

Bases: _Enum_Compare

Different aggregation modes for metric values (e.g. over all instances).

ALL = 1
AVG = 2
MAX = 6
MIN = 5
STD = 4
SUM = 3

class panoptica.metrics.metrics.MetricType(value)

Bases: _Enum_Compare

Different types of metrics

GLOBAL = 3
INSTANCE = 4
MATCHING = 2
NO_PRINT = 1

class panoptica.metrics.metrics._Metric(name: str, long_name: str, decreasing: bool, _metric_function: Callable)

Bases: object

Represents a metric with a name, direction (increasing or decreasing), and a calculation function.

This class provides a framework for defining and calculating metrics, which can be used to evaluate the similarity or performance between reference and prediction arrays. The metric direction indicates whether higher or lower values are better.

name

Short name of the metric.

Type:

str

long_name

Full descriptive name of the metric.

Type:

str

decreasing

If True, lower metric values are better; otherwise, higher values are preferred.

Type:

bool

_metric_function

A callable function that computes the metric between two input arrays.

Type:

Callable

Example

>>> my_metric = _Metric(name="accuracy", long_name="Accuracy", decreasing=False, _metric_function=accuracy_function)
>>> score = my_metric(reference_array, prediction_array)
>>> print(score)

_metric_function: Callable
decreasing: bool
property increasing

Indicates if higher values of the metric are better.

Returns:

True if increasing values are preferred, otherwise False.

Return type:

bool

long_name: str
name: str
score_beats_threshold(matching_score: float, matching_threshold: float) bool

Determines if a matching score meets a specified threshold.

Parameters:
  • matching_score (float) – The score to evaluate.

  • matching_threshold (float) – The threshold value to compare against.

Returns:

True if the score meets the threshold, taking into account the metric’s preferred direction.

Return type:

bool

panoptica.metrics.relative_volume_difference module

panoptica.metrics.relative_volume_difference._compute_instance_relative_volume_difference(ref_labels: ndarray, pred_labels: ndarray, ref_instance_idx: int | None = None, pred_instance_idx: int | None = None) float

Compute the relative volume difference (RVD) between a specific pair of instances.

The relative volume difference measures the prediction volume relative to the reference volume: values >0 indicate oversegmentation, values <0 undersegmentation. It is defined as:

RVD = (pred_volume - ref_volume) / ref_volume

Parameters:
  • ref_labels (np.ndarray) – Reference instance labels.

  • pred_labels (np.ndarray) – Prediction instance labels.

  • ref_instance_idx (int) – Index of the reference instance.

  • pred_instance_idx (int) – Index of the prediction instance.

Returns:

Relative volume difference between the specified instances. A value of zero means a perfect volume match, while >0 indicates oversegmentation and <0 undersegmentation.

Return type:

float

panoptica.metrics.relative_volume_difference._compute_relative_volume_difference(reference: ndarray, prediction: ndarray, *args) float

Compute the relative volume difference between two binary masks.

The relative volume difference is the prediction volume in relation to the reference volume: values >0 indicate oversegmentation, values <0 undersegmentation. It is defined as:

RVD = (pred_volume - ref_volume) / ref_volume

Parameters:
  • reference (np.ndarray) – Reference binary mask.

  • prediction (np.ndarray) – Prediction binary mask.

Returns:

Relative volume difference between the two binary masks. A value of zero means a perfect volume match, while >0 indicates oversegmentation and <0 undersegmentation.

Return type:

float
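The formula above maps directly onto voxel counts; a minimal NumPy sketch (an illustration, not the package's implementation):

```python
import numpy as np

def relative_volume_difference(reference, prediction):
    # RVD = (pred_volume - ref_volume) / ref_volume
    ref_volume = np.count_nonzero(reference)
    pred_volume = np.count_nonzero(prediction)
    return (pred_volume - ref_volume) / ref_volume

ref = np.zeros((4, 4), dtype=bool);  ref[:2, :] = True   # 8 voxels
pred = np.zeros((4, 4), dtype=bool); pred[:3, :] = True  # 12 voxels
print(relative_volume_difference(ref, pred))  # 0.5 -> 50% oversegmented
```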