Panoptica¶
Submodules¶
panoptica.instance_approximator module¶
- class panoptica.instance_approximator.ConnectedComponentsInstanceApproximator(cca_backend: CCABackend | None = None)¶
Bases:
InstanceApproximator
Instance approximator using connected components algorithm for panoptic segmentation evaluation.
- cca_backend¶
The connected components algorithm backend.
- Type:
CCABackend
- __init__(self, cca_backend: CCABackend) -> None: Initialize the ConnectedComponentsInstanceApproximator.
- _approximate_instances(self, semantic_pair: SemanticPair, **kwargs) -> UnmatchedInstancePair: Approximate instances using the connected components algorithm.
Example:
>>> cca_approximator = ConnectedComponentsInstanceApproximator(cca_backend=CCABackend.cc3d)
>>> semantic_pair = SemanticPair(...)
>>> result = cca_approximator.approximate_instances(semantic_pair)
- _approximate_instances(semantic_pair: SemanticPair, **kwargs) UnmatchedInstancePair ¶
Approximate instances using the connected components algorithm.
- Parameters:
semantic_pair (SemanticPair) – The semantic pair to be approximated.
**kwargs – Additional keyword arguments.
- Returns:
The result of the instance approximation.
- Return type:
UnmatchedInstancePair
- classmethod _yaml_repr(node) dict ¶
Abstract method for representing the class in YAML.
- Parameters:
node – The object instance to represent in YAML.
- Returns:
A dictionary representation of the class.
- Return type:
dict
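For intuition, the connected components step can be illustrated without any backend: a minimal, pure-Python sketch of 4-connected component labeling on a binary 2D mask. The function name `label_components` is illustrative; the real implementation delegates to a CCABackend such as cc3d.

```python
from collections import deque

def label_components(mask: list[list[int]]) -> list[list[int]]:
    """Label 4-connected foreground components of a binary 2D mask.

    Background (0) stays 0; each component gets a label 1, 2, ...
    """
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not labels[r][c]:
                current += 1  # start a new component
                labels[r][c] = current
                queue = deque([(r, c)])
                while queue:  # breadth-first flood fill
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels

mask = [
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
]
# Two separate components: the top-left blob and the right column.
print(label_components(mask))  # [[1, 1, 0, 0], [0, 1, 0, 2], [0, 0, 0, 2]]
```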
- class panoptica.instance_approximator.InstanceApproximator¶
Bases:
SupportsConfig
Abstract base class for instance approximation algorithms in panoptic segmentation evaluation.
- _approximate_instances(self, semantic_pair: SemanticPair, **kwargs) -> UnmatchedInstancePair | MatchedInstancePair: Abstract method to be implemented by subclasses for instance approximation.
- approximate_instances(self, semantic_pair: SemanticPair, **kwargs) -> UnmatchedInstancePair | MatchedInstancePair: Perform instance approximation on the given SemanticPair.
- Raises:
AssertionError – If there are negative values in the semantic maps, which is not allowed.
Example:
>>> class CustomInstanceApproximator(InstanceApproximator):
...     def _approximate_instances(self, semantic_pair: SemanticPair, **kwargs) -> UnmatchedInstancePair | MatchedInstancePair:
...         # Implementation of instance approximation algorithm
...         pass
...
>>> approximator = CustomInstanceApproximator()
>>> semantic_pair = SemanticPair(...)
>>> result = approximator.approximate_instances(semantic_pair)
- abstract _approximate_instances(semantic_pair: SemanticPair, **kwargs) UnmatchedInstancePair | MatchedInstancePair ¶
Abstract method to be implemented by subclasses for instance approximation.
- Parameters:
semantic_pair (SemanticPair) – The semantic pair to be approximated.
**kwargs – Additional keyword arguments.
- Returns:
The result of the instance approximation.
- Return type:
UnmatchedInstancePair | MatchedInstancePair
- _yaml_repr(node) dict ¶
Abstract method for representing the class in YAML.
- Parameters:
node – The object instance to represent in YAML.
- Returns:
A dictionary representation of the class.
- Return type:
dict
- approximate_instances(semantic_pair: SemanticPair, verbose: bool = False, **kwargs) UnmatchedInstancePair | MatchedInstancePair ¶
Perform instance approximation on the given SemanticPair.
- Parameters:
semantic_pair (SemanticPair) – The semantic pair to be approximated.
verbose (bool, optional) – Whether to print verbose output. Defaults to False.
**kwargs – Additional keyword arguments.
- Returns:
The result of the instance approximation.
- Return type:
UnmatchedInstancePair | MatchedInstancePair
- Raises:
AssertionError – If there are negative values in the semantic maps, which is not allowed.
panoptica.instance_evaluator module¶
- panoptica.instance_evaluator._evaluate_instance(reference_arr: ndarray, prediction_arr: ndarray, ref_idx: int, eval_metrics: list[Metric]) dict[Metric, float] ¶
Evaluate a single instance.
- Parameters:
reference_arr (np.ndarray) – Reference instance segmentation mask.
prediction_arr (np.ndarray) – Predicted instance segmentation mask.
ref_idx (int) – The label of the current instance.
eval_metrics (list[Metric]) – The metrics to compute for this instance.
- Returns:
Dictionary mapping each computed metric to its value for this instance.
- Return type:
dict[Metric, float]
- panoptica.instance_evaluator.evaluate_matched_instance(matched_instance_pair: MatchedInstancePair, eval_metrics: list[Metric] = [Metric.DSC, Metric.IOU, Metric.ASSD], decision_metric: Metric | None = Metric.IOU, decision_threshold: float | None = None, **kwargs) EvaluateInstancePair ¶
Evaluate a given MatchedInstancePair given metrics and decision threshold.
- Parameters:
matched_instance_pair (MatchedInstancePair) – The matched instance pair to be evaluated.
eval_metrics (list[Metric]) – The metrics to compute for each matched instance.
decision_metric (Metric | None) – The metric used to decide whether a match counts as a true positive.
decision_threshold (float | None) – The threshold applied to the decision metric.
- Returns:
Evaluated pair of instances.
- Return type:
EvaluateInstancePair
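For intuition, the per-instance overlap metrics reduce to simple set arithmetic. A minimal sketch on flat label lists (the function name `instance_overlap` and the returned keys are illustrative, not the library's API):

```python
def instance_overlap(reference: list[int], prediction: list[int], label: int) -> dict[str, float]:
    """Compute Dice (DSC) and IoU for one instance label between two flat masks."""
    ref = {i for i, v in enumerate(reference) if v == label}
    pred = {i for i, v in enumerate(prediction) if v == label}
    inter = len(ref & pred)   # overlap voxels
    union = len(ref | pred)   # combined voxels
    return {
        "dsc": 2 * inter / (len(ref) + len(pred)) if ref or pred else 0.0,
        "iou": inter / union if union else 0.0,
    }

ref =  [0, 1, 1, 1, 0, 2]
pred = [0, 1, 1, 0, 0, 2]
# Label 1: 3 reference voxels, 2 predicted, 2 overlapping.
print(instance_overlap(ref, pred, 1))  # DSC = 2*2/(3+2) = 0.8, IoU = 2/3
```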
panoptica.instance_matcher module¶
- class panoptica.instance_matcher.DesperateMarriageMatching¶
Bases:
InstanceMatchingAlgorithm
- class panoptica.instance_matcher.InstanceMatchingAlgorithm¶
Bases:
SupportsConfig
Abstract base class for instance matching algorithms in panoptic segmentation evaluation.
- _match_instances(self, unmatched_instance_pair: UnmatchedInstancePair, **kwargs) -> InstanceLabelMap: Abstract method to be implemented by subclasses for instance matching.
- match_instances(self, unmatched_instance_pair: UnmatchedInstancePair, **kwargs) -> MatchedInstancePair: Perform instance matching on the given UnmatchedInstancePair.
Example:
>>> class CustomInstanceMatcher(InstanceMatchingAlgorithm):
...     def _match_instances(self, unmatched_instance_pair: UnmatchedInstancePair, **kwargs) -> InstanceLabelMap:
...         # Implementation of instance matching algorithm
...         pass
...
>>> matcher = CustomInstanceMatcher()
>>> unmatched_instance_pair = UnmatchedInstancePair(...)
>>> result = matcher.match_instances(unmatched_instance_pair)
- abstract _match_instances(unmatched_instance_pair: UnmatchedInstancePair, **kwargs) InstanceLabelMap ¶
Abstract method to be implemented by subclasses for instance matching.
- Parameters:
unmatched_instance_pair (UnmatchedInstancePair) – The unmatched instance pair to be matched.
**kwargs – Additional keyword arguments.
- Returns:
The result of the instance matching.
- Return type:
InstanceLabelMap
- _yaml_repr(node) dict ¶
Abstract method for representing the class in YAML.
- Parameters:
node – The object instance to represent in YAML.
- Returns:
A dictionary representation of the class.
- Return type:
dict
- match_instances(unmatched_instance_pair: UnmatchedInstancePair, **kwargs) MatchedInstancePair ¶
Perform instance matching on the given UnmatchedInstancePair.
- Parameters:
unmatched_instance_pair (UnmatchedInstancePair) – The unmatched instance pair to be matched.
**kwargs – Additional keyword arguments.
- Returns:
The result of the instance matching.
- Return type:
MatchedInstancePair
- class panoptica.instance_matcher.MatchUntilConvergenceMatching¶
Bases:
InstanceMatchingAlgorithm
- class panoptica.instance_matcher.MaximizeMergeMatching(matching_metric: Metric = Metric.IOU, matching_threshold: float = 0.5)¶
Bases:
InstanceMatchingAlgorithm
Instance matching algorithm that performs many-to-one matching based on the given matching metric. Merges a prediction into an existing match only if the combined instance metric exceeds the individual one. Matches only if at least one instance exceeds the matching threshold.
- _match_instances(self, unmatched_instance_pair: UnmatchedInstancePair, **kwargs) -> InstanceLabelMap: Perform many-to-one instance matching based on the matching metric.
- Raises:
AssertionError – If the specified matching threshold is not within the valid range.
- _match_instances(unmatched_instance_pair: UnmatchedInstancePair, **kwargs) InstanceLabelMap ¶
Perform many-to-one instance matching based on the configured matching metric.
- Parameters:
unmatched_instance_pair (UnmatchedInstancePair) – The unmatched instance pair to be matched.
**kwargs – Additional keyword arguments.
- Returns:
The result of the instance matching.
- Return type:
InstanceLabelMap
- classmethod _yaml_repr(node) dict ¶
Abstract method for representing the class in YAML.
- Parameters:
node – The object instance to represent in YAML.
- Returns:
A dictionary representation of the class.
- Return type:
dict
- new_combination_score(pred_labels: list[int], new_pred_label: int, ref_label: int, unmatched_instance_pair: UnmatchedInstancePair)¶
- class panoptica.instance_matcher.NaiveThresholdMatching(matching_metric: Metric = Metric.IOU, matching_threshold: float = 0.5, allow_many_to_one: bool = False)¶
Bases:
InstanceMatchingAlgorithm
Instance matching algorithm that performs one-to-one matching based on the configured matching metric (IoU by default).
- matching_threshold¶
The metric threshold for matching instances.
- Type:
float
- __init__(self, matching_metric: Metric = Metric.IOU, matching_threshold: float = 0.5, allow_many_to_one: bool = False) -> None: Initialize the NaiveThresholdMatching instance.
- _match_instances(self, unmatched_instance_pair: UnmatchedInstancePair, **kwargs) -> InstanceLabelMap: Perform one-to-one instance matching based on the matching metric.
- Raises:
AssertionError – If the specified matching threshold is not within the valid range.
Example:
>>> matcher = NaiveThresholdMatching(matching_threshold=0.6)
>>> unmatched_instance_pair = UnmatchedInstancePair(...)
>>> result = matcher.match_instances(unmatched_instance_pair)
- _match_instances(unmatched_instance_pair: UnmatchedInstancePair, **kwargs) InstanceLabelMap ¶
Perform one-to-one instance matching based on the configured matching metric.
- Parameters:
unmatched_instance_pair (UnmatchedInstancePair) – The unmatched instance pair to be matched.
**kwargs – Additional keyword arguments.
- Returns:
The result of the instance matching.
- Return type:
InstanceLabelMap
- classmethod _yaml_repr(node) dict ¶
Abstract method for representing the class in YAML.
- Parameters:
node – The object instance to represent in YAML.
- Returns:
A dictionary representation of the class.
- Return type:
dict
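The one-to-one threshold matching can be sketched in plain Python: given pairwise metric scores, take pairs in descending score order and assign each reference instance at most one prediction whose score clears the threshold. This is a simplified stand-in with invented names, not the library's implementation:

```python
def naive_threshold_match(scores: dict[tuple[int, int], float],
                          threshold: float = 0.5) -> dict[int, int]:
    """One-to-one matching: map each reference label to at most one
    prediction label, greedily by descending score, skipping pairs
    below the threshold or already matched on either side."""
    matches: dict[int, int] = {}
    used_preds: set[int] = set()
    for (ref, pred), score in sorted(scores.items(), key=lambda kv: -kv[1]):
        if score < threshold:
            break  # sorted descending, nothing later can clear the threshold
        if ref not in matches and pred not in used_preds:
            matches[ref] = pred
            used_preds.add(pred)
    return matches

# (reference label, prediction label) -> IoU-style score
scores = {(1, 10): 0.9, (1, 11): 0.7, (2, 11): 0.6, (3, 12): 0.3}
print(naive_threshold_match(scores, threshold=0.5))  # {1: 10, 2: 11}
```

Instance 3 stays unmatched because its best score (0.3) is below the threshold; allowing many-to-one matching (as `allow_many_to_one=True` suggests) would drop the `used_preds` check.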
- panoptica.instance_matcher.map_instance_labels(processing_pair: UnmatchedInstancePair, labelmap: InstanceLabelMap) MatchedInstancePair ¶
Map instance labels based on the provided labelmap and create a MatchedInstancePair.
- Parameters:
processing_pair (UnmatchedInstancePair) – The unmatched instance pair containing original labels.
labelmap (InstanceLabelMap) – The instance label map obtained from instance matching.
- Returns:
The result of mapping instance labels.
- Return type:
MatchedInstancePair
Example:
>>> unmatched_instance_pair = UnmatchedInstancePair(...)
>>> labelmap = [([1, 2], [3, 4]), ([5], [6])]
>>> result = map_instance_labels(unmatched_instance_pair, labelmap)
panoptica.panoptica_aggregator module¶
- class panoptica.panoptica_aggregator.Panoptica_Aggregator(panoptica_evaluator: Panoptica_Evaluator, output_file: Path | str, log_times: bool = False, continue_file: bool = True)¶
Bases:
object
Aggregator that manages evaluations and saves resulting metrics per sample.
This class interfaces with the Panoptica_Evaluator to perform evaluations, store results, and manage file outputs for statistical analysis.
- __exist_handler()¶
Handles cleanup upon program exit by removing the temporary output buffer file.
- _save_one_subject(subject_name, result_grouped)¶
Saves the evaluation results for a single subject.
- Parameters:
subject_name (str) – The name of the subject whose results are being saved.
result_grouped (dict) – A dictionary of grouped results from the evaluation.
- evaluate(prediction_arr: ndarray, reference_arr: ndarray, subject_name: str)¶
Evaluates a single case using the provided prediction and reference arrays.
- Parameters:
prediction_arr (np.ndarray) – The array containing the predicted segmentation.
reference_arr (np.ndarray) – The array containing the ground truth segmentation.
subject_name (str) – A unique name for the sample being evaluated.
- Raises:
ValueError – If the subject name has already been evaluated or is in process.
- make_statistic() Panoptica_Statistic ¶
Generates statistics from the aggregated evaluation results.
- Returns:
The statistics object containing the results.
- Return type:
Panoptica_Statistic
- property panoptica_evaluator¶
- panoptica.panoptica_aggregator._load_first_column_entries(file: str | Path)¶
Loads the entries from the first column of a TSV file.
NOT THREAD SAFE BY ITSELF!
- Parameters:
file (str | Path) – The path to the file from which to load entries.
- Returns:
A list of entries from the first column of the file.
- Return type:
list
- Raises:
AssertionError – If the file contains duplicate entries.
- panoptica.panoptica_aggregator._read_first_row(file: str | Path)¶
Reads the first row of a TSV file.
NOT THREAD SAFE BY ITSELF!
- Parameters:
file (str | Path) – The path to the file from which to read the first row.
- Returns:
The first row of the file as a list of strings.
- Return type:
list
- panoptica.panoptica_aggregator._write_content(file: str | Path, content: list[list[str]])¶
Writes content to a TSV file.
- Parameters:
file (str | Path) – The path to the file where content will be written.
content (list[list[str]]) – A list of lists containing the rows of data to write.
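Writing rows to a TSV file, as _write_content does, maps directly onto the standard csv module with a tab delimiter. A minimal sketch (the helper name `write_tsv` and the file name are illustrative):

```python
import csv
import tempfile
from pathlib import Path

def write_tsv(file, content: list[list[str]]) -> None:
    """Write rows of strings to a tab-separated file, one row per line."""
    with open(file, "w", newline="") as f:
        writer = csv.writer(f, delimiter="\t", lineterminator="\n")
        writer.writerows(content)

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "results.tsv"
    write_tsv(path, [["subject", "DSC"], ["case_01", "0.91"]])
    print(path.read_text())  # header row, then one data row, tab-separated
```

`newline=""` is the csv module's recommended way to open files, and the explicit `lineterminator="\n"` avoids csv's default `\r\n` row endings.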
panoptica.panoptica_evaluator module¶
- class panoptica.panoptica_evaluator.Panoptica_Evaluator(expected_input: InputType = InputType.MATCHED_INSTANCE, instance_approximator: InstanceApproximator | None = None, instance_matcher: InstanceMatchingAlgorithm | None = None, edge_case_handler: EdgeCaseHandler | None = None, segmentation_class_groups: SegmentationClassGroups | None = None, instance_metrics: list[Metric] = [Metric.DSC, Metric.IOU, Metric.ASSD, Metric.RVD], global_metrics: list[Metric] = [Metric.DSC], decision_metric: Metric | None = None, decision_threshold: float | None = None, save_group_times: bool = False, log_times: bool = False, verbose: bool = False)¶
Bases:
SupportsConfig
- _evaluate_group(group_name: str, label_group: LabelGroup, processing_pair, result_all: bool = True, verbose: bool | None = None, log_times: bool | None = None, save_group_times: bool = False)¶
- _set_instance_approximator(instance_approximator: InstanceApproximator)¶
- _set_instance_matcher(matcher: InstanceMatchingAlgorithm)¶
- classmethod _yaml_repr(node) dict ¶
Abstract method for representing the class in YAML.
- Parameters:
node – The object instance to represent in YAML.
- Returns:
A dictionary representation of the class.
- Return type:
dict
- evaluate(**kwargs)¶
- property resulting_metric_keys: list[str]¶
- property segmentation_class_groups_names: list[str]¶
- set_log_group_times(should_save: bool)¶
- panoptica.panoptica_evaluator._handle_zero_instances_cases(processing_pair: UnmatchedInstancePair | MatchedInstancePair, edge_case_handler: EdgeCaseHandler, global_metrics: list[Metric], eval_metrics: list[Metric] = [Metric.DSC, Metric.IOU, Metric.ASSD]) UnmatchedInstancePair | MatchedInstancePair | PanopticaResult ¶
Handle edge cases when comparing reference and prediction masks.
- Parameters:
processing_pair (UnmatchedInstancePair | MatchedInstancePair) – The processing pair whose reference and prediction instances are checked.
edge_case_handler (EdgeCaseHandler) – Handler that provides metric values for edge cases.
global_metrics (list[Metric]) – The global metrics to compute.
eval_metrics (list[Metric]) – The instance metrics to compute.
- Returns:
The unchanged processing pair, or a PanopticaResult if a zero-instance edge case applies.
- Return type:
UnmatchedInstancePair | MatchedInstancePair | PanopticaResult
- panoptica.panoptica_evaluator.panoptic_evaluate(input_pair: SemanticPair | UnmatchedInstancePair | MatchedInstancePair, instance_approximator: InstanceApproximator | None = None, instance_matcher: InstanceMatchingAlgorithm | None = None, instance_metrics: list[Metric] = [Metric.DSC, Metric.IOU, Metric.ASSD], global_metrics: list[Metric] = [Metric.DSC], decision_metric: Metric | None = None, decision_threshold: float | None = None, edge_case_handler: EdgeCaseHandler | None = None, log_times: bool = False, result_all: bool = True, verbose=False, verbose_calc=False, **kwargs) tuple[PanopticaResult, IntermediateStepsData] ¶
Perform panoptic evaluation on the given processing pair.
- Parameters:
input_pair (SemanticPair | UnmatchedInstancePair | MatchedInstancePair) – The processing pair to be evaluated.
instance_approximator (InstanceApproximator | None, optional) – The instance approximator used for approximating instances in the SemanticPair.
instance_matcher (InstanceMatchingAlgorithm | None, optional) – The instance matcher used for matching instances in the UnmatchedInstancePair.
decision_metric (Metric | None, optional) – The metric used to decide whether a matched instance counts as a true positive.
decision_threshold (float | None, optional) – The threshold applied to the decision metric.
**kwargs – Additional keyword arguments.
- Returns:
A tuple containing the panoptic result and the intermediate steps data.
- Return type:
tuple[PanopticaResult, IntermediateStepsData]
- Raises:
AssertionError – If the input processing pair does not match the expected types.
RuntimeError – If the end of the panoptic pipeline is reached without producing results.
Example:
>>> panoptic_evaluate(SemanticPair(...), instance_approximator=ConnectedComponentsInstanceApproximator(), decision_threshold=0.6)
(PanopticaResult(...), IntermediateStepsData(...))
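As a self-contained illustration of what the pipeline computes end to end (not the library's API; `toy_panoptic_eval` is an invented helper): pair up instances by IoU, threshold the matches one-to-one, then report PQ.

```python
def toy_panoptic_eval(reference: list[int], prediction: list[int],
                      threshold: float = 0.5) -> float:
    """Tiny end-to-end panoptic evaluation on flat instance-label lists.

    Matches instances one-to-one by descending IoU above `threshold`,
    then returns PQ = (sum of matched IoUs) / (tp + fp/2 + fn/2).
    """
    ref_labels = {v for v in reference if v}
    pred_labels = {v for v in prediction if v}

    def iou(a: int, b: int) -> float:
        ra = {i for i, v in enumerate(reference) if v == a}
        pb = {i for i, v in enumerate(prediction) if v == b}
        return len(ra & pb) / len(ra | pb)

    pairs = sorted(((iou(a, b), a, b) for a in ref_labels for b in pred_labels),
                   reverse=True)
    matched_ref, matched_pred, scores = set(), set(), []
    for score, a, b in pairs:
        if score < threshold:
            break
        if a not in matched_ref and b not in matched_pred:
            matched_ref.add(a)
            matched_pred.add(b)
            scores.append(score)

    tp = len(scores)
    fp = len(pred_labels) - tp
    fn = len(ref_labels) - tp
    denom = tp + 0.5 * fp + 0.5 * fn
    return sum(scores) / denom if denom else 0.0

ref =  [1, 1, 1, 0, 2, 2]
pred = [1, 1, 0, 0, 2, 2]
# Instance 2 matches exactly (IoU 1.0), instance 1 partially (IoU 2/3):
print(toy_panoptic_eval(ref, pred))  # (1 + 2/3) / 2 = 5/6
```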
panoptica.panoptica_result module¶
- class panoptica.panoptica_result.PanopticaResult(reference_arr: ndarray, prediction_arr: ndarray, num_pred_instances: int, num_ref_instances: int, tp: int, list_metrics: dict[Metric, list[float]], edge_case_handler: EdgeCaseHandler, global_metrics: list[Metric] = [], computation_time: float | None = None)¶
Bases:
object
- _add_metric(name_id: str, metric_type: MetricType, calc_func: Callable | None, long_name: str | None = None, default_value=None, was_calculated: bool = False)¶
Adds a new metric to the evaluation metrics.
- Parameters:
name_id (str) – The unique identifier for the metric.
metric_type (MetricType) – The type of the metric.
calc_func (Callable | None) – The function to calculate the metric.
long_name (str | None) – A longer, descriptive name for the metric.
default_value – The default value for the metric.
was_calculated (bool) – Indicates if the metric has been calculated.
- Returns:
The default value of the metric.
- _calc(k, v)¶
Attempts to get the value of a metric and captures any exceptions.
- Parameters:
k – The metric key.
v – The metric value.
- Returns:
A tuple indicating success or failure and the corresponding value or exception.
- _calc_global_bin_metric(metric: Metric, prediction_arr, reference_arr, do_binarize: bool = True)¶
Calculates a global binary metric based on predictions and references.
- Parameters:
metric (Metric) – The metric to compute.
prediction_arr – The predicted values.
reference_arr – The ground truth values.
do_binarize (bool) – Whether to binarize the input arrays. Defaults to True.
- Returns:
The calculated metric value.
- Raises:
MetricCouldNotBeComputedException – If the specified metric is not set.
- _calc_metric(metric_name: str, supress_error: bool = False)¶
Calculates a specific metric by its name.
- Parameters:
metric_name (str) – The name of the metric to calculate.
supress_error (bool) – If true, suppresses errors during calculation.
- Returns:
The calculated metric value or raises an exception if it cannot be computed.
- Raises:
MetricCouldNotBeComputedException – If the metric cannot be found.
- calculate_all(print_errors: bool = False)¶
Calculates all possible metrics that can be derived.
- Parameters:
print_errors (bool, optional) – If true, will print every metric that could not be computed and its reason. Defaults to False.
- property evaluation_metrics¶
- get_list_metric(metric: Metric, mode: MetricMode)¶
Retrieves a list of metrics based on the given metric type and mode.
- Parameters:
metric (Metric) – The metric to retrieve.
mode (MetricMode) – The mode of the metric.
- Returns:
The corresponding list of metrics.
- Raises:
MetricCouldNotBeComputedException – If the metric cannot be found.
- to_dict() dict ¶
Converts the metrics to a dictionary format.
- Returns:
A dictionary containing metric names and their values.
- panoptica.panoptica_result.fn(res: PanopticaResult)¶
- panoptica.panoptica_result.fp(res: PanopticaResult)¶
- panoptica.panoptica_result.pq(res: PanopticaResult)¶
- panoptica.panoptica_result.pq_cldsc(res: PanopticaResult)¶
- panoptica.panoptica_result.pq_dsc(res: PanopticaResult)¶
- panoptica.panoptica_result.prec(res: PanopticaResult)¶
- panoptica.panoptica_result.rec(res: PanopticaResult)¶
- panoptica.panoptica_result.rq(res: PanopticaResult)¶
Calculate the Recognition Quality (RQ) based on TP, FP, and FN.
- Returns:
Recognition Quality (RQ).
- Return type:
float
- panoptica.panoptica_result.sq(res: PanopticaResult)¶
- panoptica.panoptica_result.sq_assd(res: PanopticaResult)¶
- panoptica.panoptica_result.sq_assd_std(res: PanopticaResult)¶
- panoptica.panoptica_result.sq_cldsc(res: PanopticaResult)¶
- panoptica.panoptica_result.sq_cldsc_std(res: PanopticaResult)¶
- panoptica.panoptica_result.sq_dsc(res: PanopticaResult)¶
- panoptica.panoptica_result.sq_dsc_std(res: PanopticaResult)¶
- panoptica.panoptica_result.sq_rvd(res: PanopticaResult)¶
- panoptica.panoptica_result.sq_rvd_std(res: PanopticaResult)¶
- panoptica.panoptica_result.sq_std(res: PanopticaResult)¶
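These quality metrics follow the standard panoptic quality decomposition: RQ = tp / (tp + fp/2 + fn/2), SQ is the mean matching-metric score over true positives, and PQ = RQ × SQ. A minimal arithmetic sketch with invented helper names:

```python
def recognition_quality(tp: int, fp: int, fn: int) -> float:
    """RQ: F1-style detection score over matched instances."""
    denom = tp + 0.5 * fp + 0.5 * fn
    return tp / denom if denom else 0.0

def segmentation_quality(matched_scores: list[float]) -> float:
    """SQ: mean overlap score (e.g. IoU) over the true-positive matches."""
    return sum(matched_scores) / len(matched_scores) if matched_scores else 0.0

def panoptic_quality(tp: int, fp: int, fn: int, matched_scores: list[float]) -> float:
    """PQ = RQ * SQ."""
    return recognition_quality(tp, fp, fn) * segmentation_quality(matched_scores)

# 2 true positives with IoUs 0.8 and 0.6, 1 false positive, 1 false negative:
print(recognition_quality(2, 1, 1))           # 2/3
print(panoptic_quality(2, 1, 1, [0.8, 0.6]))  # (2/3) * 0.7
```

The `sq_dsc`, `sq_assd`, and similar variants in this module apply the same SQ averaging to other instance metrics (Dice, ASSD), with the `_std` variants reporting the spread instead of the mean.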
panoptica.panoptica_statistics module¶
- class panoptica.panoptica_statistics.Panoptica_Statistic(subj_names: list[str], value_dict: dict[str, dict[str, list[float]]])¶
Bases:
object
- _assertgroup(group)¶
- _assertmetric(metric)¶
- _assertsubject(subjectname)¶
- classmethod from_file(file: str)¶
- get(group, metric, remove_nones: bool = False) list[float] ¶
Returns the list of values for the given group and metric.
- Parameters:
group (str) – The group name.
metric (str) – The metric name.
remove_nones (bool, optional) – If True, drops None entries from the returned list. Defaults to False.
- Returns:
The values for the given group and metric.
- Return type:
list[float]
- get_across_groups(metric) list[float] ¶
Given a metric, returns the list of all values across all groups. Treat with care!
- Parameters:
metric (str) – The metric name.
- Returns:
All values for the given metric, concatenated across groups.
- Return type:
list[float]
- get_one_subject(subjectname: str)¶
Gets the values for one subject for each group and metric.
- Parameters:
subjectname (str) – The name of the subject.
- Returns:
The subject's value for each group and metric.
- get_summary(group, metric) ValueSummary ¶
- get_summary_across_groups() dict[str, ValueSummary] ¶
Calculates the average and standard deviation over all groups (group-wise average first, then averaged across groups).
- Returns:
A summary per metric, aggregated across groups.
- Return type:
dict[str, ValueSummary]
- get_summary_dict(include_across_group: bool = True) dict[str, dict[str, ValueSummary]] ¶
- get_summary_figure(metric: str, manual_metric_range: None | tuple[float, float] = None, name_method: str = 'Structure', horizontal: bool = True, sort: bool = True, title: str = '')¶
Returns a figure object that shows the given metric for each group and its standard deviation.
- Parameters:
metric (str) – The metric to plot.
manual_metric_range (None | tuple[float, float], optional) – If given, fixes the metric axis range.
name_method (str, optional) – Label for the group axis. Defaults to 'Structure'.
horizontal (bool, optional) – Whether to orient the plot horizontally. Defaults to True.
sort (bool, optional) – Whether to sort groups by value. Defaults to True.
title (str, optional) – The figure title.
- Returns:
The figure object.
- property groupnames¶
- property metricnames¶
- print_summary(ndigits: int = 3, only_across_groups: bool = True)¶
- property subjectnames¶
- class panoptica.panoptica_statistics.ValueSummary(value_list: list[float])¶
Bases:
object
- property avg: float¶
- property max: float¶
- property min: float¶
- property std: float¶
- property values: list[float]¶
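ValueSummary's properties correspond to the usual descriptive statistics. A minimal stand-in using only the standard library (illustrative, not the library's code; whether the library uses population or sample standard deviation is not documented here, so this sketch assumes the population form):

```python
import statistics

class SimpleValueSummary:
    """Descriptive statistics over a list of floats (illustrative stand-in)."""

    def __init__(self, value_list: list[float]):
        self._values = list(value_list)

    @property
    def values(self) -> list[float]:
        return self._values

    @property
    def avg(self) -> float:
        return statistics.fmean(self._values)

    @property
    def std(self) -> float:
        return statistics.pstdev(self._values)  # population std, see lead-in

    @property
    def min(self) -> float:
        return min(self._values)

    @property
    def max(self) -> float:
        return max(self._values)

s = SimpleValueSummary([0.8, 0.9, 1.0])
print(round(s.avg, 6), s.min, s.max)  # 0.9 0.8 1.0
```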
- panoptica.panoptica_statistics._flatten_extend(matrix)¶