crested.tl.evaluate


crested.tl.evaluate(adata, model, metrics=None, split='test', return_metrics=False, **kwargs)

Calculate metrics on a dataset split (the test set by default).

If a list of models is provided, the predictions will be averaged across all models.

Parameters:
  • adata (AnnData) – The AnnData to retrieve ground truth and region info from. The chosen split value (default 'test') must be present in adata.var['split'].

  • model (Model | list[Model] | str) – A (list of) trained keras model(s) to make predictions with, or a name of a saved prediction layer.

  • metrics (TaskConfig | list[Metric | Loss] | None (default: None)) – A crested.tl.TaskConfig object, a list of keras metrics and/or losses, or None (in which case the metrics compiled with the model are used).

  • split (str | None (default: 'test')) – Which split to evaluate. Must be one of the values encoded in adata.var['split'], or None to evaluate the entire dataset.

  • return_metrics (bool (default: False)) – Whether to return a dict of the results.

  • kwargs – Arguments passed on to crested.tl.predict, such as batch_size or genome.

Returns:

If return_metrics=True, a dict of the form {metric_name: metric_value, …}.

Example

>>> crested.tl.evaluate(adata, model, config)
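
With return_metrics=True, the call returns a plain {metric_name: metric_value, …} dict, so the results can be post-processed with ordinary Python. A minimal sketch of turning such a result into a stable report (the metric names and values below are hypothetical, not output of a real model):

```python
# Hypothetical result of crested.tl.evaluate(adata, model, return_metrics=True);
# the metric names and values are made up for illustration only.
results = {"pearson_corr": 0.82, "mean_squared_error": 0.15, "cosine_sim": 0.79}

# Build an alphabetically sorted report so runs are easy to compare.
report = [f"{name}: {value:.3f}" for name, value in sorted(results.items())]
print("\n".join(report))
```

Sorting by metric name keeps the report order deterministic across runs, which makes diffs between evaluations of different models straightforward.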