Contributing guide#

Scanpy provides extensive developer documentation, most of which applies to this project, too. This document does not reproduce that entire content. Instead, it aims to summarize the most important information to get you started on contributing.

We assume that you are already familiar with git and with making pull requests on GitHub. If not, please refer to the scanpy developer guide.

Installing dev dependencies#

In addition to the packages needed to use this package, you need additional Python packages to run tests and build the documentation.

We strongly recommend using Hatch as a project manager, which will manage separate virtual environments for development, testing, and documentation to avoid dependency conflicts.
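For example, once Hatch is installed, you can inspect and enter the managed environments (these are standard Hatch CLI commands; the exact environments available depend on this project's pyproject.toml):

```shell
# List the environments and scripts configured for this project.
hatch env show

# Spawn a shell inside the default development environment.
hatch shell
```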

With uv#

Alternatively, you can use uv to set up a single environment:

uv sync --all-extras

With pip#

cd CREsted
pip install -e ".[dev,test,doc]"

Code-style#

This package uses pre-commit to enforce a consistent code style. On every commit, pre-commit checks will either automatically fix issues with the code or raise an error.

To enable pre-commit locally, simply run

pre-commit install

in the root of the repository. Pre-commit will automatically download all dependencies when it is run for the first time.
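By default, pre-commit only checks the files touched by a commit. After enabling it for the first time, it can be useful to run all hooks against the whole repository once:

```shell
# Run every configured hook on every file in the repository.
pre-commit run --all-files
```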

Alternatively, you can rely on the pre-commit.ci service enabled on GitHub. If you didn’t run pre-commit before pushing changes to GitHub, it will automatically commit fixes to your pull request or show an error message.

If pre-commit.ci added a commit on a branch you are still working on locally, simply use

git pull --rebase

to integrate the changes into yours. While pre-commit.ci is useful, we strongly encourage installing and running pre-commit locally first to understand its usage.

Finally, most editors have an autoformat on save feature. Consider enabling this option for ruff and biome.

Writing tests#

This package uses pytest for automated testing. Please write tests for every function added to the package.
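A minimal sketch of such a test is shown below. The function under test here is a self-contained stand-in, not part of the CREsted API; in a real test you would import the function from the package instead of defining it.

```python
# In a file named test_*.py so pytest collects it automatically.
def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence (stand-in for a real package function)."""
    complement = {"A": "T", "T": "A", "C": "G", "G": "C"}
    return "".join(complement[base] for base in reversed(seq))


def test_reverse_complement():
    # Check a known input/output pair.
    assert reverse_complement("ATGC") == "GCAT"


def test_reverse_complement_is_involution():
    # Applying the operation twice should return the original sequence.
    assert reverse_complement(reverse_complement("GATTACA")) == "GATTACA"
```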

Most IDEs integrate with pytest and provide a GUI to run tests. Alternatively, you can run all tests from the command line as described below.

With Hatch (Recommended)#

CREsted supports both TensorFlow and PyTorch backends. The test environment matrix tests across Python versions (3.11, 3.12, 3.13) and backends (tensorflow, pytorch):

hatch test                                      # Quick test (auto-selects compatible environment)
hatch test --all                                # Run tests on all Python versions and backends
hatch test -i backend=tensorflow                # Test only with TensorFlow backend
hatch test -i backend=pytorch                   # Test only with PyTorch backend
hatch run hatch-test.py3.11-tensorflow:run      # Specific Python + backend combination

With uv#

First install a backend, then run tests:

uv pip install --system tensorflow  # or torch
uv run pytest

With pip#

First install a backend, then run tests:

pip install tensorflow  # or torch
pytest

Continuous integration#

Continuous integration will automatically run the tests on all pull requests and test against the minimum and maximum supported Python versions.

Publishing a release#

Updating the version number#

Before making a release, you need to update the version number in the pyproject.toml file. Please adhere to Semantic Versioning; in brief:

Given a version number MAJOR.MINOR.PATCH, increment the:

  1. MAJOR version when you make incompatible API changes,

  2. MINOR version when you add functionality in a backwards compatible manner, and

  3. PATCH version when you make backwards compatible bug fixes.

Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
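For illustration, a backwards-compatible new feature would bump the minor version in pyproject.toml (the version numbers below are made up):

```toml
[project]
name = "crested"
# e.g. 1.4.2 -> 1.5.0 for a backwards-compatible new feature,
# or 1.4.2 -> 1.4.3 for a backwards-compatible bug fix
version = "1.5.0"
```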

Once you are done, commit and push your changes and navigate to the “Releases” page of this project on GitHub. Specify vX.X.X as a tag name and create a release. For more information, see managing GitHub releases. This will automatically create a git tag and trigger a GitHub workflow that creates a release on PyPI.

Writing documentation#

Please write documentation for new or changed features and use cases. This project uses sphinx to build its documentation.

See the scanpy developer docs for more information on how to write documentation.

Including new functions in the documentation#

All functions’ and objects’ documentation is automatically generated by [sphinx.ext.autodoc](https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html) and [sphinx.ext.autosummary](https://www.sphinx-doc.org/en/master/usage/extensions/autosummary.html) from the docstrings. We automatically generate the docs for everything in pp, pl, tl and utils recursively; that means that just writing the docstring is enough to have it show up on Read the Docs. However, top-level functions, like crested.import_bigwigs, do need to be listed in autosummary lists manually; see docs/api/datasets.md for an example. (Sub)modules are documented by their docstring in __init__.py.
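For illustration, such a manual autosummary entry might look like the following in reST form (the `:toctree:` target is an assumption; check docs/api/datasets.md for the actual pattern used in this project):

```rst
.. autosummary::
    :toctree: _autosummary

    crested.import_bigwigs
```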

These recursive pages are created using docs/_templates/custom-module-template.rst, which is based on Jinja. See the Jinja template tutorial for an introduction. The template itself is derived from existing examples, but adapted for our package. The most notable change is that we strip crested. from all titles for brevity and hard-code certain prefixes for submodules, to e.g. show ‘Preprocessing: pp’ rather than just ‘pp’.

Tutorials with myst-nb and jupyter notebooks#

The documentation is set up to render jupyter notebooks stored in the docs/notebooks directory using myst-nb. Currently, only notebooks in .ipynb format are supported; they are included with both their input and output cells. It is your responsibility to update and re-run the notebook whenever necessary.

If you are interested in automatically running notebooks as part of the continuous integration, please check out this feature request in the cookiecutter-scverse repository.

Hints#

  • If you refer to objects from other packages, please add an entry to intersphinx_mapping in docs/conf.py. Only if you do so can sphinx automatically create a link to the external documentation.

  • If building the documentation fails because of a missing link that is outside your control, you can add an entry to the nitpick_ignore list in docs/conf.py.
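Both options live in docs/conf.py and use standard Sphinx configuration values. The entries below are purely illustrative examples, not this project's actual configuration:

```python
# docs/conf.py (excerpt) -- example entries only.

# Map package names to their external documentation so sphinx can
# resolve cross-references like :class:`numpy.ndarray`.
intersphinx_mapping = {
    "python": ("https://docs.python.org/3", None),
    "numpy": ("https://numpy.org/doc/stable/", None),
    "anndata": ("https://anndata.readthedocs.io/en/stable/", None),
}

# Suppress "reference target not found" warnings for links that
# cannot be resolved and are outside our control.
nitpick_ignore = [
    ("py:class", "some_external.PrivateType"),  # hypothetical example entry
]
```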

Building the docs locally#

With Hatch#

hatch run docs:build
open docs/_build/html/index.html

With uv or pip#

cd docs
make html
open _build/html/index.html