Use case tutorials
The following tutorials describe some more specific use cases that you may find helpful.
Tracking experiments
- Track and organize model-training runs – in this tutorial, we train a classifier model and explore the central experiment-tracking features of Neptune (a minimal logging sketch follows this list).
- Track distributed training jobs – learn how to track metadata from single-node, multi-node, or multi-GPU jobs.
- Log in a sequential pipeline – we demonstrate how to access the same run from multiple scripts and organize the metadata per pipeline step.
- Track and visualize cross-validation results – see how to use Neptune namespaces to organize cross-validation metadata.
- Track HPO jobs – use Neptune to track metadata using either a single run or separate runs for each trial.
- Monitor model training live – learn how to use Neptune to monitor metrics while training is in progress.
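For orientation before diving into these tutorials, here is a minimal sketch of the basic logging pattern they build on. It assumes the `neptune` Python client; the project name, parameter values, and metric values are placeholders.

```python
import neptune

# Start a run in your project (placeholder name; the API token is typically
# read from the NEPTUNE_API_TOKEN environment variable).
run = neptune.init_run(project="workspace/project")

# Log hyperparameters as a dictionary under a namespace of your choice.
run["parameters"] = {"lr": 0.001, "batch_size": 64, "epochs": 10}

# Append metric values during training; each call adds a point to the series.
for epoch in range(10):
    train_loss = 1.0 / (epoch + 1)  # placeholder metric value
    run["train/loss"].append(train_loss)

# Stop the run so all queued data is synced before the script exits.
run.stop()
```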
Reproducibility
- Reproduce a run – you can reproduce any run by retrieving its metadata and logging it into a newly created run (see the sketch after this list).
- Restart a run from a checkpoint – save checkpoints and, when necessary, resume the run from them.
- Re-run a failed training – model training runs don't always go as planned. Learn how to fetch the parameters and metadata of a failed Neptune run and use them for a new run.
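As a rough sketch of the pattern these reproducibility tutorials use, you can reconnect to an existing run, fetch its metadata, and reuse it in a new run. The run ID, project name, and "parameters" namespace below are assumptions for illustration.

```python
import neptune

# Reconnect to an existing (for example, failed) run by its ID.
# "CLS-123" and the project name are placeholders.
old_run = neptune.init_run(
    project="workspace/project",
    with_id="CLS-123",
    mode="read-only",
)
params = old_run["parameters"].fetch()  # fetch the logged hyperparameters
old_run.stop()

# Start a fresh run and reuse the fetched parameters for the new attempt.
new_run = neptune.init_run(project="workspace/project")
new_run["parameters"] = params
# ... retrain the model with `params` and log new metrics here ...
new_run.stop()
```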
Data versioning
These tutorials walk you through tracking and comparing artifact versions; a minimal sketch follows at the end of this section.
The related Colab notebooks and all necessary scripts are located in our GitHub examples repository.
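As a minimal sketch of what artifact tracking looks like (the file path and project name are placeholders):

```python
import neptune

run = neptune.init_run(project="workspace/project")

# Track a dataset file (or folder) as an artifact. Neptune stores its hash and
# metadata so you can later compare dataset versions across runs.
run["datasets/train"].track_files("data/train.csv")

run.stop()
```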