Note: Neptune integrates with both pure PyTorch and many libraries from the PyTorch ecosystem. You may want to check out the following integrations:

For pure PyTorch integration, read on.

What will you learn?

You will learn how to use Neptune + PyTorch to help you keep track of your model training metadata.

With Neptune + PyTorch you can:

  • Log Model Configuration

  • Log hyperparameters

  • Log loss & metrics

  • Log training code and git information

  • Log images and predictions

  • Log artifacts (e.g., model weights, dataset versions)

  • Log 2-D/3-D tensors as images or 1-D tensors as metrics

You can log other metadata types like interactive charts, video, audio, and more. See What can you log and display?


This guide shows you how to:

  • Install Neptune

  • Connect Neptune to your PyTorch model training code and create a Run

  • Log the training loss and metrics

  • Upload training scripts and .git info to Neptune

  • Explore the metadata you logged in the Neptune UI

To see the full code example, click the links above; this guide mainly shows Neptune-specific code snippets.

Step 0: Before you start

You need to have Python 3.6+ and the following libraries installed:

  • neptune-client

  • torch

  • torchvision

pip install neptune-client torch torchvision

The code examples were tested using:

neptune-client==0.9.16 numpy==1.19.5 torch==1.8.1 torchvision==0.10.0

You need minimal familiarity with PyTorch. Go through the PyTorch tutorial to get started.

Step 1: Initialize a Neptune Run

Place this code snippet at the beginning of your script or notebook cell:

import neptune.new as neptune

run = neptune.init(project='<YOUR_WORKSPACE/YOUR_PROJECT>',
                   api_token='<YOUR_API_TOKEN>')

This opens a new Run in Neptune that allows you to log various objects.

You need to authenticate yourself and point to an existing project by passing your api_token and project name.

You can use api_token='ANONYMOUS' and project='common/pytorch-integration' to explore without having to create a Neptune account.


You can use namespaces to organize all your model training metadata into folders. For more, see Namespace handler.

Step 2: Log config & hyperparameters

You can assign your configuration variables and hyperparameters to Fields of a Run using the "=" operator. There are a few benefits to doing this:

  • Makes your code easier to read

  • Distinguishes single-value, or Atom, Fields (e.g., hyperparameters) from multi-value, or Series, Fields (e.g., losses and metrics), which you will see next

For more see the Parameters and configurations or Field types page.

run['config/dataset/path'] = data_dir
run['config/dataset/transforms'] = data_tfms # dict() object
run['config/dataset/size'] = dataset_size # dict() object
run['config/model'] = type(model).__name__
run['config/criterion'] = type(criterion).__name__
run['config/optimizer'] = type(optimizer).__name__
run['config/params'] = hparams # dict() object

Here is how it looks in the UI.
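For reference, the hparams object assigned to run['config/params'] above can be a plain dict; the keys and values below are illustrative, not required by Neptune:

```python
# Illustrative hyperparameter dict; the specific keys and values are assumptions
hparams = {
    "lr": 1e-3,
    "batch_size": 64,
    "epochs": 10,
    "optimizer": "Adam",
}

# Assigning a dict to a field creates namespaced sub-fields,
# so run['config/params'] = hparams produces fields like config/params/lr
```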

Step 3: Log losses and metrics

To log metrics and losses, use the .log() method in your training loop:

for epoch in range(epochs):
    for i, (x, y) in enumerate(trainloader, 0):
        outputs = model(x)
        loss = criterion(outputs, y)
        acc = (outputs.argmax(dim=1) == y).float().mean()
        run['training/batch/loss'].log(loss)  # Log batch loss
        run['training/batch/acc'].log(acc)    # Log batch accuracy
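The batch accuracy logged in the loop can be computed directly from the model outputs. A minimal sketch, assuming a classification model whose outputs are per-class scores (the helper name is ours, not part of the Neptune API):

```python
import torch

def batch_accuracy(outputs: torch.Tensor, labels: torch.Tensor) -> float:
    # Fraction of argmax predictions matching the labels in this batch
    preds = outputs.argmax(dim=1)
    return (preds == labels).float().mean().item()

# Toy batch: 3 samples, 2 classes; 2 of 3 predictions match the labels
outputs = torch.tensor([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
labels = torch.tensor([0, 1, 1])
acc = batch_accuracy(outputs, labels)
```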

Step 4: Stop logging to the Neptune Run

Once you are done logging, stop tracking the run with the stop() method, i.e. run.stop(). This is needed only when logging from a notebook environment. When logging from a script, Neptune automatically stops tracking once the script has finished executing.


Step 5: Run your training script or notebook cell and monitor your training in the Neptune UI

Run your script or notebook cell as you normally would.

After running your script or notebook cell, you will get a link, with common/pytorch-integration replaced by your project and PYROR-66 replaced by your run ID.

Click the link to open the Run in Neptune and watch your model training live.

Initially, it may be empty but keep the tab with the Run open to see your experiment metadata update in real-time.

More Options

Log model Architecture & Weights

It is always helpful for you or a team member to store the model architecture and the best weights file. Here are a few reasons why:

  • Allows for a quick understanding of the model structure

  • Enables reproducibility

  • Helps when testing against another model or dataset, or when moving the model to production
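A minimal sketch of storing both, assuming a small example model; the file names and upload field names are illustrative, not prescribed by Neptune:

```python
import torch
import torch.nn as nn

# A small example model; any nn.Module works the same way
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# Save the architecture as readable text and the weights as a state dict
with open("model_arch.txt", "w") as f:
    f.write(str(model))
torch.save(model.state_dict(), "model_weights.pt")

# The files could then be uploaded to the run, e.g.:
# run['io_files/artifacts/model_arch'].upload('model_arch.txt')
# run['io_files/artifacts/model_weights'].upload('model_weights.pt')
```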


Log Images and Predictions

Plotting images and predictions is helpful in computer vision tasks because it lets you visually inspect the model's behavior.

With Neptune + PyTorch you can log torch tensors and they will be displayed as images in the Neptune UI.

from neptune.new.types import File

# Log image with predictions (the field name below is illustrative)
img = torch.rand(30, 30, 3)
description = {"Predicted": pred, "Ground Truth": gt}
run['images/predictions'].log(File.as_image(img), description=str(description))
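File.as_image generally expects float image values in the [0, 1] range, so tensors with arbitrary values may need rescaling before logging; a small sketch (the helper name is ours, not part of the Neptune API):

```python
import torch

def to_image_range(t: torch.Tensor) -> torch.Tensor:
    # Rescale an arbitrary float tensor into [0, 1] so it renders as an image
    t = t - t.min()
    return t / t.max().clamp(min=1e-8)

# Values outside [0, 1] get shifted and scaled into range
img = to_image_range(torch.tensor([[-1.0, 0.0], [1.0, 3.0]]))
```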

To learn what ML metadata types are supported, see What you can log and display.

How to ask for help?

Please visit the Getting help page. Everything regarding support is there.

Other pages you may like

You may want to check out the following integrations from the PyTorch ecosystem: