fastai is a deep learning library that provides practitioners with high-level components that can quickly and easily provide state-of-the-art results in standard deep learning domains and provides researchers with low-level components that can be mixed and matched to build new approaches.

What will you get with this integration?

With the Neptune + fastai integration, the following metadata is logged automatically for you:
  • Hyper-parameters
  • Losses & metrics
  • Training code (Python scripts or Jupyter notebooks) and Git information
  • Dataset version
  • Model configuration, architecture, and weights
You can also log other metadata types like interactive charts, video, audio, and more. See What can you log and display?


Before you start

Install fastai and neptune-client[fastai]

Depending on your operating system, open a terminal or Command Prompt and run this command:
pip install fastai "neptune-client[fastai]"
For more help, see installing neptune-client.
This integration was tested with fastai==2.3.1 and neptune-client[fastai]==0.9.18.


Step 1: Initialize a Neptune Run

Place this code snippet at the beginning of your script or notebook cell:
import neptune.new as neptune

run = neptune.init(
    project="WORKSPACE_NAME/PROJECT_NAME",
    api_token="NEPTUNE_API_TOKEN",
    source_files=["*.py"],
)
This creates a new run in Neptune that allows you to log various objects.
To explore without creating a Neptune account, you can use api_token='ANONYMOUS' and project='common/fastai-integration'.

Step 2: Add NeptuneCallback to the learner or fit method

To log metadata, pass NeptuneCallback() to the cbs argument of a learner or fit method.

Log a single training phase

learn = learner(...)
learn.fit(..., cbs=NeptuneCallback(run=run))

Log all training phases of the learner

learn = cnn_learner(..., cbs=NeptuneCallback(run=run))

Step 3: Stop the run

Once you are done logging, stop the run using the stop() method:
run.stop()
This is needed only when logging from a notebook environment. When logging from a script, Neptune automatically stops tracking once the script has finished executing.

Step 4: Run your training script or notebook cell and monitor your training in Neptune

Run your script or notebook cell as you normally would.
After running your script or notebook cell, you will get a link to the run in the Neptune app, with common/fastai-integration replaced by your project name and FAS-61 replaced by your run ID.
Click on the link to open the run in Neptune and watch your model training live.
Initially, it may be empty, but keep the tab with the run open to see your experiment metadata update in real time.
fastai dashboard in Neptune

More options

Customizing NeptuneCallback

You can change or extend the default behavior of NeptuneCallback() by passing a function to its constructor for the event you want to customize. For example:
def cb(self):
    # 'run' is the Neptune run created in step 1
    run['sys/name'] = 'Binary Classification'
    run['seed'] = 1000
neptune_cb = NeptuneCallback(run=run, before_fit=cb)
Remember to specify the event you want to change. To learn more about the different events, see the fastai Callback core page.

Log model architecture and weights

  • Add SaveModelCallback()
To log your model weight files during a single training phase or all training phases, add fastai's SaveModelCallback() to the cbs list of your learner or fit method.
  • Log every n epochs:
n = 4
learn = learner(..., cbs=[
    SaveModelCallback(every_epoch=n),
    NeptuneCallback(run=run, upload_saved_models='all')])
  • Best model:
learn = learner(..., cbs=[
    SaveModelCallback(), NeptuneCallback(run=run)])

Log images and predictions

In computer vision tasks, plotting images and predictions allows you to visually inspect the model's predictions.
With Neptune, you can log torch tensors, and they will be displayed as images in the Neptune app.
import torch
from neptune.new.types import File

# Log an image together with its prediction
img = torch.rand(30, 30, 3)
description = {"Predicted": pred, "Ground Truth": gt}  # pred and gt come from your model
run["images/predictions"].log(File.as_image(img), description=str(description))

How to ask for help

Visit the Getting help page.

Related resources

You may want to check out other integrations from the PyTorch ecosystem.