
fastai integration guide#

Open in Colab

Custom dashboard displaying metadata logged with fastai

fastai is a deep learning library. With the Neptune-fastai integration, the following metadata is logged automatically:

  • Hyperparameters
  • Losses and metrics
  • Training code (Python scripts or Jupyter notebooks)
  • Git information
  • Dataset version
  • Model configuration, architecture, and weights

See in Neptune  Code examples 

Before you start#

Installing the integration#

To use your preinstalled version of Neptune together with the integration:

pip:

    pip install -U neptune-fastai

conda:

    conda install -c conda-forge neptune-fastai

To install both Neptune and the integration:

pip:

    pip install -U "neptune[fastai]"

conda:

    conda install -c conda-forge neptune neptune-fastai
Passing your Neptune credentials

Once you've registered and created a project, set your Neptune API token and full project name to the NEPTUNE_API_TOKEN and NEPTUNE_PROJECT environment variables, respectively.

export NEPTUNE_API_TOKEN="h0dHBzOi8aHR0cHM.4kl0jvYh3Kb8...6Lc"

To find your API token: In the bottom-left corner of the Neptune app, expand the user menu and select Get my API token.

export NEPTUNE_PROJECT="ml-team/classification"

Your full project name has the form workspace-name/project-name. You can copy it from the project settings: Click the menu in the top-right → Edit project details.

On Windows, navigate to Settings → Edit the system environment variables, or enter the following in Command Prompt: setx SOME_NEPTUNE_VARIABLE "some-value"
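In a script, you can also read these variables yourself and fail fast with a clear message when one is missing. A minimal sketch using only the standard library (the helper name `get_neptune_credentials` is illustrative, not part of Neptune):

```python
import os


def get_neptune_credentials():
    """Read Neptune credentials from the environment, raising a clear error if unset."""
    try:
        api_token = os.environ["NEPTUNE_API_TOKEN"]
        project = os.environ["NEPTUNE_PROJECT"]
    except KeyError as e:
        raise RuntimeError(f"Missing required environment variable: {e.args[0]}") from e
    return api_token, project
```

Note that `neptune.init_run()` picks up these variables automatically; an explicit check like this only makes misconfiguration errors friendlier.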


While it's not recommended, especially for the API token, you can also pass your credentials in the code when initializing Neptune.

run = neptune.init_run(
    project="ml-team/classification",  # your full project name here
    api_token="h0dHBzOi8aHR0cHM6Lkc78ghs74kl0jvYh...3Kb8",  # your API token here
)

For more help, see Set Neptune credentials.

If you'd rather follow the guide without any setup, you can run the example in Colab .

fastai logging example#

This example shows how to use NeptuneCallback to log metadata as you train your model with fastai.

For how to customize the NeptuneCallback, see the More options section.

  1. Start a run:

    import neptune
    
    run = neptune.init_run() # (1)!
    
    1. If you haven't set up your credentials, you can log anonymously:

      neptune.init_run(
          api_token=neptune.ANONYMOUS_API_TOKEN,
          project="common/fastai-integration",
      )
      
  2. Initialize the Neptune callback:

    from neptune.integrations.fastai import NeptuneCallback
    
    neptune_callback = NeptuneCallback(run=run)
    
  3. To log metadata, pass the callback to the cbs argument of the learner or the fit() method:

    learn = learner(...)
    learn.fit(..., cbs=neptune_callback)
    
    learn = cnn_learner(..., cbs=neptune_callback)
    learn.fit(...)
    
  4. To stop the connection to Neptune and sync all data, call the stop() method:

    run.stop()
    
  5. Run your script as you normally would.

    To open the run and watch your model training live, click the Neptune link that appears in the console output.

    Example link: https://app.neptune.ai/common/fastai-integration/e/FAS-1895

See example in Neptune 

More options#

Customizing the callback#

You can change or extend the default behavior of NeptuneCallback() by passing a function to the constructor.

Example
def cb(self):
    self.run["sys/name"] = "Binary Classification"
    self.run["seed"] = 1000

neptune_cb = NeptuneCallback(before_fit=cb)

Info

Specify the event to change. To learn more about events, see the fastai documentation .
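To see how such event hooks fire, here is a toy stand-in (plain Python, not fastai or Neptune) whose hooks run in the same order as the corresponding fastai events: `before_fit` once, `after_epoch` once per epoch, and `after_fit` at the end:

```python
class ToyCallback:
    """Stand-in for a fastai-style callback: methods named after events are invoked by the loop."""

    def __init__(self):
        self.events = []

    def before_fit(self):
        self.events.append("before_fit")

    def after_epoch(self):
        self.events.append("after_epoch")

    def after_fit(self):
        self.events.append("after_fit")


def toy_fit(cb, n_epoch=2):
    """Minimal training loop that fires the hooks in event order."""
    cb.before_fit()
    for _ in range(n_epoch):
        cb.after_epoch()
    cb.after_fit()


cb = ToyCallback()
toy_fit(cb)
```

A function passed to NeptuneCallback for a given event runs at that same point in the real fastai loop.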

Logging model architecture and weights#

To log your model weight files during a single training phase or across all training phases, add SaveModelCallback() to the cbs list of your learner or fit() method.

from fastai.callback.all import SaveModelCallback

Log every n epochs:

n = 4
learn = learner(
    ...,
    cbs=[
        SaveModelCallback(every_epoch=n),
        NeptuneCallback(run=run, upload_saved_models="all"),
    ],
)

Best model:

learn = learner(
    ...,
    cbs=[SaveModelCallback(), NeptuneCallback(run=run)],
)

Logging images and predictions#

In computer vision tasks, plotting images and predictions allows you to visually inspect the model's predictions.

You can log torch tensors and display them as images in the Neptune app:

from neptune.types import File

# Log image with predictions
img = torch.rand(30, 30, 3)
description = {"Predicted": pred, "Ground Truth": gt}  # your prediction and ground-truth labels

run["torch_tensor"].upload(File.as_image(img), description=description)

Logging after fitting or testing is finished#

You can use the created Neptune callback outside of the learner context, which lets you log metadata after the fitting or testing methods are finished.

Example
import torch
from fastai.callback.all import SaveModelCallback
from fastai.vision.all import (
    ImageDataLoaders,
    URLs,
    accuracy,
    resnet18,
    untar_data,
    vision_learner,
)
import neptune
from neptune.integrations.fastai import NeptuneCallback
from neptune.types import File

# Create Neptune run
run = neptune.init_run()

# Create Neptune callback
neptune_callback = NeptuneCallback(run=run)

# Download a sample dataset and create dataloaders
path = untar_data(URLs.MNIST_TINY)
dls = ImageDataLoaders.from_folder(path)

learn = vision_learner(
    dls,
    resnet18,
    metrics=accuracy,
    cbs=[SaveModelCallback(), neptune_callback],
)

# Run fit and test
learn.fit_one_cycle(1)

Log additional metadata after fit and test:

# Log images
batch = dls.one_batch()
for i, (x, y) in enumerate(dls.decode_batch(batch)): # (1)!
    run["images/one_batch"].append(
        File.as_image(x.as_subclass(torch.Tensor).permute(2, 1, 0)),
        name=f"{i}",
        description=f"Label: {y}",
    )
  1. Neptune supports torch tensors. fastai uses its own tensor type, TensorImage, so you have to convert it back to torch.Tensor before logging.

Generic recipe for logging additional metadata:

metadata = ...
run["your/metadata/structure"] = metadata
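The slash-separated path acts like a folder hierarchy inside the run. As a toy illustration of that structure (a plain dict, not the Neptune client), slash-separated keys map onto nested namespaces like this:

```python
def assign(store, path, value):
    """Assign `value` under a slash-separated `path` in a nested dict,
    mimicking how run["a/b/c"] = value builds namespaces."""
    *parents, leaf = path.split("/")
    node = store
    for p in parents:
        node = node.setdefault(p, {})
    node[leaf] = value
    return store


store = {}
assign(store, "params/lr", 0.01)
assign(store, "params/epochs", 5)
```

In the Neptune app, each namespace level shows up as a collapsible folder in the run's metadata tree.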

Pickling the learner#

If the Learner object includes the Neptune callback, pickling the entire learner won't work: pickle can't serialize the local objects that the Neptune client library keeps inside the NeptuneCallback instance.

To pickle the learner:

  1. Remove the NeptuneCallback with the remove_cb() method (to avoid errors due to pickle's inability to pickle local objects, such as nested functions or methods).
  2. Pickle the learner using the export() method.
  3. Add the callback back in with the add_cb() method (for example, to log more training loops to the same run).
Example
run = neptune.init_run()
...

base_namespace = "experiment"
pickled_learner = "learner.pkl"
neptune_cbk = NeptuneCallback(run=run, base_namespace=base_namespace)
learn = vision_learner(
    dls,
    resnet18,
    metrics=accuracy,
    cbs=[neptune_cbk],
)

learn.fit_one_cycle(1)  # training
learn.remove_cb(neptune_cbk)  # remove NeptuneCallback
learn.export(pickled_learner)  # export learner (saved under learn.path)
run[f"{base_namespace}/pickled_learner"].upload(str(learn.path / pickled_learner))  # upload learner to Neptune
learn.add_cb(neptune_cbk)  # add NeptuneCallback back again
learn.fit_one_cycle(1)  # continue training

Related