Working with fastai#

Custom dashboard displaying metadata logged with fastai

fastai is a deep learning library built on PyTorch that provides high-level APIs for training neural networks.

With the Neptune–fastai integration, the following metadata is logged automatically:

  • Hyperparameters
  • Losses and metrics
  • Training code (Python scripts or Jupyter notebooks)
  • Git information
  • Dataset version
  • Model configuration, architecture, and weights


Before you start#

Tip

If you'd rather follow the guide without any setup, you can run the example in Colab.

Installing the Neptune–fastai integration#

On the command line or in a terminal app, such as Command Prompt, enter the following:

pip install neptune-client[fastai]
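
If your shell treats square brackets as special characters (zsh does), quote the package name:

pip install "neptune-client[fastai]"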

fastai logging example#

This example shows how to use NeptuneCallback to log metadata as you train your model with fastai.

For how to customize the NeptuneCallback, see the More options section.

  1. Start a run:

    import neptune.new as neptune
    
    run = neptune.init_run()  # (1)
    
    1. If you haven't set up your credentials, you can log anonymously: neptune.init_run(api_token=neptune.ANONYMOUS_API_TOKEN, project="common/fastai-integration"). To use your own credentials instead, see the sketch after these steps.
  2. Initialize the Neptune callback:

    from neptune.new.integrations.fastai import NeptuneCallback
    
    neptune_callback = NeptuneCallback(run=run)
    
  3. To log metadata, pass the callback to the cbs argument of the learner constructor or the fit() method:

    # Pass the callback to a single fit() call to log one training phase:
    learn = learner(...)
    learn.fit(..., cbs=neptune_callback)

    # Or attach it to the learner to log metadata for every fit() call:
    learn = cnn_learner(..., cbs=neptune_callback)
    learn.fit(...)
    
  4. Run your script as you normally would.

    To open the run and watch your model training live, click the Neptune link that appears in the console output.

    Example link: https://app.neptune.ai/common/fastai-integration/e/FAS-61
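
If you're using your own credentials rather than the anonymous token from step 1, you can pass them directly to init_run(). The placeholder values below need to be replaced with your own project name and API token:

run = neptune.init_run(
    project="<workspace-name>/<project-name>",  # your Neptune project
    api_token="<your-api-token>",  # your Neptune API token
)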

Stop the run when done

Once you are done logging, you should stop the Neptune run. You need to do this manually when logging from a Jupyter notebook or other interactive environment:

run.stop()

If you're running a script, the connection is stopped automatically when the script finishes executing. In notebooks, however, the connection to Neptune is not stopped when the cell has finished executing, but rather when the entire notebook stops.
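
A simple way to guarantee that the run is closed in a notebook is to wrap training in try/finally. This is just a sketch: learner() and the fit arguments stand in for your own setup from the steps above:

try:
    learn = learner(...)
    learn.fit(..., cbs=neptune_callback)
finally:
    run.stop()  # close the run even if training fails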


More options#

Customizing the callback#

You can change or extend the default behavior of NeptuneCallback() by passing a function to its constructor for the fastai event you want to customize.

Example
def cb(self):
    # "self" is the callback instance, so the Neptune run is available as self.run
    self.run["sys/name"] = "Binary Classification"
    self.run["seed"] = 1000

neptune_cb = NeptuneCallback(run=run, before_fit=cb)

Info

Specify the event you want to modify. To learn more about events, see the fastai documentation.
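
For example, to log a value at the end of every epoch rather than before training starts, attach the function to a different event. This is a sketch: after_epoch is a standard fastai event, and self.epoch is the epoch counter that fastai exposes to callbacks:

def log_epoch(self):
    # Called at the end of each epoch; "self" is the NeptuneCallback
    self.run["train/epochs_completed"] = self.epoch + 1

neptune_cb = NeptuneCallback(run=run, after_epoch=log_epoch)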

Logging model architecture and weights#

To log your model weight files during a single training phase or across all training phases, add SaveModelCallback() to the cbs list of your learner or fit() method.

from fastai.callback.all import SaveModelCallback

Log every n epochs:

n = 4
learn = learner(
    ...,
    cbs=[
        SaveModelCallback(every_epoch=n),
        NeptuneCallback(run=run, upload_saved_models="all"),
    ],
)
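
With every_epoch set, SaveModelCallback writes a checkpoint at that interval under learn.path/learn.model_dir on disk, and upload_saved_models="all" makes the Neptune callback upload all of the saved checkpoints rather than just one.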

Log the best model:

learn = learner(
    ...,
    cbs=[SaveModelCallback(), NeptuneCallback(run=run)],
)
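
Without every_epoch, SaveModelCallback monitors a metric (validation loss by default) and saves only the best-performing weights, so a single best model is uploaded.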

Logging images and predictions#

In computer vision tasks, plotting images and predictions allows you to visually inspect the model's predictions.

You can log torch tensors and display them as images in the Neptune app:

import torch
from neptune.new.types import File

# A (30, 30, 3) tensor is interpreted as an H x W x C image
img = torch.rand(30, 30, 3)

run["torch_tensor"].upload(File.as_image(img))

Manually logging metadata#

If you have other types of metadata that are not covered in this guide, you can still log them using the Neptune client library (neptune-client).

When you initialize the run, you get a run object, to which you can assign different types of metadata in a structure of your own choosing.

import neptune.new as neptune

# Create a new Neptune run
run = neptune.init_run()

# Log metrics or other values inside loops
for epoch in range(n_epochs):
    ...  # Your training loop

    run["train/epoch/loss"].log(loss)  # Each log() appends a value
    run["train/epoch/accuracy"].log(acc)

# Upload files
run["test/preds"].upload("path/to/test_preds.csv")

# Track and version artifacts
run["train/images"].track_files("./datasets/images")

# Record numbers or text
run["tokenizer"] = "regexp_tokenize"