Working with fastai#
fastai is a deep learning library built on PyTorch.
With the Neptune–fastai integration, the following metadata is logged automatically:
- Hyperparameters
- Losses and metrics
- Training code (Python scripts or Jupyter notebooks)
- Git information
- Dataset version
- Model configuration, architecture, and weights
Before you start#
Tip
If you'd rather follow the guide without any setup, you can run the example in Colab.
- Set up Neptune: install the client library and configure your credentials (API token and project name).
Installing the Neptune–fastai integration#
On the command line or in a terminal app, such as Command Prompt, enter the following:
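pip install -U neptune-fastai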
fastai logging example#
This example shows how to use NeptuneCallback to log metadata as you train your model with fastai. For how to customize the callback, see the More options section.
- Start a run:

  import neptune

  run = neptune.init_run()

  If you haven't set up your credentials, you can log anonymously:

  run = neptune.init_run(
      api_token=neptune.ANONYMOUS_API_TOKEN,
      project="common/fastai-integration",
  )
- Initialize the Neptune callback:

  from neptune.integrations.fastai import NeptuneCallback

  neptune_callback = NeptuneCallback(run=run)
- To log metadata, pass the callback to the cbs argument of the learner() or fit() method:

  learn = learner(..., cbs=[neptune_callback])

  or

  learn.fit(..., cbs=[neptune_callback])

-
Run your script as you normally would.
To open the run and watch your model training live, click the Neptune link that appears in the console output.
Example link: https://app.neptune.ai/common/fastai-integration/e/FAS-61
Stop the run when done
Once you are done logging, you should stop the Neptune run. You need to do this manually when logging from a Jupyter notebook or other interactive environment:
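run.stop()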
If you're running a script, the connection is stopped automatically when the script finishes executing. In notebooks, however, the connection to Neptune is stopped only when the entire notebook stops, not when a cell finishes executing.
More options#
Customizing the callback#
You can change or extend the default behavior of NeptuneCallback() by passing a function to its constructor under the name of the event that should trigger it:

def cb(self):
    self.run["sys/name"] = "Binary Classification"
    self.run["seed"] = 1000

neptune_cb = NeptuneCallback(run=run, before_fit=cb)
Info
Specify the event you want to change. To learn more about events, see the fastai documentation.
Logging model architecture and weights#
To log your model weight files during a single training phase or across all training phases, add SaveModelCallback() to the callbacks list of your learner() or fit() method.
Log every n epochs:

n = 4
learn = learner(
    ...,
    cbs=[
        SaveModelCallback(every_epoch=n),
        NeptuneCallback(run=run, upload_saved_models="all"),
    ],
)
Best model:
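By default, SaveModelCallback() monitors the validation loss and keeps only the best model, so a minimal setup (sketched below, assuming those defaults) logs just the best weights:

learn = learner(
    ...,
    cbs=[
        SaveModelCallback(),
        NeptuneCallback(run=run),
    ],
)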
Logging images and predictions#
In computer vision tasks, plotting images and predictions allows you to visually inspect the model's predictions.
You can log torch tensors and display them as images in the Neptune app:
from neptune.types import File

# Log an image together with prediction metadata
img = torch.rand(30, 30, 3)
pred, gt = "cat", "dog"  # placeholders for your model's prediction and the ground truth
run["torch_tensor"].append(
    File.as_image(img),
    description=f"Predicted: {pred}, Ground truth: {gt}",
)
Logging after fitting or testing is finished#
You can use the created Neptune callback outside of the learner context, which lets you log metadata after the fitting or testing methods are finished.
import torch
from fastai.callback.all import SaveModelCallback
from fastai.vision.all import (
    ImageDataLoaders,
    URLs,
    accuracy,
    resnet18,
    untar_data,
    vision_learner,
)

import neptune
from neptune.integrations.fastai import NeptuneCallback
from neptune.types import File

# Create Neptune run
run = neptune.init_run()

# Create Neptune callback
neptune_callback = NeptuneCallback(run=run)

# Prepare the data loaders (this example assumes the MNIST_TINY sample dataset)
path = untar_data(URLs.MNIST_TINY)
dls = ImageDataLoaders.from_folder(path)

learn = vision_learner(
    dls,
    resnet18,
    metrics=accuracy,
    cbs=[SaveModelCallback(), neptune_callback],
)

# Run fit and test
learn.fit_one_cycle(1)
learn.validate()
Log additional metadata after fit and test:
# Log images
batch = dls.one_batch()
for i, (x, y) in enumerate(dls.decode_batch(batch)):
    # Neptune supports torch tensors; fastai uses its own TensorImage type,
    # so convert back to torch.Tensor before logging
    run["images/one_batch"].append(
        File.as_image(x.as_subclass(torch.Tensor).permute(2, 1, 0) / 255.0),
        name=f"{i}",
        description=f"Label: {y}",
    )
Generic recipe for logging additional metadata:
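A generic sketch of Neptune's logging patterns; the field paths and values below are placeholders:

# Assign a single value to a field
run["testing/accuracy"] = 0.97

# Append to a series of values
run["testing/batch_loss"].append(0.25)

# Upload a file
run["model/signature"].upload("model_signature.json")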
Pickling the learner#
If the Learner object includes the Neptune callback, pickling the entire learner won't work, as pickle can't access the local attributes of the Neptune client library through the NeptuneCallback instance.
To pickle the learner:
- Remove the NeptuneCallback with the remove_cb() method, to avoid errors due to pickle's inability to pickle local objects, such as nested functions or methods.
- Pickle the learner using the export() method.
- Add the callback back in with the add_cb() method (for example, to log more training loops to the same run).
run = neptune.init_run()
...
base_namespace = "experiment"
pickled_learner = "learner.pkl"
neptune_cbk = NeptuneCallback(run=run, base_namespace=base_namespace)
learn = vision_learner(
    dls,
    resnet18,
    metrics=accuracy,
    cbs=[neptune_cbk],
)
learn.fit_one_cycle(1)  # training
learn.remove_cb(neptune_cbk)  # remove NeptuneCallback
learn.export(pickled_learner)  # export learner to learn.path/pickled_learner
run[f"{base_namespace}/pickled_learner"].upload(str(learn.path / pickled_learner))  # upload learner to Neptune
learn.add_cb(neptune_cbk)  # add NeptuneCallback back again
learn.fit_one_cycle(1)  # continue training
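To use the exported learner later, you can load it back with fastai's load_learner(); a minimal sketch, assuming the export path from the snippet above:

from fastai.learner import load_learner

# Load the learner exported above (export() saved it to learn.path / pickled_learner)
learn = load_learner(learn.path / pickled_learner)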
Related
API reference ≫ fastai