fastai

fastai is a deep learning library that provides practitioners with high-level components that can quickly and easily provide state-of-the-art results in standard deep learning domains and provides researchers with low-level components that can be mixed and matched to build new approaches.

What will you get with this integration?

With the Neptune + fastai integration, the following metadata is logged automatically for you:

  • Hyperparameters

  • Losses and metrics

  • Training code (Python scripts or Jupyter notebooks) and Git information

  • Dataset version

  • Model configuration, architecture, and weights

You can also log other metadata types like interactive charts, video, audio, and more. See What can you log and display?
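
For example, once a run is open (see the Quickstart below), extra metadata can be logged by hand. A minimal sketch; the field names and file paths are illustrative, and run is the Run object created in the Quickstart:

from neptune.new.types import File

# Illustrative extra metadata logged manually on an open run
run['seed'] = 42                                      # a simple value
run['test/confusion_matrix'].upload('conf_mat.png')   # a chart saved to disk (hypothetical path)
run['test/sample_audio'].upload('sample.wav')         # an audio file (hypothetical path)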

Installation

Before you start, install fastai and neptune-client[fastai]. Depending on your operating system, open a terminal or CMD and run this command:

pip install fastai neptune-client[fastai]

For more help see installing neptune-client.

This integration is tested with fastai==2.3.1 and neptune-client[fastai]==0.9.18.
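
To match the tested setup exactly, you can pin those versions when installing:

pip install fastai==2.3.1 neptune-client[fastai]==0.9.18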

Quickstart

Step 1: Initialize a Neptune Run

Place this code snippet at the beginning of your script or notebook cell:

import neptune.new as neptune
run = neptune.init(project='<YOUR_WORKSPACE/YOUR_PROJECT>',
                   api_token='<YOUR_API_TOKEN>',
                   source_files=['*.py'])

This opens a new Run in Neptune that allows you to log various objects.

You need to authenticate yourself and point the Run to an existing project by passing your API token and project name.

You can use api_token='ANONYMOUS' and project='common/fastai-integration' to explore without having to create a Neptune account.
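
For example, a minimal sketch of an anonymous run against the public project mentioned above:

import neptune.new as neptune

# Anonymous run against the public example project; no account needed
run = neptune.init(project='common/fastai-integration',
                   api_token='ANONYMOUS',
                   source_files=['*.py'])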

Step 2: Add NeptuneCallback to the learner or fit method

To log metadata, pass NeptuneCallback() to the cbs (callbacks) argument of the learner or the fit method.

Log a single training phase

learn = learner(...)
learn.fit(..., cbs=NeptuneCallback(run=run))

Log all training phases of the learner

learn = cnn_learner(..., cbs=NeptuneCallback(run=run))
learn.fit(...)
learn.fit(...)
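
Putting the pieces together, a minimal end-to-end sketch (assuming the NeptuneCallback import from the neptune.new.integrations.fastai module and fastai's MNIST_TINY sample dataset) could look like this:

from fastai.vision.all import *
import neptune.new as neptune
from neptune.new.integrations.fastai import NeptuneCallback  # assumed import path

# Anonymous run against the public example project
run = neptune.init(project='common/fastai-integration', api_token='ANONYMOUS')

# Small sample dataset shipped with fastai
path = untar_data(URLs.MNIST_TINY)
dls = ImageDataLoaders.from_folder(path)

# Attaching the callback to the learner logs every subsequent fit() call
learn = cnn_learner(dls, resnet18, metrics=accuracy, cbs=NeptuneCallback(run=run))
learn.fit_one_cycle(1)

run.stop()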

Step 3: Stop Run

Once you are done logging, stop tracking the run with the stop() method. This is needed only when logging from a notebook environment; when logging from a script, Neptune automatically stops tracking once the script has finished executing.

run.stop()

Step 4: Run your training script or notebook cell and monitor your training in the Neptune UI

Run your script or notebook cell as you normally would.

After running your script or notebook cell, you will get a link similar to https://app.neptune.ai/o/common/org/fastai-integration/e/FAS-61, with common/fastai-integration replaced by your project and FAS-61 replaced by your run ID.

Click on the link to open the Run in Neptune and watch your model training live.

Initially, it may be empty, but keep the tab with the Run open to see your experiment metadata update in real time.

More options

Customizing NeptuneCallback

You can change or extend the default behavior of NeptuneCallback() by passing a callback function to the constructor. For example:

def cb(self):
    self.run['sys/name'] = 'Binary Classification'
    self.run['seed'] = 1000

neptune_cb = NeptuneCallback(run=run, before_fit=cb)

Remember to specify the event you want to change. To learn more about the different events see the fastai Callback core page.
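
You can then attach the customized callback the same way as the default one, for example:

# Attach the customized callback like any other fastai callback
learn = learner(..., cbs=neptune_cb)
learn.fit(...)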

Log Model Architecture and Weights

  • Add SaveModelCallback()

    You can log your model weight files during a single training phase or across all training phases by adding SaveModelCallback() to the callbacks list of your learner or fit method.

  • Log every N epochs

n = 4
learn = learner(..., cbs=[
    SaveModelCallback(every_epoch=n),
    NeptuneCallback(run=run, upload_saved_models='all')])
  • Log the best model

learn = learner(..., cbs=[
    SaveModelCallback(), NeptuneCallback(run=run)])

Log Images and predictions

Plotting images and predictions is always helpful in computer vision tasks because it allows you to visually inspect the model's predictions.

With Neptune + fastai you can log torch tensors, and they will be displayed as images in the Neptune UI.

import torch
from neptune.new.types import File

# Log an image with predictions; `pred` and `gt` stand for the model's
# prediction and the ground-truth label
img = torch.rand(30, 30, 3)
description = {"Predicted": pred, "Ground Truth": gt}
run['torch_tensor'].upload(File.as_image(img),
                           description=description)
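
To log several such images under a single field, a minimal sketch using Neptune's series logging (the field name prediction_images is illustrative) might look like this:

import torch
from neptune.new.types import File

# Log a few stand-in prediction images as a series under one field
for step in range(3):
    img = torch.rand(30, 30, 3)  # placeholder for a real prediction image
    run['prediction_images'].log(File.as_image(img))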

How to ask for help?

Please visit the Getting help page, where you will find everything related to support.

Other pages you may like

You may want to check out other integrations from the PyTorch ecosystem: