Log model building metadata


Adding Neptune is a simple process that only takes a few steps. We’ll go through those, one by one.

Before you start

Make sure you meet the following prerequisites before starting:

Step 1: Connect Neptune to your script

At the top of your script add

import neptune.new as neptune
run = neptune.init(project='common/quickstarts',
                   api_token='YOUR_API_TOKEN')

This creates a new run in Neptune to which you can log metadata.

You need to tell Neptune who you are and where you want to log things. To do that you specify:

  • project='my_workspace/my_project': your workspace name and project name,

  • api_token='YOUR_API_TOKEN': your Neptune API token.

If you configured your Neptune API token correctly, as described in this docs page, you can skip the api_token argument.

Runs can be viewed as dictionary-like structures - namespaces - that you define in your code. You can give your metadata a hierarchical structure that is reflected in the UI as well, so you can easily organize it in whatever way is most convenient for you.
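To make the namespace idea concrete, here is a plain-Python sketch (an illustration only, not Neptune's implementation) of how hierarchical names like 'train/loss' behave like paths into nested dictionaries:

```python
# Plain-Python sketch of hierarchical namespaces -- illustration only,
# not Neptune's actual implementation.
def assign(run, path, value):
    """Store `value` under an 'a/b/c'-style path in nested dicts."""
    *parents, leaf = path.split('/')
    node = run
    for key in parents:
        node = node.setdefault(key, {})
    node[leaf] = value

run = {}
assign(run, 'parameters/lr', 0.1)
assign(run, 'train/loss', 0.25)
assign(run, 'test/acc', 0.76)
# run == {'parameters': {'lr': 0.1}, 'train': {'loss': 0.25}, 'test': {'acc': 0.76}}
```

In the Neptune UI, each top-level key ('parameters', 'train', 'test') shows up as its own section of the run.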

Step 2. Log parameters

PARAMS = {'lr': 0.1, 'epoch_nr': 10, 'batch_size': 32}
run['parameters'] = PARAMS

This logs your PARAMS dictionary with all the parameters that you want to keep track of.


Step 3. Add logging for metrics and losses

To log a metric or loss during training you should use:

loss = ...
run["train/loss"].log(loss)

A few explanations here:

  • "train/loss" is the name of the log, with a hierarchical structure.

  • "train/loss" is a series of values - you can log multiple values to this log.

  • You can have one or multiple log names, like 'train/acc', 'val/f1_score', 'train/log-loss', or 'test/acc'.

  • The argument of the log() method is the actual value you want to log.
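To make the series idea concrete, here is a plain-Python mental model (an illustration only, not Neptune's internals): each call to log() appends one more value under the same name, so one name accumulates a whole series:

```python
from collections import defaultdict

# Toy mental model of series logs -- illustration only,
# not Neptune's actual implementation.
series = defaultdict(list)

def log(name, value):
    """Append one more value to the series stored under `name`."""
    series[name].append(value)

for loss in [0.9, 0.6, 0.4]:
    log('train/loss', loss)   # same name, three values -> a series

log('val/f1_score', 0.81)

# series['train/loss'] == [0.9, 0.6, 0.4]
```

This is why calling run["train/loss"].log(loss) repeatedly during training produces a chart of the loss over time rather than overwriting a single value.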

Typically during training there will be some sort of loop where those losses are logged. You can simply call run["train/loss"].log(loss) at each step.

for i in range(epochs):
    loss = ...
    run["train/loss"].log(loss)

Step 4. Add logging of test score

run['test/acc'] = 0.76

This way you log a single value, not a series of values. Again, you can make use of the hierarchical structure of the run to organize scores, such as 'test/acc'.

You can also update runs after the script is done running:

Read about updating existing runs.

Step 5: Add logging of performance charts

from neptune.new.types import File

# confusion_matrix is a matplotlib figure object
confusion_matrix = ...
run['confusion_matrix'].upload(File.as_image(confusion_matrix))

# log image predictions from a file on disk
run['predictions'].log(File(path_to_image))

Step 6. Add logging of model files


Log your model with the upload() method. Just pass the path to the file you want to log to Neptune.
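For example, you might first serialize your trained model to a file and then pass that path to upload(). The serialization part below is runnable as-is; the file name 'model.pkl' and the 'model' namespace are just example choices, and the final line assumes a live Neptune run:

```python
import pickle

model = {"weights": [0.1, 0.2, 0.3]}  # stand-in for any trained model object

# Serialize the model to a file on disk...
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# ...then, with a live Neptune run, upload that file:
# run["model"].upload("model.pkl")
```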

There are many other objects that you can log to Neptune: