Integrating Neptune into your codebase¶
Adding Neptune to your workflow is a quick and simple process. We describe the major logging features in the step-by-step guide below.
To make things even easier, we have created integrations with most major ML frameworks and open-source experiment tracking tools.
Jump to the relevant section:
Using integrations with ML frameworks¶
Neptune works with any machine learning framework, but it also has integrations with many popular frameworks that will get you started faster.
Popular integrations include:
Check out the full list of integrations.
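To give you a feel for it, here is a minimal sketch of what using one of these integrations typically looks like, assuming the Keras callback from neptune-contrib (the NeptuneMonitor import path below, as well as the model, x_train and y_train variables, are assumptions; check the integration page for the exact usage):
import neptune
from neptunecontrib.monitoring.keras import NeptuneMonitor  # assumed import path

neptune.init(project_qualified_name='shared/onboarding', api_token='ANONYMOUS')
neptune.create_experiment(name='keras-integration-sketch')

# The callback sends epoch and batch metrics to the experiment while training runs.
# 'model', 'x_train' and 'y_train' are assumed to come from your own Keras code.
model.fit(x_train, y_train, epochs=5, callbacks=[NeptuneMonitor()])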
Migrating from other experiment tracking tools¶
Neptune has utilities that let you use other open-source experiment tracking tools together with Neptune. They also make migrating from those tools quick and easy.
Neptune integrates with the following experiment tracking frameworks:
Not using Python¶
If you are not using Python, no worries: you can still log experiments to Neptune.
Read our guides on:
How to use Neptune client for R
How to log experiments for any other language
How to connect Neptune to your codebase step by step¶
Adding Neptune is a simple process that only takes a few steps. We’ll go through those one by one.
Before you start¶
Make sure you meet the following prerequisites before starting:
Have Python 3.x installed
Step 1: Connect Neptune client to your script¶
import neptune
neptune.init(project_qualified_name='shared/onboarding',
             api_token='ANONYMOUS',
             )
You need to tell Neptune who you are and where you want to log things.
To do that you should specify:
project_qualified_name=USERNAME/PROJECT_NAME: your Neptune username and project name
api_token=YOUR_API_TOKEN: your Neptune API token
Note
If you followed the suggested prerequisites, you can skip api_token and change project_qualified_name to your USERNAME and PROJECT_NAME:
neptune.init(project_qualified_name='USERNAME/PROJECT_NAME')
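If you are logging to your own project, you will usually want to keep the token out of your source code. One way to do that (just a sketch; NEPTUNE_API_TOKEN is the environment variable name used in the Neptune docs) is to read it from the environment:
import os
import neptune

# Read the API token from an environment variable instead of hardcoding it.
neptune.init(project_qualified_name='USERNAME/PROJECT_NAME',
             api_token=os.getenv('NEPTUNE_API_TOKEN'))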
Step 2: Create an experiment and log parameters¶
PARAMS = {'lr': 0.1, 'epoch_nr': 10, 'batch_size': 32}
neptune.create_experiment(name='great-idea', params=PARAMS)
This opens a new “experiment” namespace in Neptune to which you can log various objects.
It also logs your PARAMS dictionary with all the parameters that you want to keep track of.
Note
Right now parameters can only be passed at experiment creation.
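Because parameters can only be passed at experiment creation, a common pattern is to collect them up front, for example from the command line, and hand them over as a dict. A minimal sketch (the argument names below are just an example):
import argparse
import neptune

parser = argparse.ArgumentParser()
parser.add_argument('--lr', type=float, default=0.1)
parser.add_argument('--epoch_nr', type=int, default=10)
parser.add_argument('--batch_size', type=int, default=32)
args = parser.parse_args()

# Assumes neptune.init(...) from Step 1 has already been called.
neptune.create_experiment(name='great-idea', params=vars(args))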
Step 3: Add logging of training metrics¶
neptune.log_metric('loss', 0.26)
The first argument is the name of the log. You can have one or multiple log names (like ‘acc’, ‘f1_score’, ‘log-loss’, ‘test-acc’). The second argument is the value of the log.
Typically, during training there will be some kind of loop in which those losses are logged. You can simply call neptune.log_metric multiple times on the same log name to log it at each step.
for i in range(epochs):
    ...
    neptune.log_metric('loss', loss)
    neptune.log_metric('metric', accuracy)
Note
You can log a value at a specific step by using the x and y arguments.
neptune.log_metric('loss', x=12, y=0.32)
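For example, if you log per-batch values, you can keep a single global step counter and pass it as x (epochs, data_loader and train_step below are placeholders for your own training code):
global_step = 0
for epoch in range(epochs):
    for batch in data_loader:        # placeholder for your data iterator
        loss = train_step(batch)     # placeholder for your training step
        neptune.log_metric('batch-loss', x=global_step, y=loss)
        global_step += 1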
Tip
You may want to read our articles on:
Step 4: Add logging of test metrics¶
neptune.log_metric('test-accuracy', 0.82)
You can log metrics in the same way after the training loop is done.
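For example, after evaluating the model you might log several test metrics at once. A sketch using scikit-learn metrics (y_test and y_pred are assumed to come from your own evaluation code):
from sklearn.metrics import accuracy_score, f1_score

# y_test and y_pred are assumed to exist; compute and log test metrics.
neptune.log_metric('test-accuracy', accuracy_score(y_test, y_pred))
neptune.log_metric('test-f1', f1_score(y_test, y_pred, average='macro'))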
Note
You can also update experiments after the script is done running.
Read about updating existing experiments.
Step 5: Add logging of performance charts¶
neptune.log_image('predictions', 'pred_img.png')
neptune.log_image('performance charts', fig)
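The first argument is again the log name; the second can be a path to an image file (first line above) or a figure object (second line). A minimal sketch that builds a matplotlib figure and logs it (the numbers are made up for illustration):
import matplotlib.pyplot as plt

# Toy loss curve purely for illustration.
fig, ax = plt.subplots()
ax.plot([0.9, 0.5, 0.3, 0.26], label='loss')
ax.legend()

neptune.log_image('performance charts', fig)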
Tip
There are many other objects that you can log to Neptune. You may want to read our articles on:
Step 6: Add logging of model binary¶
neptune.log_artifact('model.pkl')
You save your model to a file and log that file to Neptune.
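For example, with a picklable model object you might do something like this (pickle is just one option; use whatever serialization your framework provides):
import pickle

# Save the trained model to disk, then log the file as an artifact.
# 'model' is assumed to be your trained, picklable model object.
with open('model.pkl', 'wb') as f:
    pickle.dump(model, f)

neptune.log_artifact('model.pkl')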
Tip
There is a helper function in neptune-contrib called log_pickle for logging picklable Python objects without saving them to disk.
It works like this:
from neptunecontrib.api import log_pickle
# Pickles the object in memory and logs it to Neptune under the given artifact name.
log_pickle('model.pkl', model)