Migrating to neptune.new
This migration guide is meant to give you a quick start on migrating your existing code to the new Python API and help you leverage the new way of tracking and logging metadata.
At some point in the future, we plan to make the new Python API the default one with the release of version 1.0 of the client library. However, we will keep supporting the current Python API for a long time, so you can make the switch at a convenient moment. It's worth the switch though, it's quite awesome!

Hierarchical structure

Runs can be viewed as nested dictionary-like structures that you define in your code. Thanks to this, you can easily organize your metadata in whatever way is most convenient for you.
The hierarchical structure that you apply to your metadata will be reflected later in the UI.
A run's structure consists of fields that are organized into namespaces. A field's path is a combination of its namespaces and its name: if you store the value 0.8 in a Float field in the params namespace under the name momentum, its path will be params/momentum. You can organize any type of metadata this way: images, parameters, metrics, scores, model checkpoints, CSV files, etc. Let's look at the following code:
```python
import neptune.new as neptune
run = neptune.init(project='my_workspace/my_project')

run['about/JIRA'] = 'NPT-952'

run['parameters/batch_size'] = 5
run['parameters/algorithm'] = 'ConvNet'

for epoch in range(100):
    acc_value = ...
    loss_value = ...
    run['train/accuracy'].log(acc_value)
    run['train/loss'].log(loss_value)

run['trained_model'].upload('model.pt')
```
The resulting structure of the run will be the following:
```
'about':
    'JIRA': String
'parameters':
    'batch_size': Float
    'algorithm': String
'train':
    'accuracy': FloatSeries
    'loss': FloatSeries
'trained_model': File
```
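To see how field paths produce this nested view, here is a small standalone sketch (not part of the Neptune client) that rebuilds the tree from flat paths:

```python
def to_tree(fields):
    """Rebuild the nested view shown in the UI from flat field paths."""
    tree = {}
    for path, field_type in fields.items():
        node = tree
        *namespaces, name = path.split('/')
        for ns in namespaces:
            node = node.setdefault(ns, {})
        node[name] = field_type
    return tree

fields = {
    'about/JIRA': 'String',
    'parameters/batch_size': 'Float',
    'parameters/algorithm': 'String',
    'train/accuracy': 'FloatSeries',
}
print(to_tree(fields))
# {'about': {'JIRA': 'String'}, 'parameters': {'batch_size': 'Float', 'algorithm': 'String'}, 'train': {'accuracy': 'FloatSeries'}}
```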

Batch assign

You can assign multiple values to multiple fields in a batch by using a dictionary. You can use this method to quickly log all run parameters:
```python
import neptune.new as neptune
run = neptune.init()

# Assign multiple fields from a dictionary
params = {'max_epochs': 10, 'optimizer': 'Adam'}
run['parameters'] = params

# Dictionaries can be nested
params = {'train': {'max_epochs': 10}}
run['parameters'] = params
# This will save value 10 under path "parameters/train/max_epochs"
```
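Conceptually, assigning a nested dictionary is equivalent to assigning each leaf value under its flattened path. A standalone sketch of the flattening rule (not part of the Neptune client):

```python
def flatten(params, prefix=''):
    """Mimic how a nested dict maps to flat field paths."""
    fields = {}
    for key, value in params.items():
        path = f'{prefix}/{key}' if prefix else key
        if isinstance(value, dict):
            fields.update(flatten(value, path))
        else:
            fields[path] = value
    return fields

print(flatten({'train': {'max_epochs': 10}}, prefix='parameters'))
# {'parameters/train/max_epochs': 10}
```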

Initialization

Initialization got a bit simpler. You can replace your current code, which probably looks like this:
```python
# Legacy API
import neptune

neptune.init(project_qualified_name='my_workspace/my_project')

neptune.create_experiment(tags=['resnet'])
```
With the following:
```python
# neptune.new API
import neptune.new as neptune

run = neptune.init(project='my_workspace/my_project', tags=['resnet'])
```
The names of the environment variables didn't change. Instead of specifying the project name or API token in the code, you can always provide them by setting the NEPTUNE_PROJECT and NEPTUNE_API_TOKEN variables.
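For example, you can set the variables from Python before initializing (the values below are placeholders, not real credentials):

```python
import os

# Placeholders - substitute your own project name and API token
os.environ['NEPTUNE_PROJECT'] = 'my_workspace/my_project'
os.environ['NEPTUNE_API_TOKEN'] = 'YOUR_API_TOKEN'

# neptune.init() can now be called without the project/api_token arguments
print(os.environ['NEPTUNE_PROJECT'])
# my_workspace/my_project
```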

Parameters

With the legacy API, you had to pass parameters when creating an experiment and it was not possible to change them afterward. In addition, nested dictionaries were not fully supported.
```python
# Legacy API
import neptune

PARAMS = {'epoch_nr': 100,
          'lr': 0.005,
          'use_nesterov': True}

neptune.init(project_qualified_name='my_workspace/my_project')
neptune.create_experiment(params=PARAMS)
```
With the neptune.new API it's up to you when and where you want to specify parameters. Now you can also update them later:
```python
# neptune.new API
import neptune.new as neptune

PARAMS = {'epoch_nr': 100,
          'lr': 0.005,
          'use_nesterov': True}

run = neptune.init(project='my_workspace/my_project')

run['my_params'] = PARAMS

# You can also specify parameters one by one
run['my_params/batch_size'] = 64

# Update lr value
run['my_params/lr'] = 0.007
```
The artificial distinction between parameters and properties is also gone, and you can log and access them in one unified way.

Interacting with files (Artifacts)

You are no longer bound to store files only in the artifacts folder. Whether it's a model checkpoint, a custom interactive visualization, or an audio file, you can track it in the same hierarchical structure as the rest of the metadata:
| Legacy API | neptune.new API |
| --- | --- |
| neptune.log_artifact('model_viz.png') | run['model/viz'].upload('model_viz.png') |
| neptune.log_artifact('model.pt') | run['trained_model'].upload('model.pt') |
| neptune.download_artifact('model.pt') | run['trained_model'].download() |
Note that in the legacy API, artifacts worked very much like a file system: whatever you uploaded was saved under the same name, extension included.
The mental model behind the new Python API is more database-like. There is a field (with a path) and under it we store some content: a Float, a String, a series of Strings, or a File. In this model, the extension is part of the content. For example, if you upload a .pt file under the path 'model/last', the file will be visible as 'last.pt' in the UI.
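The displayed name can be sketched with a small standalone helper (not part of the Neptune client): the last segment of the field path plus the uploaded file's extension:

```python
import os

def display_name(field_path, local_file):
    """Sketch of the database-like model: the field path names the entry,
    while the uploaded file's extension travels with the content."""
    ext = os.path.splitext(local_file)[1]
    return field_path.split('/')[-1] + ext

print(display_name('model/last', 'checkpoint.pt'))
# last.pt
```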
When it's unambiguous, we implicitly convert an object to a File, so there is no need for explicit conversion. For example, for Matplotlib charts you can write run['conf_matrix'].upload(plt_fig) instead of run['conf_matrix'].upload(File.as_image(plt_fig)).

neptune-contrib

We've integrated the file-related functionalities of neptune-contrib into the core library (in fact, there is no more neptune-contrib - see Integrations). The conversion methods are available as File factory methods:

Interactive charts (Altair, Bokeh, Plotly, Matplotlib)

Legacy API:
```python
from neptunecontrib.api import log_chart

log_chart('int_chart', chart)
```
neptune.new API:
```python
from neptune.new.types import File

run['int_chart'].upload(File.as_html(chart))
```

Pandas DataFrame

Legacy API:
```python
from neptunecontrib.api import log_table

log_table('pred_df', df)
```
neptune.new API:
```python
from neptune.new.types import File

run['pred_df'].upload(File.as_html(df))
```

Audio & Video

We've expanded the range of files that are natively supported in the Neptune UI, so for audio and video files you no longer need to use conversion methods:
Legacy API:
```python
from neptunecontrib.api import log_audio, log_video

log_audio('sample.mp3')
log_video('sample.mp4')
```
neptune.new API:
```python
run['sample'].upload('sample.mp3')
run['sample'].upload('sample.mp4')
```

Pickled objects

Legacy API:
```python
from neptunecontrib.api import log_pickle

log_pickle('model.pkl', model)
```
neptune.new API:
```python
from neptune.new.types import File

run['model'].upload(File.as_pickle(model))
```

HTML strings

Legacy API:
```python
from neptunecontrib.api import log_html

log_html('custom_viz', html_string)
```
neptune.new API:
```python
from neptune.new.types import File

run['custom_viz'].upload(File.from_content(html_string, extension='html'))
```

Scores and metrics

Logging metrics is quite similar, except that you can now organize them in a hierarchical structure:
| Legacy API | New API |
| --- | --- |
| neptune.log_metric('acc', 0.97) | run['acc'].log(0.97) |
| neptune.log_metric('train_acc', 0.97) | run['train/acc'].log(0.97) |
| neptune.log_metric('loss', 0.8) | run['loss'].log(0.8) |
To log scores you don't need to use Series fields anymore as you can track single values anytime, anywhere:
| Legacy API | New API |
| --- | --- |
| neptune.log_metric('final_accuracy', 0.8) | run['final_accuracy'] = 0.8 |
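The distinction can be illustrated with a toy stand-in for a run (a plain dict, not the Neptune client): .log() appends to a series field, while assignment creates a single-value field:

```python
run = {}

def log(run, path, value):
    """Mimic run[path].log(value): append to a FloatSeries-like field."""
    run.setdefault(path, []).append(value)

# A metric logged over time becomes a series
for acc in [0.91, 0.94, 0.97]:
    log(run, 'train/acc', acc)

# Mimic run['final_accuracy'] = 0.8: a single Float-like field
run['final_accuracy'] = 0.8

print(run)
# {'train/acc': [0.91, 0.94, 0.97], 'final_accuracy': 0.8}
```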

Text and image series

Similar changes need to be applied for text and image series:

| Legacy API | New API |
| --- | --- |
| neptune.log_text('train_log', custom_log_msg) | run['train/log'].log(custom_log_msg) |
| neptune.log_image('misclassified', filepath) | run['misclassified'].log(File(filepath)) |
| neptune.log_image('pred_dist', hist_chart) | run['pred_dist'].log(hist_chart) |
To add a single image that you want to view alongside the rest of the metrics, you no longer need to use a Series field. Since you control whether fields are grouped in the same namespace, you can upload it as a single File field.
| Legacy API | New API |
| --- | --- |
| neptune.log_image('train_hist', hist_chart) | run['train/hist'].upload(hist_chart) |

Integrations

We've re-designed how our integrations work so that:
  • They're more tightly integrated with our base library and the API is more unified.
  • You can update the version of each integration separately in case of dependency conflicts.
  • There are fewer dependencies if you are not using all integrations.
There is no longer a neptune-contrib library for the new Python API - each integration now has two parts:
  • Boilerplate code for ease of use in the main library: import neptune.new.integrations.framework_name
  • The actual integration, which can be installed separately (pip install neptune-framework-name) or as an extra together with neptune-client (pip install "neptune-client[framework-name]")
Existing integrations from neptune-contrib are still fully functional. You can use them with projects that follow either the previous structure or the new one. However, the neptune-contrib integrations use the legacy Python API, while the new integrations have been re-written to take full advantage of the new Python API and achieve better metadata organization.
File-related neptune-contrib functionalities are now part of the core library. Read more here.
You can read in detail about each integration in the Integrations section.
We are still re-writing some of the integrations using the new Python API and they should be available in the next few weeks. In the meantime, you can use the previous version of the integration built using the legacy Python API.
Let's look at how this works in the case of the TensorFlow/Keras integration.

Installation

Legacy API:
```
pip install neptune-contrib
```
neptune.new API:
```
pip install "neptune-client[tensorflow-keras]"
```
or
```
pip install neptune-tensorflow-keras
```

Legacy API usage

```python
import neptune

neptune.init(project_qualified_name='my_workspace/my_project')

from neptunecontrib.monitoring.keras import NeptuneMonitor

model.fit(x_train, y_train,
          epochs=5,
          batch_size=64,
          callbacks=[NeptuneMonitor()])
```

neptune.new API usage

```python
import neptune.new as neptune

run = neptune.init(project='my_workspace/my_project')

from neptune.new.integrations.tensorflow_keras import NeptuneCallback

model.fit(x_train, y_train,
          epochs=5,
          batch_size=64,
          callbacks=[NeptuneCallback(run=run)])
```

Tags

Interaction with tags is quite similar: tags are stored as a StringSet field under the sys/tags path.
Code using legacy API:
```python
# Legacy API
import neptune

neptune.init(project_qualified_name='my_workspace/my_project')

neptune.create_experiment(params=PARAMS,
                          tags=['maskRCNN'])

neptune.append_tag('prod_v2.0.1')
neptune.append_tags('finetune', 'keras')
```
Code using the neptune.new API:
```python
# neptune.new API
import neptune.new as neptune

run = neptune.init(project='my_workspace/my_project',
                   tags=['maskRCNN'])

run["sys/tags"].add('prod_v2.0.1')
run["sys/tags"].add(['finetune', 'keras'])
```
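The StringSet behavior can be mimicked with a plain Python set (a sketch, not the client's implementation): duplicates are ignored, and you can add single values or collections:

```python
tags = set(['maskRCNN'])            # tags passed at init

tags.add('prod_v2.0.1')             # like run["sys/tags"].add('prod_v2.0.1')
tags.update(['finetune', 'keras'])  # like run["sys/tags"].add(['finetune', 'keras'])
tags.add('keras')                   # adding an existing tag is a no-op

print(sorted(tags))
# ['finetune', 'keras', 'maskRCNN', 'prod_v2.0.1']
```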

Logging to a project with legacy Runs

If you have a project with a set of Runs (previously Experiments) that were created using the legacy API and you would like to add new Runs using the neptune.new API, there are a few things to remember.

Metadata structure

Each type of metadata from the previous version (parameters, properties, logs, artifacts, and monitoring) is mapped into a separate fixed namespace. If you want to be able to compare old and new runs, you need to make sure the field paths are backward compatible:
| Legacy API | neptune.new API |
| --- | --- |
| neptune.create_experiment(params=params_dict) | run['parameters'] = params_dict |
| neptune.set_property('SEED', 2458) | run['properties/SEED'] = str(2458) |
| neptune.log_metric('acc', 0.98) | run['logs/acc'].log(0.98) |
| neptune.log_artifact('model.pt', 'model.pt') | run['artifacts/model'].upload('model.pt') |

Data types

Neptune is type-sensitive. For example, a column in the Runs table represents a field with a specific path and a specific type. In particular, FloatSeries and Float are two different types, and by default Neptune doesn't know how to compare them.
With the legacy API, it wasn't possible to set Float fields outside of the create_experiment() function, and a common practice was to store such values as a FloatSeries with one entry through the log_metric() function. To achieve backward compatibility, you need to create a FloatSeries with the neptune.new API as well, by using .log().
Properties set through the legacy API are always String fields. To achieve backward compatibility, use assignment with an explicit cast to str if you are not 100% sure that the resulting value will be a string.
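A minimal sketch of the casting rule, using a plain dict as a stand-in for a run (the path follows the mapping above):

```python
SEED = 2458
run_fields = {}

# Properties written with the legacy API are always String fields,
# so cast explicitly to keep the field type backward compatible:
run_fields['properties/SEED'] = str(SEED)

print(run_fields)
# {'properties/SEED': '2458'}
```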