Logging experiment data¶
During machine learning experimentation, you need to keep track of many different types of metadata. Neptune helps you do that by logging, tracking, and visualizing it.
You can log many different types of data to an experiment: metrics, losses, images, interactive visualizations, model checkpoints, pandas DataFrames, and more.
Check the What objects can you log to Neptune section below for a complete list.
Basics of logging¶
Logging experiment data to Neptune is simple and straightforward.
Let's create a minimal code snippet that logs a single value to the experiment: 'acc'=0.95.
import neptune
# Set project
neptune.init('my_workspace/my_project')
# Create experiment
neptune.create_experiment()
# Log 'acc' value 0.95
neptune.log_metric('acc', 0.95)
The above snippet sets the project, creates an experiment, and logs one value to it. When the script ends, the experiment is closed automatically. As a result, you have a new experiment with one value in one metric ('acc'=0.95).
Everything that is evaluated after neptune.create_experiment() and before the end of the script, or before you call stop() (reference docs: stop()), can be logged to the experiment.
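For instance, here is a minimal sketch of logging before an explicit stop() call (the metric name and value are just an illustration):
import neptune
# Set project and create experiment
neptune.init('my_workspace/my_project')
neptune.create_experiment()
# Anything logged here lands in the experiment
neptune.log_metric('val_loss', 0.27)
# Close the experiment explicitly; nothing after this call is logged to it
neptune.stop()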
What objects can you log to Neptune¶
Neptune supports logging many different types of data. Here, you can find all of them listed and described.
Note
Remember to set the project using init() and create an experiment with create_experiment() before you start logging.
Metrics¶
Log metrics to Neptune using log_metric().
# Single value
neptune.log_metric('test_accuracy', 0.76)
# Accuracy per epoch
for epoch in range(epoch_nr):
    epoch_accuracy = ...
    neptune.log_metric('epoch_accuracy', epoch_accuracy)

A metric can be accuracy, MSE, or any other numerical value. All metrics are visualized as charts in the experiment. You can also inspect and download the raw data from the logs section.
You can also log with an explicit step number, like this:
# Log batch accuracy per epoch
for i, batch in enumerate(train_data):
    batch_acc = ...
    neptune.log_metric('batch_accuracy', x=i, y=batch_acc)
In the above snippet, the x argument must be strictly increasing.
Note
You can create as many metrics as you wish.
Note
You can download metrics as a pandas DataFrame for further local analysis.
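A minimal sketch, assuming the legacy neptune-client Experiment exposes get_numeric_channels_values() and that an experiment is running:
exp = neptune.get_experiment()
# Fetch logged values of the 'epoch_accuracy' metric as a pandas DataFrame
metrics_df = exp.get_numeric_channels_values('epoch_accuracy')
metrics_df.to_csv('epoch_accuracy.csv', index=False)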
Parameters¶
Define parameters as a Python dictionary and pass it to the create_experiment() method to log them.
# Define parameters
PARAMS = {'batch_size': 64,
          'dense_units': 128,
          'dropout': 0.2,
          'learning_rate': 0.001,
          'optimizer': 'Adam'}
# Pass parameters to create experiment
neptune.create_experiment(params=PARAMS)
Parameters in the experiment view
Parameters in the experiment dashboard
You can use them later to analyse or compare experiments. They are displayed in the parameters section of the experiment. Moreover, every parameter can be displayed as a column on the experiment dashboard (look for green columns).
Note
Experiment parameters are read-only. You cannot change or update them during or after the experiment.
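You can still read them back programmatically; a minimal sketch, assuming Experiment.get_parameters() is available in your neptune-client version and an experiment is running:
exp = neptune.get_experiment()
# Returns a dictionary of the parameters passed to create_experiment()
params = exp.get_parameters()
print(params['learning_rate'])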
Code¶
Neptune supports code versioning. There are a few ways to do that.
Track your git information¶
If you start an experiment from a directory that is part of a git repository, Neptune will automatically find the .git directory and log some information from it.
It creates a summary in the details section with:
status showing whether the repo has uncommitted changes (dirty flag),
commit information (id, message, author, date),
branch,
remote address to your experiment,
git checkout command with commit.

Code Snapshot¶
Neptune automatically snapshots your code when you call create_experiment().
By default, it will only save the entrypoint file (main.py if you run python main.py), but you can pass a list of files or patterns (like '*.py') to specify more files.
# Snapshot model.py and prep_data.py
neptune.create_experiment(upload_source_files=['model.py', 'prep_data.py'])
# Snapshot all python files and 'config.yaml' file
neptune.create_experiment(upload_source_files=['*.py', 'config.yaml'])

You will have all sources in the source code section of the experiment. Neptune also logs the entrypoint file so that you have all the information about the run sources.
Warning
When using pattern expansion, such as '*.py', make sure that your expression does not match too many files or non-source-code files. For example, using '*' as a pattern will upload all files and directories from the current working directory. This may result in logging files that you did not want to upload and in cluttering your storage.
Notebook Code Snapshot¶
Neptune auto-snapshots your notebook every time you create an experiment in that notebook.
Another option for logging a notebook checkpoint is clicking a button in the Jupyter or JupyterLab UI. This is useful for logging notebooks with EDA or manual model analysis.
To get started, install the notebook extension, then go to the Keeping track of Jupyter Notebooks guide, which explains everything.

Images¶
Log images to Neptune. You can log either a single image or a series of them, using log_image(). Several data formats are supported, as described in the subsections below.
In all cases you will have images in the logs section of the experiment, where you can browse and download them.
You can log an unlimited number of images, either to a single log or across multiple image logs. Simply use the same log name, for example 'misclassified_images', as the first argument of log_image().
Note
The single image size limit is 15MB. If you work with larger files, you can log them using log_artifact(). Check the Files section for more info.
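For example, a minimal sketch (the file path is just an illustration):
# A large image that exceeds the 15MB limit can be logged as an artifact instead
neptune.log_artifact('outputs/high-res-segmentation-map.tiff')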
Image file¶
You can log image files directly from disk by using log_image().
Log a single image from disk:
neptune.log_image('bbox_images', 'train-set/image.png')
Log a series of images in a for loop:
for name in misclassified_images_names:
    y_pred = ...
    y_true = ...
    neptune.log_image('misclassified_images',
                      'misclassified_images/{}.png'.format(name),
                      description='y_pred={}, y_true={}'.format(y_pred, y_true))

Matplotlib¶
Log a Matplotlib figure (matplotlib.figure.Figure) as an image by using log_image().
# Import matplotlib
import matplotlib.pyplot as plt
# Generate figure
fig = plt.figure(figsize=(7, 9))
...
# Log figure to experiment
neptune.log_image('matplotlib-fig', fig, image_name='streamplot')

You will have the Matplotlib figure in the logs section of the experiment, where you can browse and download it.
Note
Check Interactive Matplotlib logging to see how to log the same matplotlib figure and have it turned interactive in Neptune.
PIL¶
Log a PIL image straight from memory by using log_image().
# Import PIL
from PIL import Image
# Load image
image = Image.open('Representation-learning.jpg')
# Log image to experiment
neptune.log_image('PIL-image', image, image_name='representation learning', description='Example PIL image in experiment')

You will have images in the logs section of the experiment, where you can browse and download them.
NumPy¶
Log a NumPy array (2d or 3d) straight from memory and have it visualized as an image by using log_image().
# Import NumPy
import numpy as np
# Prepare some NumPy arrays
for j in range(5):
    array = ...
    # Log them as images
    neptune.log_image('NumPy array as image',
                      array,
                      image_name='array-{}'.format(j),
                      description='Example NumPy as image')

You will have NumPy images in the logs section of the experiment, where you can browse and download them.
Interactive charts¶

You can log interactive charts and they will be rendered interactively in the artifacts section under charts/my_chart.html. Common visualization libraries are supported, as described in the subsections below: Matplotlib (turned interactive automatically), Altair, Bokeh, and Plotly.
Note
For a full-screen view, you can open the visualization in a new browser tab by clicking the "arrow-pointing-top-right" icon located right above your visualization:

Matplotlib¶
Log a Matplotlib figure (matplotlib.figure.Figure) as an interactive chart by using log_chart().
Note
This option is tested with matplotlib==3.2.0 and plotly==4.12.0. Make sure that you have the correct versions installed. See: plotly installation guide.
# Import matplotlib and log_chart
import matplotlib.pyplot as plt
from neptunecontrib.api import log_chart
# Generate figure
fig = plt.figure(figsize=(7, 9))
...
# Log figure to experiment
log_chart('matplotlib-interactive', fig)

The interactive chart will appear in the artifacts section under the path charts/my_figure.html (in the snippet above: charts/matplotlib-interactive.html), where you can explore it, open it in full screen, and download it.
Note
Check images logging to see how to log a matplotlib figure as an image.
Altair¶
Log an Altair chart as an interactive chart by using log_chart().
# Import altair and log_chart
import altair as alt
from neptunecontrib.api import log_chart
# Generate figure
alt_chart = alt.Chart(...)
...
# Log figure to experiment
log_chart(name='altair-interactive', chart=alt_chart)

The interactive chart will appear in the artifacts section under the path charts/my_figure.html (in the snippet above: charts/altair-interactive.html), where you can explore it, open it in full screen, and download it.
Note
You need to install plotly to log Altair as an interactive chart. See: plotly installation guide.
Bokeh¶
Log a Bokeh chart as an interactive chart by using log_chart().
# Import bokeh and log_chart
from bokeh.plotting import figure
from neptunecontrib.api import log_chart
# Generate figure
bokeh_chart = figure(...)
...
# Log figure to experiment
log_chart(name='bokeh-interactive', chart=bokeh_chart)

The interactive chart will appear in the artifacts section under the path charts/my_figure.html (in the snippet above: charts/bokeh-interactive.html), where you can explore it, open it in full screen, and download it.
Note
You need to install plotly to log Bokeh as an interactive chart. See: plotly installation guide.
Plotly¶
Log a Plotly chart as an interactive chart by using log_chart().
# Import plotly and log_chart
import plotly.express as px
from neptunecontrib.api import log_chart
# Generate figure
plotly_fig = px.histogram(...)
...
# Log figure to experiment
log_chart(name='plotly-interactive', chart=plotly_fig)

The interactive Plotly chart will appear in the artifacts section under the path charts/my_figure.html (in the snippet above: charts/plotly-interactive.html), where you can explore it, open it in full screen, and download it.
Note
You need to install plotly to enable this feature. See: plotly installation guide.
Text¶
Log text information to the experiment by using log_text().
neptune.log_text('my_text_data', 'text I keep track of, like query or tokenized word')

You will have it in the logs section of the experiment, where you can browse and download it.
Note
A single line of a text log is limited to 1k characters, while the number of lines is not limited.
Hardware consumption¶
Automatically monitor hardware utilization for your experiments:
CPU (average of all cores),
memory,
for each GPU unit - memory usage and utilization.

All that information is visualized in the monitoring section. You can turn off this feature when you call create_experiment().
# Turn off hardware monitoring
neptune.create_experiment(send_hardware_metrics=False)
As a result, hardware consumption is not tracked.
Note
To enable this feature you need to install psutil. Check our installation guide for more info; installation takes about a minute.
Warning
If you see the message Info (NVML): NVML Error: NVML Shared Library Not Found - GPU usage metrics may not be reported., your GPU consumption is not being logged.
It means that either:
there are no GPUs on your machine,
your NVIDIA NVML library is not installed or configured properly. See how to install and configure NVML.
Logging GPU on Windows
On Windows, Neptune searches for the nvml.dll file in the standard locations:
C:\Program Files\NVIDIA Corporation\NVSMI\nvml.dll
C:\Windows\System32\nvml.dll
If you are having trouble logging GPU metrics on Windows, please check that your NVML installation is correct and that the nvml.dll file is in one of those locations.
Alternatively, you can set a custom location for nvml.dll on Windows by setting the NVML_DLL_PATH environment variable.
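A minimal sketch of setting the variable from Python before Neptune starts monitoring (the path is just an illustration, and this assumes the variable is read when the experiment is created):
import os
# Point Neptune to a non-standard nvml.dll location (illustrative path)
os.environ['NVML_DLL_PATH'] = r'D:\drivers\nvml\nvml.dll'
import neptune
neptune.init('my_workspace/my_project')
neptune.create_experiment()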
Experiment information¶
To better describe an experiment, you can use 'name', 'description', and 'tags'.
Experiment name¶
You can add a name to the experiment when you call create_experiment(). Try to keep it short and descriptive.
neptune.create_experiment(name='Mask R-CNN with data-v2')

Experiment name appears in the details section and can be displayed as a column on the experiment dashboard.
You can edit ‘name’ directly in the UI.
Note
You can search for an experiment by its name. Here is how: Searching and filtering experiments.
Experiment description¶
You can add a longer note to the experiment when you call create_experiment().
neptune.create_experiment(description='neural net trained on Fashion-MNIST with high LR and low dropout')

Experiment description appears in the details section and can be displayed as a column on the experiment dashboard.
You can edit ‘description’ directly in the UI.
Note
You can use info in the description to later search for an experiment in the UI. Here is how: Searching and filtering experiments.
Experiment tags¶
You can add tags to the experiment when you call create_experiment() or during an experiment using append_tag().
# Add tags at the beginning
neptune.create_experiment(tags=['classification', 'pytorch', 'prod_v2.0.1'])
# Append new tag during experiment (it must be running)
neptune.append_tag('new-tag')

Tags are a convenient way to organize or group experiments. They appear in the details section and can be displayed as a column on the experiment dashboard. Tags are editable in the UI.
You can also remove tags programmatically using remove_tag():
# Assuming experiment has tags: `['tag-1', 'tag-2']`.
experiment.remove_tag('tag-1')
Note
You can quickly filter by tag by clicking on it in the experiments dashboard. Check Searching and filtering experiments guide for more options.
Properties¶
Log 'key': 'value' pairs to the experiment. Those could be data versions, a URL or path to the model on your filesystem, or anything else that fits the generic 'key': 'value' scheme.
You can do it when you call create_experiment():
# Pass Python dictionary
neptune.create_experiment(properties={'data_version': 'fd5c084c-ff7c',
                                      'model_id': 'a44521d0-0fb8'})
Another option is to add a property during an experiment (it must be running) by using set_property().
# Single key-value pair at a time
neptune.set_property('model_id', 'a44521d0-0fb8')

What distinguishes them from parameters is that they are editable after the experiment is created.
They appear in the details section and can be displayed as a column on the experiment dashboard.
Note
You can also remove a property programmatically with remove_property().
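A minimal sketch, assuming an experiment is running and the property was set earlier:
exp = neptune.get_experiment()
# Remove the previously set 'model_id' property
exp.remove_property('model_id')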
Data versions¶
Log data version or dataset hash to Neptune as a property.
# Import hashlib and TensorFlow
import hashlib
import tensorflow as tf
# Prepare dataset
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.fashion_mnist.load_data()
train_images = train_images / 255.0
test_images = test_images / 255.0
# Log data version as experiment property
neptune.set_property('train_images_version', hashlib.md5(train_images).hexdigest())
neptune.set_property('test_images_version', hashlib.md5(test_images).hexdigest())

In this way you can keep track of what data a given model was trained on. The data version will appear in the details section and can be displayed as a column on the experiment dashboard.
You can also use log_data_version() to log a data version from a filepath:
from neptunecontrib.versioning.data import log_data_version
FILEPATH = '/path/to/data/my_data.csv'
neptune.create_experiment()
log_data_version(FILEPATH)
If your data is on AWS S3, use log_s3_data_version() to log the data version of an S3 bucket to Neptune:
from neptunecontrib.versioning.data import log_s3_data_version
BUCKET = 'my-bucket'
PATH = 'train_dir/'
neptune.create_experiment()
log_s3_data_version(BUCKET, PATH)
Files¶
Log any file or directory you want by using log_artifact(). This includes model checkpoints, csv files, binaries, or anything else.
# Log file
neptune.log_artifact('/data/auxiliary-data.zip')
# Log directory
neptune.log_artifact('cv-models')

You can browse and download files in the artifacts section of the experiment.
Note
Keep an eye on your artifacts as they may consume a lot of storage. You can always remove some by using delete_artifacts().
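A minimal sketch, assuming an experiment is running (the artifact path is just an illustration):
exp = neptune.get_experiment()
# Remove a previously logged artifact by its path in the artifacts section
exp.delete_artifacts('cv-models/old-model.h5')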
Warning
Make sure that you define the correct path to the files that you want to upload. If you pass a directory, all of its contents are uploaded, which may result in logging a large amount of unintended data and cluttering your storage.
Model checkpoints¶
Log model checkpoints as artifacts using log_artifact().
# Import torch
import torch
# Log PyTorch model weights
my_model = ...
torch.save(my_model, 'my_model.pt')
neptune.log_artifact('my_model.pt', 'model_checkpoints/my_model.pt')

This technique lets you save a model from any deep learning framework. The model checkpoint will appear in the artifacts section, in the 'model_checkpoints' directory (see example checkpoints).
HTML objects¶
Log HTML files using log_html().
# Import from neptune contrib
from neptunecontrib.api import log_html
# Log HTML to experiment
# html is a valid HTML string
html = str(...)
log_html('go_to_docs_button', html)

The HTML will appear in the artifacts section under the path html/my_file.html. It is interactive in Neptune.
Video¶
Log video files and watch them right in the artifacts section of the experiment. Use log_video() to do it.
# Import log_video
from neptunecontrib.api.video import log_video
# Log video file from disk
log_video('/path/to/video-file.mp4')

As a result, a video player is rendered in the artifacts section under the path video/my_video.html (in the snippet above: video/video-file.html), where you can watch it, open it in full screen, and download it.
Audio¶
Log audio files and listen to them directly from the artifacts section of the experiment. Use log_audio() to do it.
# Import log_audio
from neptunecontrib.api.audio import log_audio
# Log audio file from disk
log_audio('/path/to/audio-file.mp3')

As a result, a player is rendered in the artifacts section under the path audio/my_audio.html (in the snippet above: audio/audio-file.html), where you can listen to it and download it.
Tables¶
When you log tabular data, such as a csv file or a DataFrame, Neptune will display it as a table automatically.
pandas¶
Log a pandas DataFrame and have it visualized as a table. Use log_table() to do it.
# Import pandas and log_table
import pandas as pd
from neptunecontrib.api.table import log_table
# Create pandas DataFrame
df = pd.DataFrame(...)
...
# Log DataFrame
log_table('dataframe-in-experiment', df)

The DataFrame is displayed in the artifacts section under the path tables/my_dataframe.html (in the snippet above: tables/dataframe-in-experiment.html), where you can inspect entries and download the data.
csv¶
Log csv files and have them visualized as a table. Use log_artifact() to do it.
# Log csv file
neptune.log_artifact('/path/to/test_preds.csv')

A table rendered from the csv data is displayed in the artifacts section, where you can inspect entries and download the data.
Python objects¶
Some Python objects are handled automatically.
Pickled object¶
You can log a pickled Python object by using log_pickle(). It takes an object, pickles it, and logs it to Neptune as a file.
Log a pickled random forest:
from neptunecontrib.api import log_pickle
RandomForest = ...
log_pickle('rf.pkl', RandomForest)

Note
You can download a pickled file back as a Python object using get_pickle().
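A minimal sketch, assuming get_pickle(path, experiment) from neptunecontrib.api and a running experiment:
from neptunecontrib.api import get_pickle
exp = neptune.get_experiment()
# Fetch the previously logged pickle back as a Python object
random_forest = get_pickle('rf.pkl', exp)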
Explainers (DALEX)¶
Log a Dalex explainer to Neptune and inspect it interactively. Use log_explainer() to do it.
# Import dalex and the explainer logging utility
import dalex as dx
from neptunecontrib.api.explainers import log_explainer
# Train your model
clf = ...
X = ...
y = ...
clf.fit(X, y)
# Create dalex explainer
expl = dx.Explainer(clf, X, y, label="Titanic MLP Pipeline")
# Log explainer
log_explainer('explainer.pkl', expl)

As a result, the pickled explainer and its charts will be available in the artifacts section of the experiment.
Logging with integrations¶
Besides logging using the Neptune Python library, you can also use integrations that let you log relevant data with almost no code changes. Have a look at the Integrations page for more information or find your favourite library there.
Advanced¶
Using Project and Experiment objects explicitly¶
If you work with a large codebase, you may want to switch from using global neptune calls like neptune.create_experiment() or neptune.log_metric() to passing Project or Experiment objects around.
Let's revisit the minimal code snippet from the basics section and modify it to use the Project and Experiment objects and log a bit more data.
# Import libraries
import neptune
from neptunecontrib.api import log_chart
from neptunecontrib.api.table import log_table
# Set project
project = neptune.init('my_workspace/my_project')
# Use 'project' to create experiment
my_exp = project.create_experiment(name='minimal-example-exp-proj',
                                   tags=['do-not-remove'])
# Log using my_exp
my_exp.log_metric(...)
my_exp.log_image(...)
my_exp.log_text(...)
# Logging with neptunecontrib methods is a bit different
df = ...
fig = ...
log_table(name='pandas_df', table=df, experiment=my_exp)
log_chart('matplotlib-interactive', fig, my_exp)
A few explanations:
Use the instance of the Project object returned by init() to create a new experiment.
create_experiment() returns an Experiment object that we use for logging purposes.
Notice that logging with the neptunecontrib API is slightly different, as you pass the Experiment object as an argument.
Pass Experiment object around to log from multiple Python files¶
You can pass the Experiment object around and use it to populate functions' parameters and perform logging from multiple Python files.
Let’s create a recipe for that:
main.py
import neptune
from utils import log_images_epoch, log_preds_as_table
# Set project
project = neptune.init('my_workspace/my_project')
# Create experiment
my_exp = project.create_experiment(...)
# Log metrics in the same file
my_exp.log_metric('acc', 0.95)
my_exp.log_metric('acc', 0.99)
# Log by using imported function, pass 'my_exp'
log_images_epoch(experiment=my_exp)
log_preds_as_table(experiment=my_exp)
utils.py
from neptunecontrib.api.table import log_table

# 'experiment' is an instance of the Experiment object
def log_images_epoch(experiment):
    image1 = ...
    image2 = ...
    experiment.log_image('PIL-image', image1)
    experiment.log_image('NumPy-image', image2)

# 'experiment' is an instance of the Experiment object
def log_preds_as_table(experiment):
    preds_df = ...
    log_table(name='test_preds_df', table=preds_df, experiment=experiment)
In this way you can work with a larger codebase and log from multiple Python files.
Logging to multiple experiments in one script¶
You can freely create multiple experiments in a single script and log to them separately. The general recipe is very straightforward: you simply create multiple Experiment objects, one for each experiment.
Create three experiments and log a metric to each separately:
import neptune
# Set project
project = neptune.init('my_workspace/my_project')
# Create three experiments
my_exp1 = project.create_experiment(name='1st')
my_exp2 = project.create_experiment(name='2nd')
my_exp3 = project.create_experiment(name='3rd')
# Log metric to my_exp1
for batch in data:
    loss = ...
    my_exp1.log_metric('mean_squared_error', loss)
for batch in data:
    loss = ...
    my_exp2.log_metric('mean_squared_error', loss)
for batch in data:
    loss = ...
    my_exp3.log_metric('mean_squared_error', loss)
neptune.log_text('info', 'This goes to the most recently created experiment, here "my_exp3".')
A few remarks:
We log MSE by using my_exp1, my_exp2, and my_exp3. In this way you can log to many experiments from the same Python script.
If you use a global call like neptune.log_X(), then you only log to the most recently created experiment.