How to add Neptune to your code
Adding Neptune to your code is a quick and simple process. You barely need to modify your existing code: just add a few lines to track the metadata that is relevant to you. We’ll go through this process step by step.
This how-to guide covers the most important Neptune functionalities, but you can log many different types of metadata - audio, video, interactive charts, dataset info, and many others. To learn more, read What can you log and display?

How to add Neptune to your code - step by step

Before you start

Make sure you meet the following prerequisites before starting: you have the Neptune client library installed, and you have your Neptune API token and the name of the project you want to log to.

Step 1: Import Neptune and create a Run

import neptune.new as neptune
from neptune.new.types import File

run = neptune.init(
    api_token='<your_api_token>',
    project='<your_project_name>',
)
This code imports the Neptune client library and creates a Run in the project of your choice. The Run is your gateway for logging metadata to Neptune.
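If you prefer not to hard-code credentials, Neptune can also pick them up from environment variables. A minimal sketch, assuming the NEPTUNE_API_TOKEN and NEPTUNE_PROJECT environment variables are set:

# Assumes NEPTUNE_API_TOKEN and NEPTUNE_PROJECT are set in the environment,
# so no arguments need to be passed explicitly
run = neptune.init()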

Step 2: Log hyperparameters

parameters = {
    'dense_units': 128,
    'activation': 'relu',
    'dropout': 0.23,
    'learning_rate': 0.15,
    'batch_size': 64,
    'n_epochs': 30,
}

run['model/parameters'] = parameters
If you have parameters in the form of a dictionary, you can log them to Neptune in one batch. A field with the appropriate type is created for each dictionary entry.
You can update the hyperparameters or add new ones later in the code:
# Add additional parameters
run['model/parameters/seed'] = RANDOM_SEED

# Update parameters e.g. after triggering an early stopping
run['model/parameters/n_epochs'] = epoch
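Dictionaries can also be assigned at deeper paths and, as far as I recall, nested dictionaries are unpacked into nested namespaces as well. A small, hypothetical sketch (the optimizer parameters are made up):

# Hypothetical example: grouping optimizer settings in their own namespace,
# creating fields such as model/parameters/optimizer/name
run['model/parameters/optimizer'] = {
    'name': 'SGD',
    'momentum': 0.9,
}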

Step 3: Log training metrics

Any framework

for epoch in range(parameters['n_epochs']):
    [...] # My training loop

    run['train/epoch/loss'].log(loss)
    run['train/epoch/accuracy'].log(acc)

TF/Keras

from neptune.new.integrations.tensorflow_keras import NeptuneCallback

model.fit(
    x_train,
    y_train,
    callbacks=[NeptuneCallback(run=run)],
)

XGBoost

from neptune.new.integrations.xgboost import NeptuneCallback

xgb.train(
    params=parameters,
    dtrain=dtrain,
    callbacks=[NeptuneCallback(run=run)],
)

LightGBM

from neptune.new.integrations.lightgbm import NeptuneCallback

gbm = lgb.train(
    parameters,
    lgb_train,
    callbacks=[NeptuneCallback(run=run)],
)

scikit-learn

import neptune.new.integrations.sklearn as npt_utils

run['cls_summary'] = npt_utils.create_classifier_summary(gbc, X_train, X_test, y_train, y_test)

run['rfr_summary'] = npt_utils.create_regressor_summary(rfr, X_train, X_test, y_train, y_test)

run['kmeans_summary'] = npt_utils.create_kmeans_summary(km, X, n_clusters=17)
You can log training metrics to Neptune using series fields. Each .log() call adds a new value at the end of the series. You can use Neptune with any machine learning framework, but if you are using a framework that Neptune integrates with (most of the popular ones are covered), you don't need to write the logging code yourself at all. Just add the Neptune integration and it will track all the training metrics.
Neptune integrates with 25+ tools and frameworks. Check out the full list of integrations.
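If you need more control over the series, .log() also accepts optional step and timestamp arguments. A minimal sketch (the field name and variables below are illustrative):

# Illustrative sketch: log a batch-level metric under an explicit step
run['train/batch/loss'].log(batch_loss, step=global_step)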

Step 4: Log evaluation results

Evaluation metrics

run['evaluation/accuracy'] = eval_acc
run['evaluation/loss'] = eval_loss
To log evaluation metrics, simply assign them to a field of your choice. With the snippet above, both evaluation metrics are stored in the same 'evaluation' namespace.
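Just like with the hyperparameters above, you can also assign a dictionary to the namespace to log both metrics at once; a minimal equivalent sketch:

# Equivalent to the two assignments above
run['evaluation'] = {'accuracy': eval_acc, 'loss': eval_loss}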

Evaluation charts

Matplotlib

import matplotlib.pyplot as plt
from scikitplot.metrics import plot_roc, plot_precision_recall

fig, ax = plt.subplots()
plot_roc(y_test, y_pred_proba, ax=ax)

run['evaluation/ROC'].upload(fig)

fig, ax = plt.subplots()
plot_precision_recall(y_test, y_pred_proba, ax=ax)

run['evaluation/precision-recall'].upload(fig)

From disk

run['evaluation/ROC'].upload('roc.png')
run['evaluation/precision-recall'].upload('prec-recall.jpg')
You can log plots and charts easily using the .upload() method. A plot object is converted to an image file and uploaded, but you can also upload images from the local disk.
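If you use a plotting library such as Plotly or Bokeh, you can also keep the chart interactive by uploading it as HTML. A minimal sketch, assuming fig is a Plotly figure and using an illustrative field name:

# Assumes `fig` is a Plotly figure; File.as_html converts it
# to an interactive HTML chart before uploading
run['evaluation/ROC-interactive'].upload(File.as_html(fig))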

Sample predictions

Images

for image, predicted_label, probabilities in sample_predictions:

    description = '\n'.join(['class {}: {}'.format(label, prob) for label, prob in probabilities])

    run['evaluation/predictions'].log(
        image,
        name=predicted_label,
        description=description,
    )

Tabular data

import pandas as pd

df = pd.DataFrame(
    data={
        'y_test': y_test,
        'y_pred': y_pred,
        'y_pred_probability': y_pred_proba.max(axis=1),
    }
)

run['evaluation/predictions'].upload(File.as_html(df))
The snippet above logs sample predictions using a FileSeries field to store a series of labeled images. If you are working with tabular data, you can upload a Pandas DataFrame and inspect it as a neat table in the UI.

Step 5: Upload model file

From disk

torch.save(net.state_dict(), 'model.pt')

run['model/saved_model'].upload('model.pt')

As a pickle

run['model/pickled_model'].upload(File.as_pickle(model_object))
You can upload any binary file (e.g. a model file) from disk using the .upload() method. If your model is saved as multiple files, you can upload a whole folder as a FileSet using .upload_files(), as sketched below.
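A minimal sketch of the folder upload, assuming your model checkpoints live in a local checkpoints/ directory (both the field name and the path are illustrative):

# Uploads the whole 'checkpoints/' folder as a FileSet
run['model/checkpoints'].upload_files('checkpoints/')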

Step 6: Run your script and explore your metadata in Neptune

That's all! By adding just those few lines, you are tracking your hyperparameters, training metrics, trained model, and evaluation results. Now it's time to run your script and explore your metadata in Neptune.
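One small note: in a plain script the run should be stopped automatically when the script finishes, but in interactive environments such as Jupyter notebooks it keeps syncing until you stop it explicitly:

# Stop the run once you are done logging (mainly needed in notebooks)
run.stop()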

What's next?

This how-to guide covered the most important Neptune functionalities, but you can log many different types of metadata - audio, video, interactive charts, dataset info, and many others. To learn more, read What can you log and display?