How to add Neptune to your code
Adding Neptune requires no major modifications to your existing code. We'll go through the process step by step.
This guide covers the most important Neptune functionalities, but you can log many different types of metadata - such as audio, video, interactive charts, and dataset info.
To learn more, see What can you log and display?

Before you start

Make sure you have the Neptune client library installed in your environment (pip install neptune-client), a Neptune project to log to, and your Neptune API token saved as an environment variable (recommended; see Step 6).

Step 1: Import Neptune and create a Run

import neptune.new as neptune
from neptune.new.types import File

run = neptune.init(
    project="workspace-name/project-name",
)
This code imports the Neptune client library and creates a run in the project of your choice. You can now begin logging metadata to Neptune.
To find your project name, head to its Settings and select the Properties tab.
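The snippet above assumes your API token is available to the client, typically through the NEPTUNE_API_TOKEN environment variable (covered in Step 6). If you can't use an environment variable, here is a sketch of passing the token explicitly; the token string is a placeholder:

import neptune.new as neptune

# Passing the token directly; prefer the NEPTUNE_API_TOKEN environment variable
# so the token doesn't end up in your code or version control.
run = neptune.init(
    project="workspace-name/project-name",
    api_token="<your-api-token>",  # placeholder - don't hard-code real tokens
)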

Step 2: Log hyperparameters

parameters = {
    "dense_units": 128,
    "activation": "relu",
    "dropout": 0.23,
    "learning_rate": 0.15,
    "batch_size": 64,
    "n_epochs": 30,
}

run["model/parameters"] = parameters
If you have parameters in the form of a dictionary, you can log them to Neptune all at once. Neptune creates a field of the appropriate type for each dictionary entry.
You can update the hyperparameters or add new ones later in the code:
# Add additional parameters
run["model/parameters/seed"] = RANDOM_SEED
# Update parameters. For example, after triggering early stopping
run["model/parameters/n_epochs"] = epoch

Step 3: Log training metrics

Any framework

for epoch in range(parameters["n_epochs"]):
    [...]  # My training loop
    run["train/epoch/loss"].log(loss)
    run["train/epoch/accuracy"].log(acc)

TF/Keras

from neptune.new.integrations.tensorflow_keras import NeptuneCallback

model.fit(
    x_train,
    y_train,
    callbacks=[NeptuneCallback(run=run)],
)

XGBoost

from neptune.new.integrations.xgboost import NeptuneCallback

xgb.train(
    params=parameters,
    dtrain=dtrain,
    callbacks=[NeptuneCallback(run=run)],
)

LightGBM

from neptune.new.integrations.lightgbm import NeptuneCallback

gbm = lgb.train(
    parameters,
    lgb_train,
    callbacks=[NeptuneCallback(run=run)],
)

scikit-learn

import neptune.new.integrations.sklearn as npt_utils

run["cls_summary"] = npt_utils.create_classifier_summary(
    gbc, X_train, X_test, y_train, y_test
)
run["rfr_summary"] = npt_utils.create_regressor_summary(
    rfr, X_train, X_test, y_train, y_test
)
run["kmeans_summary"] = npt_utils.create_kmeans_summary(km, X, n_clusters=17)
You can log training metrics to Neptune using series fields. Each log() call adds a new value to the series.
You can use Neptune with any machine learning framework. If you're using a framework that Neptune integrates with (most popular ones are), you don't need to write the logging code yourself: just add the Neptune integration and it will track all the training metrics.
Neptune integrates with 30+ tools and frameworks. Check out the full list of integrations.
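If you do write the logging calls yourself (as in the Any framework tab), log() also accepts an optional step argument, which is useful when you want the chart's x-axis to follow your own counter. A minimal, self-contained sketch with dummy values standing in for real metrics, assuming the client version you use supports the step argument:

# Dummy values stand in for your real training metrics.
for epoch in range(parameters["n_epochs"]):
    dummy_loss = 1.0 / (epoch + 1)
    dummy_acc = 1.0 - dummy_loss

    # Each .log() call appends one point to the series;
    # step controls the x-axis value shown in the charts.
    run["train/epoch/loss"].log(dummy_loss, step=epoch)
    run["train/epoch/accuracy"].log(dummy_acc, step=epoch)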

Step 4: Log evaluation results

Evaluation metrics

run["evaluation/accuracy"] = eval_acc
run["evaluation/loss"] = eval_loss
To log evaluation metrics, assign them to fields of your choice. With the snippet above, both metrics are stored in the evaluation namespace.
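Because assigning a dictionary creates one field per entry (as in Step 2), you can also log all evaluation metrics with a single assignment. A minimal sketch using the same variable names:

# Creates evaluation/accuracy and evaluation/loss in one go
run["evaluation"] = {
    "accuracy": eval_acc,
    "loss": eval_loss,
}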

Evaluation charts

Matplotlib

import matplotlib.pyplot as plt
from scikitplot.metrics import plot_roc, plot_precision_recall

fig, ax = plt.subplots()
plot_roc(y_test, y_pred_proba, ax=ax)
run["evaluation/ROC"].upload(fig)

fig, ax = plt.subplots()
plot_precision_recall(y_test, y_pred_proba, ax=ax)
run["evaluation/precision-recall"].upload(fig)

From disk

run["evaluation/ROC"].upload("roc.png")
run["evaluation/precision-recall"].upload("prec-recall.jpg")
You can log plots and charts using the upload() function. In the case of a plot object, it gets converted to an image file, but you can also upload images from the local disk.
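Static images are not the only option: as mentioned in the intro, you can also log interactive charts by uploading them as HTML. A sketch assuming you have Plotly installed (any charting library that File.as_html() supports would work similarly):

import plotly.express as px
from neptune.new.types import File

# A toy interactive chart; uploaded as HTML, it stays interactive in the Neptune app
fig = px.scatter(x=[0, 1, 2, 3], y=[0, 1, 4, 9])
run["evaluation/interactive_scatter"].upload(File.as_html(fig))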

Sample predictions

Images

for image, predicted_label, probabilities in sample_predictions:
    description = "\n".join(
        [f"class {label}: {prob}" for label, prob in probabilities]
    )
    run["evaluation/predictions"].log(
        image, name=predicted_label, description=description
    )

Tabular data

import pandas as pd

df = pd.DataFrame(
    data={
        "y_test": y_test,
        "y_pred": y_pred,
        "y_pred_probability": y_pred_proba.max(axis=1),
    }
)
run["evaluation/predictions"].upload(File.as_html(df))
The Images snippet logs sample predictions by using a FileSeries field to store a series of labeled images. If you're working with tabular data, you can upload a pandas DataFrame and inspect it as a neat table in the Neptune app.
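If you also want the raw predictions as a downloadable file, you can save the DataFrame to disk and upload that file as well. A minimal sketch reusing df from above (the file name is arbitrary):

# Upload the predictions as a plain CSV file alongside the HTML table
df.to_csv("predictions.csv", index=False)
run["evaluation/predictions_csv"].upload("predictions.csv")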

Step 5: Upload model file

From disk

import torch

torch.save(net.state_dict(), "model.pt")
run["model/saved_model"].upload("model.pt")

As a pickle

run["model/pickled_model"].upload(File.as_pickle(model_object))
You can upload any binary file (such as a model file) from disk using the upload() method.
If your model is saved as multiple files, you can upload a whole folder as a FileSet using upload_files().
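For example, a sketch of uploading a whole checkpoint directory (the checkpoints/ path is a placeholder for wherever your framework saves model files):

# Uploads every file in the directory as a single FileSet field;
# glob patterns such as "checkpoints/*.pt" also work.
run["model/checkpoints"].upload_files("checkpoints/")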

Step 6: Run your script and explore your metadata in Neptune

By adding the above lines of code, you're tracking your hyperparameters, training metrics, trained model, and evaluation results.
Now it's time to run your script and explore your metadata in Neptune.
If Neptune can't find your API token, it means you haven't stored it as an environment variable. We strongly recommend doing that; alternatively, you can pass the token to the api_token argument of neptune.init().
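One practical note: when a script finishes, the connection to Neptune is closed automatically, but in interactive sessions (such as Jupyter notebooks) you should stop the run yourself once you're done logging:

# Stop the run so that all queued metadata is synchronized with Neptune
run.stop()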

What's next?

This guide covered the most important Neptune functionalities, but you can log many different types of metadata - such as audio, video, interactive charts, and dataset info.
To learn more, see What can you log and display?