
Add Neptune to your code#

  1. In your code, import the Neptune client library:

    import neptune
    
  2. Depending on how you want to organize the metadata in the app, start one or more Neptune objects:

    For metadata relating to a single experiment:

    run = neptune.init_run() # (1)!
    
    1. We recommend saving your API token and project name as environment variables.

      If needed, you can pass them as arguments when initializing Neptune:

      neptune.init_run(
          project="workspace-name/project-name",
          api_token="YourNeptuneApiToken",
      )
      

    Log experiment tracking metadata:

    params = {
        "max_epochs": 10,
        "optimizer": "Adam",
        "dropout": 0.2,
    }
    run["parameters"] = params
    

    When you're done, stop the connection to sync the data:

    run.stop()
    

    View the results in the Runs section.

    For metadata relating to a model as a whole, register the model at a high level:

    model = neptune.init_model(key="FOREST") # (1)!
    
    1. If the project key is CLS, this creates a model with the ID CLS-FOREST.

    Log metadata common to all model versions:

    model["signature"].upload("model_signature.json")
    

    When you're done, stop the connection to sync the data:

    model.stop()
    

    See the results in the Models section.

    For metadata specific to a single model version, capture the version specifics:

    model_version = neptune.init_model_version(model="CLS-FOREST") # (1)!
    
    1. Creates a model version based on the model CLS-FOREST. It will have the ID CLS-FOREST-1.

    Log metadata specific to a model version:

    model_version["model/binary"].upload("model.pt")
    

    When you're done, stop the connection to sync the data:

    model_version.stop()
    

    See the results in the Models section.

    For metadata common to the entire project, initialize the project as a Neptune object:

    project = neptune.init_project(project="ml-team/classification")
    

    Log project-level metadata:

    project["dataset/v0.1"].track_files("s3://datasets/images")
    

    When you're done, stop the connection to sync the data:

    project.stop()
    
  3. After executing the script, Neptune prints a link to the relevant section in the app.
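
     The link looks similar to the following (the workspace, project, and run ID depend on your setup):

     https://app.neptune.ai/workspace/project/e/RUN-1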

Run example#

The following example shows, at a high level, how you can plug Neptune into a typical model training flow.

Start the tracking#

In your model training script, import Neptune and start a run:

import neptune

run = neptune.init_run(project="ml-team/classification") # (1)!
  1. You can also set the project name as an environment variable. For instructions, see Set the project name.
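
     As a rough sketch, you could also set the environment variable from Python before initializing the run; the project name below is just a placeholder:

     import os

     os.environ["NEPTUNE_PROJECT"] = "ml-team/classification"  # placeholder: workspace-name/project-name

     run = neptune.init_run()  # Neptune picks up the project name from the environment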

Log hyperparameters#

Define some hyperparameters to track for the experiment and log them to the run object:

parameters = {
    "dense_units": 128,
    "activation": "relu",
    "dropout": 0.23,
    "learning_rate": 0.15,
    "batch_size": 64,
    "n_epochs": 30,
}
run["model/parameters"] = parameters

You can update or add new entries later in the code:

# Add additional parameters
run["model/parameters/seed"] = RANDOM_SEED

# Update parameters. For example, after triggering early stopping
run["model/parameters/n_epochs"] = epoch

Log training metrics#

Track the training process by logging your training metrics. Use the append() method for a series of values, or one of our ready-made integrations:

Any framework:

for epoch in range(parameters["n_epochs"]):
    [...]  # My training loop

    run["train/epoch/loss"].append(loss)
    run["train/epoch/accuracy"].append(acc)

Keras:

from neptune.integrations.tensorflow_keras import NeptuneCallback

model.fit(
    x_train,
    y_train,
    callbacks=[NeptuneCallback(run=run)],
)

XGBoost:

from neptune.integrations.xgboost import NeptuneCallback

xgb.train(
    params=parameters,
    dtrain=dtrain,
    callbacks=[NeptuneCallback(run=run)],
)

LightGBM:

from neptune.integrations.lightgbm import NeptuneCallback

gbm = lgb.train(
    parameters,
    lgb_train,
    callbacks=[NeptuneCallback(run=run)],
)

scikit-learn:

import neptune.integrations.sklearn as npt_utils

run["cls_summary"] = npt_utils.create_classifier_summary(
    gbc, X_train, X_test, y_train, y_test
)

run["rfr_summary"] = npt_utils.create_regressor_summary(
    rfr, X_train, X_test, y_train, y_test
)

run["kmeans_summary"] = npt_utils.create_kmeans_summary(
    km, X, n_clusters=17
)

You can use Neptune with any machine learning framework. If you use a framework that supports logging (most of them do), you don't need to write the logging code yourself: the Neptune integration takes care of tracking the training metrics.

Related

Integrations

Log evaluation results#

Assign the metrics to a namespace and field of your choice:

run["evaluation/accuracy"] = eval_acc
run["evaluation/loss"] = eval_loss

Using the snippet above, both evaluation metrics will be stored in the same evaluation namespace.
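
With the snippet above, the run would contain a structure roughly like this:

run root
|-- evaluation
    |-- accuracy (Float)
    |-- loss (Float)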

You can log plots and charts with the upload() method.

A plot object is converted to an image file, but you can also upload images from the local disk.

import matplotlib.pyplot as plt
from scikitplot.metrics import plot_roc, plot_precision_recall

fig, ax = plt.subplots()
plot_roc(y_test, y_pred_proba, ax=ax)

run["evaluation/ROC"].upload(fig)

fig, ax = plt.subplots()
plot_precision_recall(y_test, y_pred_proba, ax=ax)

run["evaluation/precision-recall"].upload(fig)

To upload existing image files from disk:

run["evaluation/ROC"].upload("roc.png")
run["evaluation/precision-recall"].upload("prec-recall.jpg")

The following snippet logs sample predictions by using the FileSeries type to log a series of labeled images:

for image, predicted_label, probabilities in sample_predictions:

    description = "\n".join(
        [f"class {label}: {prob}" for label, prob in probabilities]
    )

    run["evaluation/predictions"].append(
        image,
        name=predicted_label,
        description=description,
    )

You can upload tabular data as a pandas DataFrame and inspect it as a neat table in the app:

import pandas as pd
from neptune.types import File

df = pd.DataFrame(
    data={
        "y_test": y_test,
        "y_pred": y_pred,
        "y_pred_probability": y_pred_proba.max(axis=1),
    }
)

run["evaluation/predictions"].upload(File.as_html(df))

You can also upload the data as a CSV file, which you can preview as an interactive table in the app.
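
A minimal sketch, reusing the DataFrame from above (the file and field names are arbitrary):

# Write the DataFrame to a CSV file and upload the file to the run
df.to_csv("predictions.csv", index=False)
run["evaluation/predictions_csv"].upload("predictions.csv")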

Upload relevant files#

You can upload any binary file (such as a model file) from disk using the upload() method.

If your model is saved as multiple files, you can upload a whole folder as a FileSet with upload_files().

torch.save(net.state_dict(), "model.pt")

run["model/saved_model"].upload("model.pt")

You can also pickle a Python object and upload it directly:

from neptune.types import File

run["model/pickled_model"].upload(File.as_pickle(model_object))
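
If the model is saved as multiple files, a minimal sketch using upload_files() (the folder name is a placeholder):

# Upload every file in the folder as a FileSet
run["model/checkpoint"].upload_files("checkpoints/*")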

Instead of uploading entire files, you can track their metadata only.

run["dataset/train"].track_files("./datasets/train/images")

For details, see Track artifacts.

Tips

  • To organize the model metadata in the Models section, instead of just logging to a run object, you can create a model object and log the data there. For more, see Model registry overview.

Explore results#

Once you're done logging, end the run with the stop() method:

run.stop()

Next, run your script and follow the link to explore your metadata in Neptune.

Sample output

[neptune] [info ] Neptune initialized. Open in the app: https://app.neptune.ai/workspace/project/e/RUN-1

In the above example, the run ID is RUN-1.

Defining a custom init_run() function#

You can set up a custom run initialization function by wrapping neptune.init_run(). This way, you can automatically populate required fields and tags each time a run is created. It also ensures that the names of namespaces and fields are the same across all runs, making it easier to find and compare them.

This approach can be especially helpful when multiple people collaborate on the same project. It also frees developers from having to add these fields to their code and remember the field names.

Below is an example of a custom function and its usage in the model-training script.

init_run() parameter list

See in API reference: neptune.init_run()

project (str, optional; default: None)
    Name of a project in the form workspace-name/project-name. If None, the value of the NEPTUNE_PROJECT environment variable is used.

api_token (str, optional; default: None)
    Your Neptune API token (or a service account's API token). If None, the value of the NEPTUNE_API_TOKEN environment variable is used.

    To keep your token secure, avoid placing it in source code. Instead, save it as an environment variable.

with_id (str, optional; default: None)
    The Neptune identifier of an existing run to resume, such as "CLS-11". The identifier is stored in the object's sys/id field. If omitted or None is passed, a new tracked run is created.

custom_run_id (str, optional; default: None)
    A unique identifier that can be used to log metadata to a single run from multiple locations. Max length: 36 characters. If None and the NEPTUNE_CUSTOM_RUN_ID environment variable is set, Neptune uses that as the custom_run_id value. For details, see Set custom run ID.

mode (str, optional; default: async)
    Connection mode in which the logging will work. Possible values are async, sync, offline, read-only, and debug.

    If you leave it out, the value of the NEPTUNE_MODE environment variable is used. If that's not set, the default async is used.

name (str, optional; default: "Untitled")
    Custom name for the run. You can use it as a human-readable ID and add it as a column in the runs table (sys/name).

description (str, optional; default: "")
    Editable description of the run. You can add it as a column in the runs table (sys/description).

tags (list, optional; default: [])
    Must be a list of str representing the tags for the run. You can edit them after the run is created, either in the run information or the runs table.

source_files (list or str, optional; default: None)
    List of source files to be uploaded. Must be a list of str or a single str. Uploaded sources are displayed in the Source code section of the run.

    If None is passed, the Python file from which the run was created will be uploaded. When resuming a run, no file is uploaded by default. Pass an empty list ([]) to upload no files.

    Unix-style pathname pattern expansion is supported. For example, you can pass "*.py" to upload all Python source files from the current directory. Paths of uploaded files are resolved relative to the calculated common root of all uploaded source files. For recursive lookup, use "**/*.py" (for Python 3.5 and later). For details, see the glob library.

capture_stdout (Boolean, optional; default: True)
    Whether to log the standard output stream. Logged in the monitoring namespace.

capture_stderr (Boolean, optional; default: True)
    Whether to log the standard error stream. Logged in the monitoring namespace.

capture_hardware_metrics (Boolean, optional; default: True)
    Whether to track hardware consumption (CPU, GPU, memory utilization). Logged in the monitoring namespace.

fail_on_exception (Boolean, optional; default: True)
    If an uncaught exception occurs, whether to set the run's failed state to True.

monitoring_namespace (str, optional; default: "monitoring")
    Namespace inside which all monitoring logs are stored.

flush_period (float, optional; default: 5 seconds)
    In asynchronous (default) connection mode, how often Neptune should trigger disk flushing.

proxies (dict, optional; default: None)
    Argument passed to HTTP calls made via the Requests library. For details on proxies, see the Requests documentation.

capture_traceback (Boolean, optional; default: True)
    In case of an exception, whether to log the traceback of the run.

git_ref (GitRef or Boolean; default: None)
    GitRef object containing information about the Git repository path.

    If None, Neptune looks for a repository in the path of the script that is executed.

    To specify a different location, set to GitRef(repository_path="path/to/repo").

    To turn off Git tracking for the run, set to GitRef.DISABLED or False.

    For examples, see Logging Git info.

dependencies (str, optional; default: None)
    Tracks environment requirements. If you pass "infer" to this argument, Neptune logs dependencies installed in the current environment. You can also pass a path to your dependency file directly. If left empty, no dependency file is uploaded.

async_lag_callback (NeptuneObjectCallback, optional; default: None)
    Custom callback function which is called if the lag between a queued operation and its synchronization with the server exceeds the duration defined by async_lag_threshold. The callback should take a Run object as the argument and can contain any custom code, such as calling stop() on the object.

    Note: Instead of using this argument, you can use Neptune's default callback by setting the NEPTUNE_ENABLE_DEFAULT_ASYNC_LAG_CALLBACK environment variable to TRUE.

async_lag_threshold (float, optional; default: 1800.0 seconds)
    Duration between the queueing and synchronization of an operation. If a lag callback (the default callback enabled via the environment variable, or a custom callback passed to the async_lag_callback argument) is enabled, the callback is called when this duration is exceeded.

async_no_progress_callback (NeptuneObjectCallback, optional; default: None)
    Custom callback function which is called if there has been no synchronization progress whatsoever for the duration defined by async_no_progress_threshold. The callback should take a Run object as the argument and can contain any custom code, such as calling stop() on the object.

    Note: Instead of using this argument, you can use Neptune's default callback by setting the NEPTUNE_ENABLE_DEFAULT_ASYNC_NO_PROGRESS_CALLBACK environment variable to TRUE.

async_no_progress_threshold (float, optional; default: 300.0 seconds)
    How long there can be no synchronization progress before a no-progress callback is triggered. If such a callback (the default callback enabled via the environment variable, or a custom callback passed to the async_no_progress_callback argument) is enabled, it is called when this duration is exceeded.
Definition
from datetime import datetime

import neptune


def custom_init_run(
    objective: str = "baseline",
    fields: dict = None,
    tags: list = None,
    **kwargs,
) -> neptune.Run:
    """Creates a Neptune run and populates it with predefined fields and metadata.

    Parameters:
        objective: Objective of the experiment.
        fields: A dictionary with key-value pairs corresponding to
            run fields and their values.
        tags: Tags to be assigned to the Neptune run.
        **kwargs: Additional keyword arguments passed to `neptune.init_run()`.

    Returns:
        A Neptune run object. You can access it for logging of further metadata.
    """

    custom_name = f"{datetime.today().strftime('%Y%m%d')}-{objective}"

    run = neptune.init_run(
        name=custom_name, # (1)!
        tags=tags, # (2)!
        **kwargs,
    )

    # Define mandatory fields and assign them to the run
    fields = fields or {}  # guard against the default of None
    fields.update({"mandatory_field": "value"})
    run["prepopulated_fields"] = fields

    return run
  1. Sets a custom name, which you can use as a human-friendly ID.

    To display it in the app, add sys/name as a column.

    You can also edit the name in the run information view (menu → Show run information).

  2. Tags applied this way are stored in the sys/tags field and can later be modified in the app.

Usage in main script
# Create a new run with necessary fields already populated
custom_run = custom_init_run(
    objective="high_outliers",
    tags=["tag1", "tag2", "tag3"],
    fields={"sample_metric": 42, "sample_text": "lorem ipsum"},
)

# You can use "custom_run" as you would use a regular Neptune run object
custom_run["namespace/subnamespace/field"] = "some metadata"

The resulting run structure would be:

run root
|-- namespace
    |-- subnamespace
        |-- field (String): some metadata
|-- prepopulated_fields
    |-- mandatory_field (String): value
    |-- sample_metric (Int): 42
    |-- sample_text (String): lorem ipsum
|-- sys
    |-- name (String): 20240116-high_outliers
    |-- tags (StringSet): {tag1, tag2, tag3}

You can learn more about Neptune field types in the API reference: Field types and methods →

For more ideas, check out the following: