API reference: MLflow integration#


create_neptune_tracking_uri()#

Sets up a Neptune tracking URI for logging MLflow experiment metadata.

This lets you direct your metadata to Neptune instead of MLflow, without needing to change your MLflow logging code.

The function wraps neptune.init_run(), so you have the same parameters available for customizing your run or limiting what is logged. The exceptions are with_id and custom_run_id, which will be ignored if provided (as calling this function always creates a new Neptune run).
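
For example, a minimal sketch of passing init_run-style options through the wrapper (the values shown are illustrative):

from neptune_mlflow_plugin import create_neptune_tracking_uri

# mode is forwarded to neptune.init_run(); with_id or custom_run_id, if passed,
# would be ignored, because a new Neptune run is always created.
uri = create_neptune_tracking_uri(mode="offline")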

Parameters

| Name | Type | Default | Description |
|------|------|---------|-------------|
| project | str, optional | None | Name of a project in the form workspace-name/project-name. If None, the value of the NEPTUNE_PROJECT environment variable is used. |
| api_token | str, optional | None | User's API token. If None, the value of the NEPTUNE_API_TOKEN environment variable is used. Set to neptune.ANONYMOUS_API_TOKEN to log metadata anonymously. To keep your token secure, avoid placing it in source code; instead, save it as an environment variable. |
| mode | str, optional | async | Connection mode in which the tracking will work. Possible values are async, sync, offline, read-only, and debug. If you leave it out, the value of the NEPTUNE_MODE environment variable is used. If that's not set, the default async is used. |
| name | str, optional | "Untitled" | Custom name of the run. You can edit it in the run information and add it as a column in the runs table (sys/name). |
| description | str, optional | "" | Description of the run. You can edit it in the run information and add it as a column in the runs table (sys/description). |
| tags | list, optional | None | Tags for the run, as a list of str. You can edit them after the run is created, either in the run information or in the runs table. |
| source_files | list or str, optional | None | List of source files to be uploaded. Must be a list of str or a single str. Uploaded sources are displayed in the Source code section of the run. If None is passed, the Python file from which the run was created will be uploaded. When resuming a run, no file will be uploaded by default. Pass an empty list ([]) to upload no files. Unix-style pathname pattern expansion is supported; for example, you can pass "*.py" to upload all Python source files from the current directory. Paths of uploaded files are resolved relative to the calculated common root of all uploaded source files. For recursive lookup, use "**/*.py" (for Python 3.5 and later). For details, see the glob library. |
| capture_stdout | Boolean, optional | True | Whether to log the standard output stream. Logged in the monitoring namespace. |
| capture_stderr | Boolean, optional | True | Whether to log the standard error stream. Logged in the monitoring namespace. |
| capture_hardware_metrics | Boolean, optional | True | Whether to track hardware consumption (CPU, GPU, memory utilization). Logged in the monitoring namespace. |
| fail_on_exception | Boolean, optional | True | If an uncaught exception occurs, whether to set the run's Failed state to True. |
| monitoring_namespace | str, optional | "monitoring" | Namespace inside which all monitoring logs are stored. |
| flush_period | float, optional | 5 | In the asynchronous (default) connection mode, how often Neptune should trigger disk flushing (in seconds). |
| proxies | dict, optional | None | Argument passed to HTTP calls made via the Requests library. For details on proxies, see the Requests documentation. |
| capture_traceback | Boolean, optional | True | In case of an exception, whether to log the traceback of the run. |
| git_ref | GitRef or Boolean | None | GitRef object containing information about the Git repository path. If None, Neptune looks for a repository in the path of the script that is executed. To specify a different location, set to GitRef(repository_path="path/to/repo"). To turn off Git tracking for the run, set to GitRef.DISABLED or False. For examples, see Logging Git info. |
| dependencies | str, optional | None | Tracks environment requirements. If you pass "infer" to this argument, Neptune logs the dependencies installed in the current environment. You can also pass a path to your dependency file directly. If left empty, no dependency file is uploaded. |
| async_lag_callback | NeptuneObjectCallback, optional | None | Custom callback that is called if the lag between a queued operation and its synchronization with the server exceeds the duration defined by async_lag_threshold. The callback should take a Run object as the argument and can contain any custom code, such as calling stop() on the object (see the sketch after this table). Note: Instead of using this argument, you can use Neptune's default callback by setting the NEPTUNE_ENABLE_DEFAULT_ASYNC_LAG_CALLBACK environment variable to TRUE. |
| async_lag_threshold | float, optional | 1800.0 | Duration, in seconds, between the queueing and synchronization of an operation. If a lag callback is enabled (the default callback via the environment variable, or a custom callback passed to the async_lag_callback argument), the callback is called when this duration is exceeded. |
| async_no_progress_callback | NeptuneObjectCallback, optional | None | Custom callback that is called if there has been no synchronization progress whatsoever for the duration defined by async_no_progress_threshold. The callback should take a Run object as the argument and can contain any custom code, such as calling stop() on the object. Note: Instead of using this argument, you can use Neptune's default callback by setting the NEPTUNE_ENABLE_DEFAULT_ASYNC_NO_PROGRESS_CALLBACK environment variable to TRUE. |
| async_no_progress_threshold | float, optional | 300.0 | Duration, in seconds, for which there has been no synchronization progress whatsoever. If a no-progress callback is enabled (the default callback via the environment variable, or a custom callback passed to the async_no_progress_callback argument), the callback is called when this duration is exceeded. |
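
The async callbacks are regular Python callables that receive the Run object. A minimal sketch of a custom lag callback (the function name and threshold value are illustrative):

from neptune_mlflow_plugin import create_neptune_tracking_uri

def stop_on_lag(run):
    # Called when an operation stays queued longer than async_lag_threshold.
    print("Synchronization lag detected - stopping the Neptune run")
    run.stop()

uri = create_neptune_tracking_uri(
    async_lag_callback=stop_on_lag,
    async_lag_threshold=600.0,  # seconds
)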

Example#

from neptune_mlflow_plugin import create_neptune_tracking_uri
import mlflow

uri = create_neptune_tracking_uri()
mlflow.set_tracking_uri(uri)

with mlflow.start_run() as mlflow_run:
    ...

If Neptune can't find your project name or API token

As a best practice, you should save your Neptune API token and project name as environment variables:

export NEPTUNE_API_TOKEN="h0dHBzOi8aHR0cHM6Lkc78ghs74kl0jv...Yh3Kb8"
export NEPTUNE_PROJECT="ml-team/classification"

Alternatively, you can pass the information when using a function that takes api_token and project as arguments:

import neptune

run = neptune.init_run(
    api_token="h0dHBzOi8aHR0cHM6Lkc78ghs74kl0jv...Yh3Kb8",  # your token here
    project="ml-team/classification",  # your full project name here
)

This also works for init_model(), init_model_version(), init_project(), and integrations that create Neptune runs under the hood, such as NeptuneLogger or NeptuneCallback.

To find your credentials:

  • API token: In the bottom-left corner, expand the user menu and select Get my API token.

  • Project name: You can copy the path from the project details (Edit project details).

If you haven't registered, you can log anonymously to a public project:

api_token=neptune.ANONYMOUS_API_TOKEN
project="common/quickstarts"

Make sure not to publish sensitive data through your code!
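
Once the tracking URI points at Neptune, your existing MLflow logging calls can stay as they are. A minimal sketch, assuming your credentials are set as environment variables (the parameter and metric names are illustrative):

import mlflow

from neptune_mlflow_plugin import create_neptune_tracking_uri

mlflow.set_tracking_uri(create_neptune_tracking_uri())

with mlflow.start_run():
    mlflow.log_param("lr", 0.01)  # standard MLflow calls, unchanged
    for epoch in range(10):
        mlflow.log_metric("loss", 1.0 / (epoch + 1), step=epoch)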

Additional options#

You can pass the same keyword arguments as for neptune.init_run(). This way, you can enable more options for the run or limit what is logged.

uri = create_neptune_tracking_uri(
    name="my-custom-run-name",
    description="Description of this run",
    tags=["test", "mlflow"],
    source_files="**/*.py",
    dependencies="infer",
)

neptune mlflow#

Exports existing MLflow metadata to Neptune.

Command syntax: neptune mlflow [--api-token] [--project] [--mlflow-tracking-uri] [--exclude-artifacts] [--max-artifact-size]

| Options | Type | Default | Description |
|---------|------|---------|-------------|
| -p, --project | str | - | Neptune project name. If not provided, the value of the NEPTUNE_PROJECT environment variable is used. |
| -a, --api-token | str | - | Neptune API token. If not provided, the value of the NEPTUNE_API_TOKEN environment variable is used. |
| -u, --mlflow-tracking-uri | str | - | Your MLflow tracking URI. If not provided, it is left to the MLflow client to resolve it. For more, see the MLflow documentation. |
| -e, --exclude-artifacts | bool | False | Whether to skip uploading artifacts to Neptune. |
| -m, --max-artifact-size | int | 50 | Maximum size of an artifact to be uploaded to Neptune (in MB). For directories, this is treated as the maximum size of the entire directory. |
| --help | - | - | Show the help message and exit. |

Examples

If you've set your Neptune credentials as environment variables, you can use the following command to export data with default settings:

neptune mlflow
How do I save my credentials as environment variables?

Set your Neptune API token and full project name to the NEPTUNE_API_TOKEN and NEPTUNE_PROJECT environment variables, respectively.

Linux or macOS:

export NEPTUNE_API_TOKEN="h0dHBzOi8aHR0cHM.4kl0jvYh3Kb8...ifQ=="
export NEPTUNE_PROJECT="ml-team/classification"

Windows:

setx NEPTUNE_API_TOKEN "h0dHBzOi8aHR0cHM.4kl0jvYh3Kb8...ifQ=="
setx NEPTUNE_PROJECT "ml-team/classification"

You can also navigate to Settings → Edit the system environment variables and add the variables there.

Jupyter Notebook:

%env NEPTUNE_API_TOKEN="h0dHBzOi8aHR0cHM.4kl0jvYh3Kb8...ifQ=="
%env NEPTUNE_PROJECT="ml-team/classification"

To find your credentials:

  • API token: In the bottom-left corner of the Neptune app, expand your user menu and select Get your API token. If you need the token of a service account, go to the workspace or project settings and enter the Service accounts settings.
  • Project name: Your full project name has the form workspace-name/project-name. You can copy it from the project menu (Edit project details).

If you're working in Google Colab, you can set your credentials with the os and getpass libraries:

import os
from getpass import getpass
os.environ["NEPTUNE_API_TOKEN"] = getpass("Enter your Neptune API token: ")
os.environ["NEPTUNE_PROJECT"] = "workspace-name/project-name"

Otherwise, you can pass the credentials as options:

neptune mlflow \
    --api-token I2YzgMz1h0dHBzOi8aHR0cHM6Lkc78ghs74kl0jvh0dHBzOi8aHR0Yh3Kb8MifQ== \
    --project ml-team/llm-project

Using the command with more options:

neptune mlflow \
    --project ml-team/llm-project-2 \
    --mlflow-tracking-uri file:///tmp/my_tracking \
    --max-artifact-size 100

neptune-notebooks incompatibility

Currently, the CLI component of the integration does not work together with the Neptune-Jupyter extension (neptune-notebooks).

Until a fix is released, if you have neptune-notebooks installed, you must uninstall it to be able to use the neptune mlflow command.

See also