
Quickstart


This quickstart guide shows how to:

  • log basic configs and metrics to a Neptune run
  • explore the run in the Neptune web app
  • compare multiple runs

Setup

Start by installing Neptune and configuring your Neptune API token and project. For details, see Get started.
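If you prefer to set up your credentials from code rather than the shell, a minimal sketch is shown below. The token and project values are placeholders; replace them with your own API token and "workspace-name/project-name" path from the Neptune web app.

```python
# Install the client first: pip install neptune-scale
import os

# Placeholder credentials -- substitute your own values.
os.environ["NEPTUNE_API_TOKEN"] = "your-api-token"
os.environ["NEPTUNE_PROJECT"] = "workspace-name/project-name"
```

Setting these environment variables before creating a run lets the client pick them up automatically, so you don't have to pass credentials in the script itself.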

Create a run

To create a run and log some mocked metadata, use the following script:

from random import random
from neptune_scale import Run
from uuid import uuid4


def hello_neptune():
    run = Run(experiment_name=f"quickstart-{uuid4()}")

    # log model configuration
    run.log_configs(
        {
            "parameters/use_preprocessing": True,
            "parameters/learning_rate": 0.002,
            "parameters/batch_size": 64,
            "parameters/optimizer": "Adam",
        }
    )

    # log mocked training metrics
    offset = random() / 5
    for step in range(50):
        acc = 1 - 2**-step - random() / (step + 1) - offset
        loss = 2**-step + random() / (step + 1) + offset

        run.log_metrics(
            data={
                "accuracy": acc,
                "loss": loss,
            },
            step=step,
        )

    # add tags and close the run
    run.add_tags(["quickstart"])
    run.close()


if __name__ == "__main__":
    hello_neptune()

The if __name__ == "__main__": guard ensures that the entry point runs only when the file is executed directly, not when it's imported as a module. For details, see the Python documentation.
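To see the guard's effect in isolation, here is a small standalone sketch (the function name is illustrative):

```python
# Code under `if __name__ == "__main__":` runs only when the file is
# executed directly (python example.py), not when it is imported
# with `import example`.
def main():
    return "executed directly"


if __name__ == "__main__":
    print(main())
```

Importing this file from another script defines main() without printing anything, which is why the guard makes a module safe to import.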

Explore the run

To inspect the logged metadata in the web app, follow the link to the run in the console output.

In the web app, click the run name to explore all its metadata.

Log multiple runs

To log more runs, execute the script multiple times.

To visualize the runs in the web app, use the eye icons in the runs table. For details, see Select runs to compare.

Neptune app with three runs selected for comparison.

Next steps

tip

Watch our 20-minute video walkthrough to see how teams train foundation models at scale with Neptune: from identifying promising experiments to launching long runs and debugging training issues.