
Neptune tutorial#

This tutorial walks you through setup and basic usage. You don't need any prior experience with experiment trackers or similar tools.

You'll learn how to:

  • Install and set up Neptune
  • Connect Neptune to your script and create a few runs
  • Explore the results in the app
  • Edit run details
  • Query and download metadata

By the end of it, you'll be all set to use Neptune in your workflow. As a bonus, you'll have a neptune_tutorial.py script and a tutorial project to use as reference whenever you need them.

Note: This tutorial concerns using the core Neptune client library to log metadata in a completely customized way. If you're interested in having Neptune automatically log typical metadata from popular ML frameworks, check out our integration guides.

Before you start#

You should reserve an hour for completing the tutorial.

Ready? Let's go!

Create a project#

Projects are how you organize your runs in Neptune.

For this tutorial, we'll create a new project called tutorial. If you've already created a project, you can skip this step and use that project instead.

  1. In your Neptune workspace, navigate to All projects.
  2. Click New project.
  3. In the Project name input box, enter "tutorial".
  4. Set the Project key to "TUT".

Creating a new project in Neptune

After the project is created, you can change all of the options except the key.

Install Neptune#

Now that we have our tutorial project set up, we're ready to install the Neptune client library.

Open a terminal and enter the command for your package manager:

pip:
pip install neptune

conda:
conda install -c conda-forge neptune
Installing through Anaconda Navigator

To find neptune, you may need to update your channels and index.

  1. In the Navigator, select Environments.
  2. In the package view, click Channels.
  3. Click Add..., enter conda-forge, and click Update channels.
  4. In the package view, click Update index... and wait until the update is complete. This can take several minutes.
  5. You should now be able to search for neptune.

Note: The displayed version may be outdated. The latest version of the package will be installed.

Note: On Bioconda, there is a "neptune" package available which is not the neptune.ai client library. Make sure to specify the "conda-forge" channel when installing neptune.ai.

Where to enter the command
  • Linux: Command line
  • macOS: Terminal app
  • Windows: PowerShell or Command Prompt
  • Jupyter Notebook: In a cell, prefixed with an exclamation mark: ! your-command-here

Set up authentication#

Your Neptune API token is like a password to the application. By saving your token as an environment variable, you avoid putting it in your source code, which is more convenient and secure.

To find and save your API token:

  1. In the bottom-left corner of the app, expand your user menu.
  2. Select Get your API token.

    How to find your Neptune API token

  3. Depending on your system:

    Linux or macOS: From the API token dialog in Neptune, copy the export command and append the line to your .profile or other shell initialization file.

    Example line
    export NEPTUNE_API_TOKEN="uyVrZXkiOiIzNTd..."

    Windows:

    1. From the API token dialog in Neptune, copy the setx command.

      Example line
      setx NEPTUNE_API_TOKEN "uyVrZXkiOiIzNTd..."

    2. Open a terminal app, such as PowerShell or Command Prompt.

    3. Paste the line you copied and press Enter.
    4. To activate the change, restart the terminal app.

    You can also navigate to Settings → Edit the system environment variables and add the variable there.

    Jupyter Notebook or other Python environment: You can use the os library to set the token as an environment variable.

    Add the following to a notebook cell:

    import os
    from getpass import getpass
    os.environ["NEPTUNE_API_TOKEN"] = getpass("Enter your Neptune API token: ")

    From the API token dialog, copy your token, paste it in the input box, and hit Enter.

    Note that any environment variables declared this way won't persist after the notebook kernel shuts down. If you start a new kernel, they need to be set again.

Set the project name#

While we're at it, let's also save our project name as an environment variable.

Changing the project name later

You may want to change this to a different project later on. In that case, you have two options:

  • Simply assign a new project name to the environment variable.
  • Or, specify the project directly in the script that calls Neptune. We'll show you how to do this.

The full name of your Neptune project has the form workspace-name/project-name, so in this case it's going to be your-workspace-name/tutorial.

Tip

The name of the workspace is visible in the top-left corner of the app.

When in doubt, you can always find and copy the full name of a project from its settings:

  1. In your project view, click the menu in the top-right corner.
  2. Select Edit project details.
  3. Click the copy button next to the project name.

Let's now save the project name to the NEPTUNE_PROJECT environment variable in your system.

Linux: On the command line, enter the following:

export NEPTUNE_PROJECT="WORKSPACE-NAME/tutorial"

macOS: In the Terminal app, enter the following:

export NEPTUNE_PROJECT="WORKSPACE-NAME/tutorial"

Windows: Open a terminal app, such as PowerShell or Command Prompt, and enter the following:

setx NEPTUNE_PROJECT "WORKSPACE-NAME/tutorial"

There are also other environment variables you can set to optimize your workflow. For details, see Environment variables.
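For example, if we recall correctly, there's also a NEPTUNE_MODE variable that controls the connection mode globally (such as offline or async) without touching your code. Treat the variable name and values here as assumptions and double-check them in the Environment variables reference:

export NEPTUNE_MODE="offline"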

We're all set. Let's track some runs!

Add Neptune to your code#

Now that we have a place to send the metadata, all we need to do is import Neptune in our code, initialize a run, and start logging.

  1. Create a Python script called neptune_tutorial.py.
  2. Copy the code below and paste it into the script file.
neptune_tutorial.py
import neptune

run = neptune.init_run()

In the code above, we import the Neptune client library and initialize a run object, which we use to track an experiment. The run automatically logs some system information and hardware consumption for us.
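If you like, you can also attach some identifying metadata when creating the run. This is entirely optional, and everything in this tutorial works with the bare init_run() call above; the sketch below uses the name and tags arguments, with illustrative values:

run = neptune.init_run(
    name="tutorial-first-run",  # display name shown in the app (stored in sys/name)
    tags=["tutorial"],          # stored in sys/tags
)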

Next, we'll log a few different types of metadata to the run.

Didn't save your credentials as environment variables?

It's all good. We recommend using environment variables especially for API tokens, but you can also pass your token and project as arguments to any Neptune init function:

run = neptune.init_run(
    project="workspace-name/project-name",
    api_token="Your Neptune API token",
)

Replace workspace-name/project-name with your actual project name, and Your Neptune API token with your token. The init_run() call might look something like this:

run = neptune.init_run(
    project="jackie/tutorial",
    api_token="hcHAubmVwdHVuZS5haSIsImFwaV9rZXkiOiI2YTk0N...2MifQ==",
)

Log hyperparameters#

Now that our run is active, we can start logging metadata. It'll be periodically synchronized with the Neptune servers in the background.

There are two aspects to logging metadata:

  1. Where? Define the location inside the run where the metadata should go: run["namespace/nested_namespace/field"]
  2. What? Assign metadata to the location using "=" or some appropriate logging method.

To start, let's log a simple text entry:

run["algorithm"] = "ConvNet"
import neptune

run = neptune.init_run()

run["algorithm"] = "ConvNet"

In the metadata structure of our run, this creates a field named algorithm and assigns the string "ConvNet" to it. In a bit, we'll see how this looks in the app.

Next, let's define some hyperparameters in the form of a dictionary and assign those to the run object in batch.

params = {
    "activation": "sigmoid",
    "dropout": 0.25,
    "learning_rate": 0.1,
    "n_epochs": 100,
}
run["model/parameters"] = params
import neptune

run = neptune.init_run()

run["algorithm"] = "ConvNet"

params = {
    "activation": "sigmoid",
    "dropout": 0.25,
    "learning_rate": 0.1,
    "n_epochs": 100,
}
run["model/parameters"] = params

This creates a field of the appropriate type for each dictionary entry, such as a string for activation and float for dropout. Because of how we specified the structure (model/parameters), the parameter fields will be stored in the parameters namespace, nested under the model namespace.
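If you want to sanity-check the resulting structure from code rather than the app, the run object can print it for you. A small sketch, assuming the client's print_structure() helper:

run.print_structure()
# Prints the run's namespaces and fields as a tree,
# e.g. algorithm, model/parameters/activation, and so on.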

Learn more

Changing values or adding new ones later is perfectly possible. Let's change the activation from sigmoid to ReLU and add a new field for the batch size:

Changing an existing value
run["model/parameters/activation"] = "ReLU"
Adding a new field
run["model/parameters/batch_size"] = 64

Log metrics#

Next, we'll log some mocked training and evaluation metrics. Since we're essentially logging a series of values, we need the append() method.

Each append() call adds a new value to the series, so it's designed to be used inside a loop.

for epoch in range(params["n_epochs"]):
    # this would normally be your training loop
    run["train/loss"].append(0.99**epoch)
    run["train/acc"].append(1.01**epoch)
    run["eval/loss"].append(0.98**epoch)
    run["eval/acc"].append(1.02**epoch)
import neptune

run = neptune.init_run()

run["algorithm"] = "ConvNet"

params = {
    "activation": "sigmoid",
    "dropout": 0.25,
    "learning_rate": 0.1,
    "n_epochs": 100,
}
run["model/parameters"] = params

run["model/parameters/activation"] = "ReLU"
run["model/parameters/batch_size"] = 32

for epoch in range(params["n_epochs"]):
    # this would normally be your training loop
    run["train/loss"].append(0.99**epoch)
    run["train/acc"].append(1.01**epoch)
    run["eval/loss"].append(0.98**epoch)
    run["eval/acc"].append(1.02**epoch)

This creates the namespaces train and eval, each with a loss and acc field.

We'll see these visualized as charts in the app later.

Tip for plotting libraries

Maybe you're generating metrics with some visualization library, such as Matplotlib? You can also upload figures as files:

import matplotlib.pyplot as plt
plt.plot(data)  # "data" stands in for whatever values you're plotting
run["dataset/distribution"].upload(plt.gcf())

or pass a figure object to the append() method, to create a FileSeries:

run["train/distribution"].append(plt_histogram)

To learn more, see What you can log and display: Images

Track a data file#

It might be important to track the version of a dataset and other files used in a given model training run. With the track_files() method, we can log metadata about any dataset, model, or other artifact stored in a file.

In the folder where your neptune_tutorial.py script is located, create a sample CSV file named sample.csv with the following contents:

sample.csv
sepal_length,sepal_width,petal_length,petal_width,variety
5.1,3.5,1.4,0.2,setosa
4.9,3.0,1.4,0.2,setosa
7.0,3.2,4.7,1.4,versicolor
6.4,3.2,4.5,1.5,versicolor
6.3,3.3,6.0,2.5,virginica
5.8,2.7,5.1,1.8,virginica

Then add tracking to the run:

run["data_versions/train"].track_files("sample.csv")
import neptune

run = neptune.init_run()

run["algorithm"] = "ConvNet"

params = {
    "activation": "sigmoid",
    "dropout": 0.25,
    "learning_rate": 0.1,
    "n_epochs": 100,
}
run["model/parameters"] = params

run["model/parameters/activation"] = "ReLU"
run["model/parameters/batch_size"] = 32

for epoch in range(params["n_epochs"]):
    # this would normally be your training loop
    run["train/loss"].append(0.99**epoch)
    run["train/acc"].append(1.01**epoch)
    run["eval/loss"].append(0.98**epoch)
    run["eval/acc"].append(1.02**epoch)

run["data_versions/train"].track_files("sample.csv")

This records metadata about the CSV file in the data_versions/train field, which helps us identify the exact version of the file that was used for this run.
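track_files() isn't limited to single files: it also accepts folder paths and remote locations such as S3 URIs, which is handy for versioning a whole dataset directory. A hedged sketch, where the field names, folder, and bucket are just for illustration:

run["data_versions/all"].track_files("datasets/")
run["data_versions/raw"].track_files("s3://my-bucket/datasets/")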

Learn more

Using Neptune → Tracking artifacts

Upload a file#

The upload() method is useful when you need to log the contents of files, such as figures, images, or data samples.

Since we have our CSV file handy, let's just upload that:

run["data_sample"].upload("sample.csv")
import neptune

run = neptune.init_run()

run["algorithm"] = "ConvNet"

params = {
    "activation": "sigmoid",
    "dropout": 0.25,
    "learning_rate": 0.1,
    "n_epochs": 100,
}
run["model/parameters"] = params

run["model/parameters/activation"] = "ReLU"
run["model/parameters/batch_size"] = 32

for epoch in range(params["n_epochs"]):
    # this would normally be your training loop
    run["train/loss"].append(0.99**epoch)
    run["train/acc"].append(1.01**epoch)
    run["eval/loss"].append(0.98**epoch)
    run["eval/acc"].append(1.02**epoch)

run["data_versions/train"].track_files("sample.csv")
run["data_sample"].upload("sample.csv")

The difference from artifact tracking is that the file is uploaded in full, so you'll want to take file size and storage space into consideration.

Rule of thumb

  • Want to track or version potentially large files that you store elsewhere? → track_files() (Artifact field)
  • Want to preview and interact with the file in Neptune? → upload() or upload_files() (File field; see the sketch below)
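If you have several sample files, upload_files() takes a list of paths or a glob pattern and stores the files together as a file set. A small sketch, where the data/ folder and field name are assumptions for illustration:

run["data_samples"].upload_files("data/*.csv")
# or an explicit list of paths:
run["data_samples"].upload_files(["sample.csv", "data/extra.csv"])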

Stop the run#

To round things off nicely, let's log a summary score.

run["f1_score"] = 0.95
import neptune

run = neptune.init_run()

run["algorithm"] = "ConvNet"

params = {
    "activation": "sigmoid",
    "dropout": 0.25,
    "learning_rate": 0.1,
    "n_epochs": 100,
}
run["model/parameters"] = params

run["model/parameters/activation"] = "ReLU"
run["model/parameters/batch_size"] = 32

for epoch in range(params["n_epochs"]):
    # this would normally be your training loop
    run["train/loss"].append(0.99**epoch)
    run["train/acc"].append(1.01**epoch)
    run["eval/loss"].append(0.98**epoch)
    run["eval/acc"].append(1.02**epoch)

run["data_versions/train"].track_files("sample.csv")
run["data_sample"].upload("sample.csv")

run["f1_score"] = 0.95

Now that we're finished logging, we call the stop() method on the run.

Add the following to the end of the script:

run.stop()

Full script so far:

import neptune

run = neptune.init_run()

run["algorithm"] = "ConvNet"

params = {
    "activation": "sigmoid",
    "dropout": 0.25,
    "learning_rate": 0.1,
    "n_epochs": 100,
}
run["model/parameters"] = params

run["model/parameters/activation"] = "ReLU"
run["model/parameters/batch_size"] = 32

for epoch in range(params["n_epochs"]):
    # this would normally be your training loop
    run["train/loss"].append(0.99**epoch)
    run["train/acc"].append(1.01**epoch)
    run["eval/loss"].append(0.98**epoch)
    run["eval/acc"].append(1.02**epoch)

run["data_versions/train"].track_files("sample.csv")
run["data_sample"].upload("sample.csv")

run["f1_score"] = 0.95
run.stop()

By default, Neptune tracks some system information and hardware consumption in the background. By calling stop(), you ensure that the connection to Neptune stays open no longer than needed. This is especially important if you'll be plugging Neptune into continuous training flows or tracking multiple runs at once.

It's also important to stop the run in interactive environments, such as Jupyter Notebook – otherwise, the connection will remain open until you close the notebook completely.
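If you're worried about forgetting the stop() call, recent versions of the client also let you use the run as a context manager, so it's stopped automatically when the block exits. A sketch, assuming a client version that supports this:

import neptune

with neptune.init_run() as run:
    run["algorithm"] = "ConvNet"
    # ... log metadata as usual ...
# run.stop() is called automatically when the block exits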

Run your script#

Next, execute your code.

In a terminal, navigate to the folder where the script is located and enter the following:

python neptune_tutorial.py

Keep an eye on the console output. In case something goes wrong, the error message can help you fix the issue on the spot.

If Neptune can't find your project name or API token

Chances are you set the environment variables correctly in some session, but you're running the code in a different environment.

We really recommend that you set those up at some point, but no worries – for now, you can pass your credentials directly to the init_run() function at the start:

run = neptune.init_run(
    project="your-workspace-name/tutorial",   #  replace with your own
    api_token="hcHAubmVwdHVuZS5haSIsImFwaiOiYTk0N...2MifQ==",  # replace with your own
)

If everything went well

Click the link in the console output to open the run in Neptune. It looks like this: https://app.neptune.ai/workspace-name/tutorial/e/TUT-1/

Now we get to the fun part: exploring the results in Neptune!

Explore the run in Neptune#

The real power is of course in comparing multiple runs, but let's first take a moment to explore the metadata within a single run.

Browse all metadata#

Viewing all metadata of a run

As you can see, our custom fields and namespaces are there, along with previews of their values:

  • Fields: algorithm, data_sample, and f1_score.
  • Namespaces: data_versions, model, eval, and train.

Click a namespace to access its contents.

You also see a few namespaces that we didn't log: monitoring, source_code, and sys. These are logged automatically, to help you track a bunch of system and other background information. You can turn these options off, if you like (see What Neptune logs automatically).

  • monitoring: hardware consumption and console logs, such as stderr and stdout streams.
  • source_code: the code used to execute the script, as well as Git and environment information.

    Logging in a Jupyter notebook?
    • If you're in Colab or other cloud environments, your code can't be snapshotted from there directly. Sorry!
    • If you're running the notebook on your local machine, just pass an extra argument to the function we used to initialize the run: init_run(source_files=["neptune_tutorial.ipynb"])
  • sys: basic metadata about the run, such as the run object size, its unique Neptune ID, some system information, and a bunch of timestamps.

At the root of the run, check out the preview of our sample.csv file:

Previewing CSV data as an interactive table

You can switch between the interactive preview and the Raw data tab, which shows the data as you logged it.

View the pre-built dashboards#

We didn't log any images, but all the other sections should have something for us. Each section is like All metadata, but categorized and with fancier display options.

Viewing metrics displayed as charts

  • Charts: Logged training and eval metrics.
  • Monitoring: Hardware consumption metrics.
  • Source code: Snapshot of the entrypoint script as well as Git information, if you have Git initialized in your project folder.

    You can also see your project requirements here, if you pass the dependencies argument to neptune.init_run().

  • Artifacts: The metadata of our sample dataset file.

Previewing artifact metadata in Neptune

Create a custom dashboard#

Let's compose a chart that displays our training metrics.

  1. In the Run details view, click New dashboard.
  2. Enter the title "Metrics".
  3. Click Add widget and select the Chart widget.
  4. Enter the title "Training".
  5. Start typing "train" in the field selection box to find our train/loss and train/acc fields. Include both of those and click Add widget.

Creating a widget with two metrics

Drag the arrow icons at the edges of the widget to resize the chart to your liking. The widget should look something like the below.

Custom widget with two metrics

We'll create a more elaborate dashboard later.

Next, let's log a few more runs and get to the comparison part!

Log a few more runs#

To get some variation between the runs, go back to the neptune_tutorial.py script and tweak some of the parameter values or other metadata before each execution.

  • For example, change the dropout from 0.25 to 0.23, the batch size from 32 to 64, and so on.
  • Also make at least one modification to the contents of the sample CSV file.

Execute the script a few more times, so you have at least a handful of runs logged in total.

Compare the runs#

Next, let's meet up in the runs table. You should see your runs listed there.

Create a custom view#

In the table, the columns are arranged in three groups: Pinned columns on the left, regular columns, and suggested columns.

Click the plus icon on some suggested columns to add them to the table. This lets you sort the table by that column, as well as configure the name, color, and numerical format.

Pinned columns remain visible as you switch between different tabs. Their data is also shown in the legend when you hover over graphs in charts.

Once you're satisfied with your modifications, click Save view as new above the table.

Saving a custom view of the runs table

Enter a name, like "Tutorial view". Now anyone in the project can access this view from the drop-down list.

Select runs for comparison with the eye icon#

In the leftmost column of the table, the eye icon determines which runs are used for the comparison views. All runs are hidden by default.

  1. Click the top eye icon on the header row and select All on this page.
  2. Select Compare runs to enter the comparison view.
  3. Let's again explore the available dashboards:
    • Charts: See training metrics visualized as charts.
    • Parallel coordinates: Explore parallel plot visualization with HiPlot.
    • Side-by-side: Dig deeper into differences between selected runs.
    • Artifacts: Contrast versions of our data sample between two specific runs.

      At the top, select the source and target runs to compare.

Pin columns to show them in the legend#

Pinned columns are your way to specify which fields are important for your run analysis.

Hover over any chart to bring up the legend box, which has a breakdown of run data at each step.

As you can see, by default the box includes the run ID and creation time. You'd probably rather see model training metadata there. For that, we just need to change which columns are pinned:

  • In the runs table, click the column's menu icon and select Pin column.
  • Or, drag and drop columns into the area on the left, separated by the thicker line.

In the below example, we've pinned the fields for dropout and learning rate.

How pinned columns affect the legend in charts

Custom comparison options#

You can create custom comparison dashboards from certain widgets.

Because we created a custom metrics dashboard in the single run view, that dashboard is now automatically available for comparing multiple runs as well.

Edit run metadata#

By now, we've gone through some essential ways to drill into our run metadata in Neptune. But the web app isn't just a display window: We can modify run details, add descriptive tags, even stop active runs.

Perhaps some of the runs stand out from the rest. To highlight or categorize them, we can apply tags. Tags are stored in the system namespace (sys/tags), which also makes them accessible programmatically. That's a fancy way of saying we can query runs by tag via API.
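To make that concrete, here's a hedged preview of what such a query could look like with the fetch_runs_table() method, which we'll cover properly at the end of the tutorial; the tag filter argument is what we assume here:

import neptune

project = neptune.init_project(project="workspace-name/tutorial", mode="read-only")
best_runs = project.fetch_runs_table(tag="best").to_pandas()  # only runs tagged "best"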

Let's find the run with the highest f1 score and mark that our best run.

  1. Add the suggested f1_score column to the table.
  2. Click the column's menu icon and select Sort descending.
  3. In the Tags column, click the cell of the top run, type in "best", and hit enter.

To also add a description, in the Run details view:

  1. Next to the run ID, click the menu and select Show run information.
  2. The metadata you see here is stored in the system namespace ("sys") of the run object.
    • The ID is the run's Neptune identifier. We'll need it later when we access the run programmatically, to resume logging or fetch data from it.
  3. Enter a name and description for the run.
  4. Save your changes.

Run information view in the Neptune app

Note that the description counts as any other field of the run. We could display it in our custom dashboard by adding the sys/description field.
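You can also set these from code. The name and description arguments to init_run() set them at creation time; since the description is a regular field, assigning to sys/description afterwards should work as well, but treat that direct assignment as an assumption and verify it in your setup:

run = neptune.init_run(
    name="best candidate",
    description="ReLU activation, dropout 0.25",
)

# Or, on an already initialized run (writability of sys/description is assumed):
run["sys/description"] = "ReLU activation, dropout 0.25"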

Create a more complex dashboard#

To compose the ultimate training metrics analysis dashboard, let's add another custom one!

  1. Click New dashboard and name it something appropriate.
  2. In the top input box, start typing "f1" to find the f1_score field.
  3. Click Add widget for the single value widget and title it "F1 score".
  4. Set the text style to your liking. You can also edit these options later.
  5. Keep adding more widgets, such as:
    • The data sample as an interactive table
    • The parameters table
    • Metrics, such as all four metrics in one graph
    • Individual values, such as running time or other system information

The result might look something like this:

A custom dashboard consisting of several widgets of different types

Save the dashboard, and it'll be available for all runs in your project.

Query and download run metadata#

Download from the app#

Quick note before we get into querying metadata through the API: you can generally export files and data as CSV or PNG through the app.

In the visualization and comparison views, look for the download icon () to download charts, images, source code, or uploaded files.

For example, you could export the chart with the combined metrics that we just created as an image.

Resume a run#

Finally, let's access some of our logged metadata programmatically.

We resume an existing run by supplying its Neptune ID when initializing the run. Each Neptune run has an ID, composed of the project key and a counter.

Where the Neptune ID is displayed in the app

Inside the run structure, the ID is stored in the sys/id field.
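If you want to capture the ID programmatically while the original run is still active (for example, to store it in your own bookkeeping), you can fetch it from the run object; a small sketch:

run_id = run["sys/id"].fetch()
print(run_id)  # for example, "TUT-4"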

In a separate Python instance, initialize a run in read-only mode:

>>> import neptune
>>> run = neptune.init_run(
...     with_id="TUT-4",  # replace with your own run ID as needed
...     mode="read-only",
... )
[neptune] [info   ] Neptune initialized. Open in the app: https://app.neptune.ai/jackie/tutorial/e/TUT-4/
...
If Neptune can't find your project name or API token

As a best practice, you should save your Neptune API token and project name as environment variables:

export NEPTUNE_API_TOKEN="h0dHBzOi8aHR0cHM6Lkc78ghs74kl0jv...Yh3Kb8"
export NEPTUNE_PROJECT="ml-team/classification"

Alternatively, you can pass the information when using a function that takes api_token and project as arguments:

run = neptune.init_run(
    api_token="h0dHBzOi8aHR0cHM6Lkc78ghs74kl0jv...Yh3Kb8",  # your token here
    project="ml-team/classification",  # your full project name here
)

This also works for init_model(), init_model_version(), init_project(), and integrations that create Neptune runs under the hood, such as NeptuneLogger or NeptuneCallback.

  • API token: In the bottom-left corner, expand the user menu and select Get your API token.

  • Project name: You can copy the path from the project details (Edit project details).

If you haven't registered, you can log anonymously to a public project:

api_token=neptune.ANONYMOUS_API_TOKEN
project="common/quickstarts"

Make sure not to publish sensitive data through your code!

We're resuming the run in read-only mode because we're not modifying it. This mode ensures that we can fetch the metadata without adding or changing anything.

Learn more

You can choose between several modes for how the tracking should work. For more, see Connection modes.

Fetch metadata from the run#

Now that a connection is open to the run, we can query metadata from Neptune. The methods we can use depend on the type of data and how we logged it in the first place.

You can generally fetch a value from a field with the fetch() method. For example, to verify the owner of the run:

>>> username = run["sys/owner"].fetch()
>>> print(username)
jackie

Tags are stored as a collection of strings, so when you query with fetch(), Neptune returns a set of strings. We can check for the presence of a certain tag like this:

>>> run_tags = run["sys/tags"].fetch()
>>> if "best" in run_tags:
...     print("This run was the best!")
... 
This run was the best!

Since we logged the training metrics as series, we can use the fetch_last() method to access the last value.

>>> final_loss = run["train/loss"].fetch_last()
>>> print(final_loss)
0.15250000000000002
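If you want the whole series rather than just the last value, there's also the fetch_values() method, which returns the logged values as a pandas DataFrame:

>>> loss_df = run["train/loss"].fetch_values()  # columns include the step, value, and timestamp
>>> loss_df.tail()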

Next, let's obtain the MD5 hash value of the data sample artifact.

>>> hash = run["data_versions/train"].fetch_hash()    
>>> print(hash)
043d9048b7836754ca7f9712f62133dcc932988bb386420924540f92f0b97b3c

Tip

We have a whole tutorial on how to version datasets with artifact tracking: Data versioning

Since we also uploaded the sample dataset as a file, we can download the file with the download() method. This time we query the data from the data_sample file field.

Let's say we want the download to go into a datasets/ subfolder. We can specify this with the destination parameter:

>>> run["data_sample"].download(destination="./datasets")

We're done with our queries, so let's stop the run:

>>> run.stop()

Download all runs as pandas DataFrame#

Finally, in case we want to analyze our runs as a whole, let's fetch the metadata of all runs at once. The fetch_runs_table() method lets us download the runs table as a pandas DataFrame.

In this case, instead of starting a run, we initialize our whole project as a Neptune object that we can operate on.

>>> project = neptune.init_project(
...     project="YOUR-WORKSPACE/tutorial",
...     mode="read-only",
... )
[neptune] [info   ] Neptune initialized. Open in the app: https://app.neptune.ai/jackie/tutorial/
...
>>> runs_table_df = project.fetch_runs_table().to_pandas()
>>> runs_table_df.head()
                 sys/creation_time  ...           monitoring/traceback
0 2022-05-18 12:51:41.416000+00:00  ...                            NaN
1 2022-05-18 12:51:15.122000+00:00  ...                            NaN
2 2022-05-18 12:51:01.679000+00:00  ...                            NaN
3 2022-05-18 07:08:39.247000+00:00  ...                            NaN
4 2022-05-18 07:07:11.588000+00:00  ...                            NaN

[5 rows x 35 columns]

The runs_table_df object is a pandas DataFrame where each row represents a run and each column represents a field.

Specifying table columns to return

You can specify which fields to include in the returned table with the columns argument.

Example
# Fetch list of all runs, including only the "f1_score" and "sys/running_time"
# fields as columns
>>> filtered_runs_table = project.fetch_runs_table(
...     columns=["f1_score", "sys/running_time"],
...     sort_by="f1_score",
... )
>>> filtered_runs_df = filtered_runs_table.to_pandas()
>>> print(filtered_runs_df)
    sys/id  sys/running_time    f1_score
0    TUT-5             6.436        0.95
1    TUT-4             7.342        0.92
2    TUT-2             9.560        0.91
3    TUT-3             8.538        0.87

Tips and next steps#

Nice job! You're now a certified Neptuner.

If you need specific tips and explanations for the things you can do with Neptune, Using Neptune is a good place to start.
