
Sacred integration guide#


[Screenshot: custom dashboard displaying metadata logged with Sacred]

Sacred is a tool to configure, organize, log, and reproduce computational experiments. With the Neptune-Sacred integration, you can log the following metadata automatically:

  • Hyperparameters
  • Metrics and losses
  • Training code and Git information
  • Dataset version
  • Model configuration


Before you start#

Installing the integration#

To use your preinstalled version of Neptune together with the integration:

pip install -U neptune-sacred

To install both Neptune and the integration:

pip install -U "neptune[sacred]"

Passing your Neptune credentials#

Once you've registered and created a project, set your Neptune API token and full project name to the NEPTUNE_API_TOKEN and NEPTUNE_PROJECT environment variables, respectively.

export NEPTUNE_API_TOKEN="h0dHBzOi8aHR0cHM.4kl0jvYh3Kb8...6Lc"

To find your API token: In the bottom-left corner of the Neptune app, expand the user menu and select Get my API token.

export NEPTUNE_PROJECT="ml-team/classification"

Your full project name has the form workspace-name/project-name. You can copy it from the project settings: Click the menu in the top-right → Details & privacy.

On Windows, navigate to Settings → Edit the system environment variables, or enter the following in Command Prompt: setx SOME_NEPTUNE_VARIABLE "some-value"
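
For example, to set the project name used in this guide:

setx NEPTUNE_PROJECT "ml-team/classification"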


While it's not recommended, especially for the API token, you can also pass your credentials in the code when initializing Neptune.

run = neptune.init_run(
    project="ml-team/classification",  # your full project name here
    api_token="h0dHBzOi8aHR0cHM6Lkc78ghs74kl0jvYh...3Kb8",  # your API token here
)

For more help, see Set Neptune credentials.

If you'd rather follow the guide without any setup, you can run the example in Colab.

Sacred logging example#

Track your metadata with Neptune by adding a NeptuneObserver to the observers of your Sacred experiment.

  1. Create a run:

    import neptune
    
    run = neptune.init_run() # (1)!
    
    1. If you haven't set up your credentials, you can log anonymously:

      run = neptune.init_run(
          api_token=neptune.ANONYMOUS_API_TOKEN,
          project="common/sacred-integration",
      )
      
  2. Create a Sacred experiment:

    from sacred import Experiment
    
    ex = Experiment("image_classification") # (1)!
    
    1. If you're in an interactive environment such as Jupyter Notebook, you need to add the argument interactive=True to the Experiment constructor.

      For details about this safeguard, see the Sacred documentation.

  3. Add a NeptuneObserver instance to the observers of the experiment and pass the created run:

    from neptune.integrations.sacred import NeptuneObserver
    
    ex.observers.append(NeptuneObserver(run=run))
    
  4. Define your @ex.config (hyperparameters and configuration) and @ex.main (training loop).
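
    For example, a minimal sketch, with placeholder hyperparameter values and a placeholder training loop:

    @ex.config
    def config():
        lr = 0.001  # hypothetical hyperparameters, captured by Sacred
        n_epochs = 5

    @ex.main
    def run_training(_run, lr, n_epochs):
        # Sacred injects lr and n_epochs from the config by name;
        # _run gives access to the current Sacred run.
        for epoch in range(n_epochs):
            loss = 1.0 / (epoch + 1)  # placeholder for your real training step
            _run.log_scalar("training/loss", loss, epoch)  # forwarded to Neptune by the observer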

  5. Run your experiment as you normally would.
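
    For example, you can launch it programmatically. A minimal sketch (ex.run() is part of Sacred's API; the config override is optional and reuses the hypothetical n_epochs from step 4):

    ex.run(config_updates={"n_epochs": 10})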

    To open the run, click the Neptune link that appears in the console output.

    Example link: https://app.neptune.ai/o/common/org/sacred-integration/e/SAC-1341
    
  6. When the experiment finishes, stop the connection to Neptune and sync all data by calling the stop() method:

    run.stop()

Select the Charts section to view the model training metrics live, or create a custom dashboard.

More options#

Logging artifacts#

When you call sacred.Experiment.add_artifact() with a filename and, optionally, a name, the NeptuneObserver picks up the event and uploads the file to Neptune.

ex.add_artifact(filename="./model.pth", name="model_weights")

The same applies to Sacred resources. For details, see the Sacred documentation.
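
For instance, a minimal sketch of registering a dataset file as a resource (the path is a placeholder; add_resource() is part of Sacred's API):

ex.add_resource("./datasets/train.csv")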

Manually logging metadata#

If you have other types of metadata that are not covered in this guide, you can still log them using the Neptune client library.

When you initialize the run, you get a run object, to which you can assign different types of metadata in a structure of your own choosing.

import neptune

# Create a new Neptune run
run = neptune.init_run()

# Log metrics inside loops
for epoch in range(n_epochs):
    # Your training loop

    run["train/epoch/loss"].append(loss)  # Each append() call appends a value
    run["train/epoch/accuracy"].append(acc)

# Track artifact versions and metadata
run["train/images"].track_files("./datasets/images")

# Upload entire files
run["test/preds"].upload("path/to/test_preds.csv")

# Log text or other metadata, in a structure of your choosing
run["tokenizer"] = "regexp_tokenize"