Log model metadata

Manage model metadata with experiments

We recommend using runs to manage model metadata. Runs come with extra features such as group tags, custom views, and dashboards, offering rich comparison and visualization support.

For details, see Logging model metadata with runs.

To log your model metadata:

  1. Initialize the run object:

    import neptune
    
    run = neptune.init_run()
    

    You can customize the run's behavior and what gets tracked through init_run() arguments.

    For details, see Create a run.

  2. Upload the model signature and other data with the upload() method:

    run["model/signature"].upload("model_signature.json")
    run["data/sample"].upload("datasets/sample.csv")
    

    For details, see Files.

  3. Track dataset versions with the track_files() method:

    run["data/train"].track_files("data/train.csv")
    run["data/validation/dataset/v0.1"].track_files("s3://datasets/validation")
    
  4. To record metadata such as the model stage or version, use tags or group tags:

    run["sys/tags"].add(["v2.0.3", "latest"])
    run["sys/group_tags"].add(["production"])
    
  5. To stop the connection to Neptune and sync all data, call the stop() method at the end of your script:

    run.stop()
    

To view the logged metadata in the Neptune app, click the link in the console output.
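Note that track_files() records artifact metadata, such as a hash of the file contents, rather than uploading the data itself, so the same dataset always resolves to the same version. A minimal sketch of that idea in plain Python, assuming SHA-256 hashing (the hashing scheme here is illustrative, not necessarily what Neptune uses internally):

```python
import hashlib
from pathlib import Path

def content_hash(path: str) -> str:
    """Hash file contents so a dataset version can be identified
    without storing the data itself (illustrative only)."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Two files with identical contents resolve to the same version hash.
Path("train_a.csv").write_text("x,y\n1,2\n")
Path("train_b.csv").write_text("x,y\n1,2\n")
print(content_hash("train_a.csv") == content_hash("train_b.csv"))  # True
```

Because only the hash and metadata are stored, tracking large datasets this way is cheap compared to uploading them with upload().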

Log separately from training metadata

You can track the model metadata separately from the training metadata by creating dedicated runs. Then, you can connect the runs that store the model metadata with related experiments. This approach makes sense if your model is trained, retrained, or evaluated across multiple runs.

To link the related runs, assign their IDs or URLs to selected fields of another run:

run["runs/training/id"] = "CLS-14"
run["runs/training/url"] = "https://app.neptune.ai/my-team/my-project/e/CLS-14"
run["runs/eval/id"] = "CLS-15"
run["runs/eval/url"] = "https://app.neptune.ai/my-team/my-project/e/CLS-15"
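If you link runs in many places, keeping each ID and URL pair consistent is easier with a small helper. A sketch, assuming the app.neptune.ai URL pattern shown above; the helper name, signature, and the my-team/my-project identifiers are illustrative, not part of the Neptune API:

```python
def link_run(run, field: str, run_id: str,
             workspace: str = "my-team", project: str = "my-project"):
    """Store the ID and derived URL of a related run under runs/<field>/
    (illustrative helper, not a Neptune API)."""
    run[f"runs/{field}/id"] = run_id
    run[f"runs/{field}/url"] = (
        f"https://app.neptune.ai/{workspace}/{project}/e/{run_id}"
    )

# A dict stands in for the run object to show the resulting fields:
fake_run = {}
link_run(fake_run, "training", "CLS-14")
link_run(fake_run, "eval", "CLS-15")
print(fake_run["runs/training/url"])
# https://app.neptune.ai/my-team/my-project/e/CLS-14
```

The same call works on a real run object, since Neptune run fields are assigned with the same bracket syntax.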