Log model metadata#

To manage and visualize model metadata, you can use tags, custom views, and dashboards.

To log your model metadata:

  1. Initialize a run object:

    import neptune
    
    run = neptune.init_run()
    

    You can customize the run's behavior and what it tracks through init_run() arguments (see the sketch after these steps).

    For details, see Create a run.

  2. Upload the model signature and other data with the upload() method:

    run["model/signature"].upload("model_signature.json")
    run["data/sample"].upload("datasets/sample.csv")
    

    For details, see Files.

  3. Track dataset versions with the track_files() method:

    run["data/train"].track_files("data/train.csv")
    run["data/validation/dataset/v0.1"].track_files("s3://datasets/validation")
    
  4. To record metadata such as the model stage or version, use tags or group tags:

    run["sys/tags"].add(["v2.0.3", "latest"])
    run["sys/group_tags"].add(["production"])
    
  5. Once you're done logging, stop the connection to Neptune and sync all data by calling the stop() method:

    run.stop()
    

To view the logged metadata in the Neptune app, click the link in the console output.
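
As noted in step 1, init_run() accepts arguments that control what the run captures. A minimal sketch of a customized run, where the project name, run name, tags, and source files are hypothetical placeholders:

import neptune

run = neptune.init_run(
    project="my-team/my-project",    # target project as workspace/project-name
    name="model-metadata-run",       # human-readable run name
    tags=["v2.0.3", "latest"],       # tags assigned at creation time
    capture_hardware_metrics=False,  # skip CPU/GPU/memory monitoring
    source_files=["train.py"],       # snapshot selected source files
)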


Log separately from training metadata#

You can track model metadata separately from training metadata by creating dedicated runs. Then, you can connect the runs that store the model metadata to the related experiments. This approach makes sense if your model is trained, retrained, or evaluated across multiple runs.

To link the related runs, assign their IDs or URLs to fields of the run that stores the model metadata:

run["runs/training/id"] = "CLS-14"
run["runs/training/url"] = "https://app.neptune.ai/my-team/my-project/e/CLS-14"
run["runs/eval/id"] = "CLS-15"
run["runs/eval/url"] = "https://app.neptune.ai/my-team/my-project/e/CLS-15"