
Kedro integration guide: Compare results between nodes

Comparing Kedro nodes in a Neptune dashboard

You can compare metrics, parameters, dataset versions, and other metadata from Kedro pipeline nodes.

This guide shows how to:

  • Log metadata from model evaluations that happen in multiple nodes of a Kedro pipeline (sketched below).
  • Compare metrics in the runs table.
  • Compare ROC curves and precision–recall curves in the Neptune app.

See dashboard in Neptune | Code examples
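
Under the hood, each evaluation node logs its results through the neptune_run handler that the plugin injects. The following is a minimal sketch of such a node, assuming a fitted scikit-learn classifier and the plugin's default kedro base namespace; the function name and the metric and plot names are illustrative, not fixed by the plugin:

```python
import matplotlib.pyplot as plt
from neptune.types import File
from sklearn.metrics import PrecisionRecallDisplay, RocCurveDisplay, accuracy_score


def evaluate_models(model, X_test, y_test, neptune_run):
    """Evaluate a fitted classifier and log results to Neptune.

    `neptune_run` is the handler injected by the Kedro-Neptune plugin,
    so everything logged here lands under the run's `kedro` namespace.
    """
    y_pred = model.predict(X_test)

    # Lands at kedro/nodes/evaluate_models/metrics/accuracy
    neptune_run["nodes/evaluate_models/metrics/accuracy"] = accuracy_score(
        y_test, y_pred
    )

    # Log ROC and precision-recall curves as images
    for name, display in (
        ("roc_curve", RocCurveDisplay),
        ("precision_recall_curve", PrecisionRecallDisplay),
    ):
        fig, ax = plt.subplots()
        display.from_estimator(model, X_test, y_test, ax=ax)
        neptune_run[f"nodes/evaluate_models/plots/{name}"].upload(File.as_image(fig))
        plt.close(fig)
```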

Before you start

  • Sign up at neptune.ai/register.
  • Create a project for storing your metadata.

  • Have the Kedro–Neptune plugin configured and initialized according to the Setup guide.

  • Set up your Kedro scripts and create a few training runs, as shown in the Preparing the training runs section of the pipeline comparison guide (the node wiring is sketched after this list).
    • If you're not interested in dataset versions, you can skip the step where datasets are defined.
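
For context, an evaluation node gets write access to the Neptune run by listing neptune_run among its inputs in the pipeline definition. Here is a minimal sketch, with a hypothetical module layout and hypothetical dataset names (trained_model, X_test, y_test):

```python
from kedro.pipeline import Pipeline, node

from .nodes import evaluate_models  # hypothetical location of the node function


def create_pipeline(**kwargs) -> Pipeline:
    return Pipeline(
        [
            node(
                func=evaluate_models,
                # `neptune_run` is a special input resolved by the
                # Kedro-Neptune plugin; it hands the node a handler
                # for the current Neptune run.
                inputs=["trained_model", "X_test", "y_test", "neptune_run"],
                outputs=None,
                name="evaluate_models",
            ),
        ]
    )
```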

Comparing nodes in a single pipeline execution

Once you have some runs logged, you can compare nodes within a Kedro pipeline execution.

  1. To open a run, click on it in the runs table (or follow one of the Neptune links that appeared in your console output).
  2. To compare your models on accuracy, go to All metadata and navigate to the kedro/nodes/evaluate_models/metrics namespace.
  3. To preview the plots, navigate to the kedro/nodes/evaluate_models/plots namespace.
  4. To combine the above in one view, create a custom dashboard.

    See example dashboard in Neptune 

Comparing nodes between multiple pipeline executions

  1. Navigate to the runs table.
  2. Click Add column.
  3. Add the following fields as columns in the table:

    • parameters from the kedro/catalog/parameters/ namespace
    • metrics from the kedro/nodes/evaluate_models/metrics/ namespace

    Tip

    • To customize the name and color of a column, click the settings icon.
    • To save your view for later, click Save view as new above the table.

    See table view in Neptune
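
If you'd rather compare runs programmatically, you can fetch the same fields with the Neptune Python API. A minimal sketch, assuming a placeholder project name and illustrative field paths that match what your pipeline actually logged:

```python
import neptune

# Read-only connection to the project; replace with your workspace/project name.
project = neptune.init_project(project="workspace/project", mode="read-only")

# Fetch the same fields you added as table columns (names are illustrative).
runs_df = project.fetch_runs_table(
    columns=[
        "kedro/catalog/parameters/learning_rate",
        "kedro/nodes/evaluate_models/metrics/accuracy",
    ]
).to_pandas()

print(runs_df.head())
```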