# Kedro integration guide: Compare results between nodes
You can compare metrics, parameters, dataset versions, and other metadata from Kedro pipeline nodes.
This guide shows how to:
- Log metadata from model evaluations that happen in multiple nodes in a Kedro pipeline.
- Compare metrics in the runs table.
- Compare ROC curves and precision–recall curves in the Neptune app.
## Before you start
- Sign up at neptune.ai/register.
- Create a project for storing your metadata.
- Have the Kedro–Neptune plugin configured and initialized according to the Setup guide.
- Set up your Kedro scripts and create a few training runs, as shown in the Preparing the training runs section of the pipeline comparison guide (a sketch of such an evaluation node follows this list).
- If you're not interested in dataset versions, you can skip the step where datasets are defined.
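If it helps to picture what those runs contain, the snippet below is a minimal sketch of an evaluation node that logs a metric through the `neptune_run` handler the Kedro–Neptune plugin exposes as a node input. It is not the exact code from the pipeline comparison guide; the `evaluate_model` function and the `rf_model`, `X_test`, and `y_test` dataset names are placeholders.

```python
from kedro.pipeline import node
from sklearn.metrics import accuracy_score


def evaluate_model(rf_model, X_test, y_test, neptune_run):
    """Score the model and log the result through the plugin's run handler."""
    y_pred = rf_model.predict(X_test)

    # The handler logs under the "kedro" base namespace, so this lands at
    # kedro/nodes/evaluate_models/metrics/accuracy in the Neptune app.
    neptune_run["nodes/evaluate_models/metrics/accuracy"] = accuracy_score(
        y_test, y_pred
    )


# List "neptune_run" as an extra input so the plugin injects the handler.
evaluate_models_node = node(
    func=evaluate_model,
    inputs=["rf_model", "X_test", "y_test", "neptune_run"],
    outputs=None,
    name="evaluate_models",
)
```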
## Comparing nodes in a single pipeline execution
Once you have some runs logged, you can compare nodes within a Kedro pipeline execution.
- To open a run, click on it in the runs table (or follow one of the Neptune links that appeared in your console output).
- To compare your models on accuracy, in All metadata, navigate to the `kedro/nodes/evaluate_models/metrics` namespace.
- To preview the plots, navigate to the `kedro/nodes/evaluate_models/plots` namespace.
- To combine the above in one view, create a custom dashboard.
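The figures in the plots namespace can be uploaded from the same node. Below is a minimal sketch that assumes Neptune client 1.x and scikit-learn's curve display helpers; the field names are chosen to match the namespaces above, and the model and dataset names are the same placeholders as in the earlier sketch.

```python
import matplotlib.pyplot as plt
from neptune.types import File
from sklearn.metrics import PrecisionRecallDisplay, RocCurveDisplay


def log_evaluation_plots(rf_model, X_test, y_test, neptune_run):
    # Build ROC and precision-recall figures for the fitted model
    roc = RocCurveDisplay.from_estimator(rf_model, X_test, y_test)
    pr = PrecisionRecallDisplay.from_estimator(rf_model, X_test, y_test)

    # Uploads land under kedro/nodes/evaluate_models/plots/ in the app
    neptune_run["nodes/evaluate_models/plots/roc_curve"].upload(
        File.as_image(roc.figure_)
    )
    neptune_run["nodes/evaluate_models/plots/precision_recall"].upload(
        File.as_image(pr.figure_)
    )

    # Close the figures once they have been uploaded
    plt.close("all")
```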
## Comparing nodes between multiple pipeline executions
- Navigate to the runs table.
- Click Add column.
- Add the following fields as columns in the table:
    - parameters from the `kedro/catalog/parameters/` namespace
    - metrics from the `kedro/nodes/evaluate_models/metrics/` namespace
Tip
- To customize the name and color of a column, click the settings icon.
- To save your view for later, click Save view as new above the table.
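If you'd rather inspect the same columns in a notebook, the Neptune client can fetch the runs table as a pandas DataFrame. A minimal sketch, assuming Neptune client 1.x; the project name and the `learning_rate` parameter are placeholders, not values from this guide.

```python
import neptune

# Read-only connection; replace "workspace/project" with your own project
project = neptune.init_project(project="workspace/project", mode="read-only")

# Fetch selected parameter and metric columns across all runs
runs_df = project.fetch_runs_table(
    columns=[
        "kedro/catalog/parameters/learning_rate",  # hypothetical parameter
        "kedro/nodes/evaluate_models/metrics/accuracy",
    ]
).to_pandas()

print(runs_df.head())
```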