
Working with skorch#


Charts displaying validation metrics logged with skorch

With the Neptune–skorch integration, you can log relevant metadata from your net history to Neptune. These include:

  • Model summary information
  • Model configuration (such as learning rate and optimizer)
  • Epoch durations
  • Training and validation metrics
  • Learning rate (event_lr)
  • Change points (when available)

See example in Neptune · Code examples


Before you start#

The integration is implemented as part of the logging module of the skorch framework, so you don't need to install anything else.
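You do still need skorch itself and the Neptune client library; both are published on PyPI, so pip install neptune skorch is enough if they're missing.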


To log skorch metrics with Neptune:

  1. Create a Neptune run:

    import neptune
    run = neptune.init_run()  # (1)!
    1. If you haven't set up your credentials, you can log anonymously: neptune.init_run(api_token=neptune.ANONYMOUS_API_TOKEN, project="common/skorch-integration")
  2. Create a NeptuneLogger callback.

    Pass the run object as the first argument:

    from skorch.callbacks import NeptuneLogger

    neptune_logger = NeptuneLogger(run, close_after_train=False)
  3. Pass the callback to the classifier and fit the estimator to your data:

    net = NeuralNetClassifier(
        ClassifierModule,  # replace with your torch.nn.Module
        callbacks=[neptune_logger],
    )
    net.fit(X_train, y_train)
  4. If you set close_after_train to False, stop the run once you're finished (a consolidated sketch of all four steps follows this list):

    run.stop()
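Putting the four steps together, here's a minimal end-to-end sketch. It is not the official example: ClassifierModule is a hypothetical stand-in for your own torch.nn.Module, X_train and y_train stand in for your data, and the hyperparameters are arbitrary.

import neptune
from skorch import NeuralNetClassifier
from skorch.callbacks import NeptuneLogger

# Step 1: create the run (pass credentials here if they aren't set in the environment)
run = neptune.init_run()

# Step 2: create the callback and keep the run open after training
neptune_logger = NeptuneLogger(run, close_after_train=False)

# Step 3: train with the callback attached
net = NeuralNetClassifier(
    ClassifierModule,  # hypothetical: your torch.nn.Module
    max_epochs=20,     # arbitrary example values
    lr=0.1,
    callbacks=[neptune_logger],
)
net.fit(X_train, y_train)

# Step 4: stop the run manually, since close_after_train=False
run.stop()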

More options#

For ways to customize the NeptuneLogger callback, see the skorch integration API reference.

Logging additional metrics after training#

You can log metrics after the training has finished.

from sklearn.metrics import roc_auc_score

y_pred = net.predict_proba(X)
auc = roc_auc_score(y, y_pred[:, 1])

run["roc_auc_score"].append(auc)



Logging performance charts#

You can additionally log charts, such as an ROC curve.

from neptune.types import File
from scikitplot.metrics import plot_roc
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(16, 12))
plot_roc(y, y_pred, ax=ax)

run["roc_curve"].upload(File.as_html(fig))
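If you'd rather log a static image than interactive HTML, File.as_image(fig) can be used in place of File.as_html(fig); both are part of Neptune's File type.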

Logging trained model#

You can log the net object after training.
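A minimal sketch of one way to do it: save the parameters to a file with skorch's save_params, then upload that file to the run (the file name and namespace here are just examples).

# Save the trained parameters to disk, then upload the file to the run
net.save_params(f_params="basic_model.pkl")  # example file name
run["training/model/params"].upload("basic_model.pkl")  # namespace is your choice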



If you set close_after_train to False, stop the run once you're finished logging:

run.stop()

Uploading checkpoints#

You can upload checkpoint files to Neptune by passing the checkpoints directory to the callback arguments and manually uploading it to the Neptune run.

  1. Pass the directory where the checkpoint files are saved to the callback arguments.

    from skorch.callbacks import Checkpoint

    checkpoint_dirname = "path/to/checkpoints/directory"  # replace with your own
    checkpoint = Checkpoint(dirname=checkpoint_dirname)
    net = NeuralNetClassifier(
        ClassifierModule,  # hypothetical: your torch.nn.Module
        callbacks=[neptune_logger, checkpoint],
    )
  2. Upload that same directory to the Neptune run, in a namespace of your choice:

    run["training/model/checkpoints"].upload_files(checkpoint_dirname)

    In this case, the checkpoints are uploaded to a namespace called "checkpoints", which is nested under "model" and "training". You can replace it with a structure of your own choosing.

  3. If you set close_after_train to False, stop the run once you're finished logging:

    run.stop()