Working with skorch#
With the Neptune–skorch integration, you can log relevant metadata from your net
history to Neptune. These include:
- Model summary information
- Model configuration (such as learning rate and optimizer)
- Epoch durations
- Metrics for the training and validation sets
- Learning rates (event_lr)
- Change points (when available)
Related
- Example in Neptune
- Code examples
- API reference ≫ skorch
- NeptuneLogger in the skorch API reference
- skorch on GitHub
Before you start#
- Set up Neptune.
- The integration is implemented as part of the logging module of the skorch framework, so you don't need to install anything else.
Usage#
To log skorch metrics with Neptune:
1. Create a Neptune run:

    import neptune

    run = neptune.init_run()

    If you haven't set up your credentials, you can log anonymously:

    run = neptune.init_run(
        api_token=neptune.ANONYMOUS_API_TOKEN,
        project="common/skorch-integration",
    )

2. Create a NeptuneLogger callback, passing the run object as the first argument.

3. Pass the callback to the classifier and fit the estimator to your data.

4. If you set close_after_train to False, stop the run once you're finished:

    neptune_logger.run.stop()
More options#
For ways to customize the NeptuneLogger callback, see API reference ≫ skorch.
Logging additional metrics after training#
You can log metrics after the training has finished.
from sklearn.metrics import roc_auc_score
y_pred = net.predict_proba(X)
auc = roc_auc_score(y, y_pred[:, 1])
neptune_logger.run["roc_auc_score"].append(auc)
Logging performance charts#
You can additionally log charts, such as an ROC curve.
from neptune.types import File
from scikitplot.metrics import plot_roc
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(16, 12))
plot_roc(y, y_pred, ax=ax)
neptune_logger.run["roc_curve"].upload(File.as_html(fig))
Logging the trained model#
You can log the net object after training:
net.save_params(f_params="basic_model.pkl")
neptune_logger.run["basic_model"].upload("basic_model.pkl")
Warning

If you set close_after_train to False, stop the run once you're finished logging:

neptune_logger.run.stop()
Uploading checkpoints#
You can upload checkpoint files to Neptune by passing the checkpoints directory to the callback arguments and manually uploading it to the Neptune run.
1. Pass the directory where the checkpoint files are saved to the callback arguments.

2. Upload that same directory to the Neptune run, in a namespace of your choice.

    In this case, the checkpoints are uploaded to a namespace called "checkpoints", which is nested under "model" and "training". You can replace it with a structure of your own choosing.

3. If you set close_after_train to False, stop the run once you're finished logging:

    neptune_logger.run.stop()