skorch integration guide#
With the Neptune-skorch integration, you can log relevant metadata from your net history to Neptune. These include:
- Model summary information
- Model configuration (such as learning rate and optimizer)
- Epoch durations
- Training and validation metrics
- event_lr
- Change points (when available)
See example in Neptune | Code examples
Before you start#
- Sign up at neptune.ai/register.
- Create a project for storing your metadata.
- Have skorch installed.
Installing the integration#
The integration is implemented as part of the logging module of the skorch framework, so you don't need to install anything else.
To install Neptune:
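```bash
# Typical installation with pip; adjust to your environment or package manager
pip install -U neptune
```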
Passing your Neptune credentials
Once you've registered and created a project, set your Neptune API token and full project name to the NEPTUNE_API_TOKEN and NEPTUNE_PROJECT environment variables, respectively.
To find your API token: In the bottom-left corner of the Neptune app, expand the user menu and select Get my API token.
Your full project name has the form workspace-name/project-name. You can copy it from the project settings: Click the menu in the top-right → Details & privacy.
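On Linux and macOS, a common approach (assuming a bash-like shell) is to export the variables in your terminal session or shell profile, using your own values:

```bash
export NEPTUNE_API_TOKEN="your-api-token"                # your API token here
export NEPTUNE_PROJECT="workspace-name/project-name"     # your full project name here
```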
On Windows, navigate to Settings → Edit the system environment variables, or enter the following in Command Prompt: setx SOME_NEPTUNE_VARIABLE "some-value"
Although it's not recommended, especially for the API token, you can also pass your credentials in the code when initializing Neptune:
import neptune

run = neptune.init_run(
    project="ml-team/classification",  # your full project name here
    api_token="h0dHBzOi8aHR0cHM6Lkc78ghs74kl0jvYh...3Kb8",  # your API token here
)
For more help, see Set Neptune credentials.
Usage#
To log skorch metrics with Neptune:
- Create a Neptune run.
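A minimal sketch, assuming your Neptune credentials are set as environment variables (see above):

```python
import neptune

# Credentials are read from the NEPTUNE_API_TOKEN and NEPTUNE_PROJECT environment variables
run = neptune.init_run()
```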
If you haven't set up your credentials, you can log anonymously.
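For example, using Neptune's built-in anonymous token (the project name below is only an illustration; use a project set up for anonymous logging):

```python
import neptune

run = neptune.init_run(
    api_token=neptune.ANONYMOUS_API_TOKEN,  # built-in anonymous token
    project="common/skorch-integration",    # example public project
)
```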
- Create a NeptuneLogger callback. Pass the run object as the first argument.
- Pass the callback to the classifier and fit the estimator to your data.
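A sketch, assuming ClassifierModule is your PyTorch module class and X, y are your training data; the hyperparameters are only illustrative:

```python
from skorch import NeuralNetClassifier

net = NeuralNetClassifier(
    ClassifierModule,  # your torch.nn.Module class
    max_epochs=20,
    lr=0.1,
    callbacks=[neptune_logger],
)
net.fit(X, y)
```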
- If you set close_after_train to False, stop the run once you're finished:
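```python
run.stop()  # syncs any remaining metadata and closes the connection
```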
More options#
For ways to customize the NeptuneLogger callback, see the API reference.
Logging additional metrics after training#
You can log additional metrics after training has finished.
from sklearn.metrics import roc_auc_score
y_pred = net.predict_proba(X)
auc = roc_auc_score(y, y_pred[:, 1])
neptune_logger.run["roc_auc_score"].append(auc)
Logging performance charts#
You can additionally log charts, such as an ROC curve.
from scikitplot.metrics import plot_roc
import matplotlib.pyplot as plt
from neptune.types import File
fig, ax = plt.subplots(figsize=(16, 12))
plot_roc(y, y_pred, ax=ax)
neptune_logger.run["roc_curve"].upload(File.as_html(fig))
Logging trained model#
You can log the net object after training.
net.save_params(f_params="basic_model.pkl")
neptune_logger.run["basic_model"].upload("basic_model.pkl")
Warning
If you set close_after_train to False, stop the run once you're finished logging:
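```python
# The NeptuneLogger callback exposes the run as neptune_logger.run
neptune_logger.run.stop()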
Uploading checkpoints#
You can upload checkpoint files to Neptune by passing the checkpoints directory to the callback arguments and manually uploading it to the Neptune run.
- Pass the directory where the checkpoint files are saved to the callback arguments.
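A sketch, reusing the net setup from the Usage section (ClassifierModule, X, and y are placeholders) and an example directory name:

```python
from skorch import NeuralNetClassifier
from skorch.callbacks import Checkpoint

checkpoint_dirname = "./checkpoints"  # example directory for the checkpoint files
checkpoint = Checkpoint(dirname=checkpoint_dirname)

net = NeuralNetClassifier(
    ClassifierModule,
    max_epochs=20,
    lr=0.1,
    callbacks=[neptune_logger, checkpoint],
)
net.fit(X, y)
```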
- Upload that same directory to the Neptune run, in a namespace of your choice:
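```python
# Upload everything in the checkpoints directory from the previous step
neptune_logger.run["training/model/checkpoints"].upload_files(checkpoint_dirname)
```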
In this case, the checkpoints are uploaded to a namespace called "checkpoints", which is nested under "model" and "training". You can replace it with a structure of your own choosing.
- If you set close_after_train to False, stop the run once you're finished logging:
Related
- Upload files
- Create a run
- skorch integration API reference
- NeptuneLogger in the skorch API reference
- skorch on GitHub