Lightning integration guide#
Lightning is a lightweight PyTorch wrapper for high-performance AI research. With the Neptune integration, you can:
- Monitor model training live
- Log training, validation, and testing metrics and visualize them in the Neptune app
- Log hyperparameters
- Monitor hardware consumption
- Log performance charts and images
- Save model checkpoints
- Track training code and Git commit information
Quickstart#
Tip
This section is for PyTorch Lightning users who are familiar with loggers, such as the CSV or TensorBoard logger. NeptuneLogger is part of the Lightning library. To start logging, create a Neptune logger and pass it to the trainer:
- Create the logger:

```python
from lightning import LightningModule, Trainer
from lightning.pytorch.loggers.neptune import NeptuneLogger
from neptune import ANONYMOUS_API_TOKEN

neptune_logger = NeptuneLogger(
    api_key=ANONYMOUS_API_TOKEN,  # (1)!
    project="common/pytorch-lightning-integration",  # (2)!
    tags=["training", "resnet"],  # optional
)
```
1. The api_key argument is included to enable anonymous logging. Once you register, you should leave the token out of your script and instead save it as an environment variable.
2. Projects in the common workspace are public and can be used for testing. To log to your own workspace, pass the full name of your Neptune project: workspace-name/project-name. For example, "ml-team/classification". To copy the name, click the menu in the top-right corner and select Edit project details.
There are further ways to customize the behavior of the logger. For details, see the Lightning API reference.
- Pass the logger to the trainer:
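The code for this step isn't shown above; a minimal sketch, assuming the neptune_logger from the previous step:

```python
trainer = Trainer(
    logger=neptune_logger,
    max_epochs=10,  # illustrative value
)
```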
- Run the trainer:
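Again as a sketch, where model and train_loader stand in for your LightningModule and DataLoader:

```python
trainer.fit(model, train_loader)
```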
The Neptune logger setup is complete and you can run your scripts without additional changes.
Your metadata will be logged in the Neptune project for further analysis, comparison, and collaboration.
Full walkthrough#
This guide walks you through connecting NeptuneLogger
to your machine learning scripts and analyzing some logged metadata.
Before you start#
- Sign up at neptune.ai/register.
- Create a project for storing your metadata.
- Have Neptune and Lightning installed.
If you'd rather follow the guide without any setup, you can run the example in Colab.
Adding NeptuneLogger to the Lightning script#
Lightning has a unified way of logging metadata: loggers.
You can learn more about logger support in the Lightning docs.
To start logging, create a Neptune logger and pass it to the trainer:
- Create a NeptuneLogger instance:

```python
from lightning.pytorch.loggers.neptune import NeptuneLogger

# Create NeptuneLogger instance
neptune_logger = NeptuneLogger()  # (1)!
```
1. If you haven't set up your credentials, you can log anonymously:
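The snippet for this note isn't shown above; a sketch based on the quickstart example:

```python
from neptune import ANONYMOUS_API_TOKEN

neptune_logger = NeptuneLogger(
    api_key=ANONYMOUS_API_TOKEN,
    project="common/pytorch-lightning-integration",
)
```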
Changing the metadata folder name

By default, the metadata is logged under a namespace called training. To change the namespace, modify the prefix argument of the constructor, as in the sketch below.
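A minimal sketch, where "finetuning" is a placeholder namespace name:

```python
# Metadata is now logged under "finetuning" instead of the default "training"
neptune_logger = NeptuneLogger(prefix="finetuning")
```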
Once the Neptune logger is created, a link appears in the console output. Click the link to open the run in Neptune. You'll see the metadata appear as it gets logged.
Sample output
https://app.neptune.ai/workspace-name/project-name/e/RUN-100/metadata
The general format is https://app.neptune.ai/<workspace>/<project> followed by the Neptune ID of the initialized object.
- Pass neptune_logger to the trainer:
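The code for this step isn't shown above; a minimal sketch (max_epochs is an illustrative value):

```python
from lightning import Trainer

trainer = Trainer(
    logger=neptune_logger,
    max_epochs=10,
)
```

The Neptune logger is now ready to be used.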
- Pass your LightningModule and DataLoader instances to the fit() method of the trainer:
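A sketch, where model and train_loader are placeholders for your own LightningModule and DataLoader instances:

```python
model = ...         # your LightningModule instance
train_loader = ...  # your DataLoader instance

trainer.fit(model, train_loader)
```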
- Run your script:
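For example, if your training script is named main.py (a placeholder):

```
python main.py
```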
If Neptune can't find your project name or API token
As a best practice, you should save your Neptune API token and project name as environment variables. However, you can also pass them as arguments when you're using a function that takes api_token and project as parameters:

api_token="Your Neptune API token here"
- Find and copy your API token by expanding your user menu and selecting Get my API token.

project="workspace-name/project-name"
- Find and copy your project name in the top-right menu: Edit project details.
For example:
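A sketch with placeholder values. Note that the Lightning NeptuneLogger takes the token through its api_key parameter:

```python
from lightning.pytorch.loggers.neptune import NeptuneLogger

neptune_logger = NeptuneLogger(
    api_key="Your Neptune API token here",  # placeholder
    project="workspace-name/project-name",  # placeholder
)
```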
Analyzing the logged metadata in Neptune#
Your metadata will be logged in the given Neptune project for analysis, comparison, and collaboration.
To browse the metadata, follow the Neptune link in the console output.
You can also open the project and look for your run in the Runs section.
Viewing the metadata#
To view the metadata from your Lightning run:
- In the Run details view, select All metadata.
- Click training (or the name of your custom namespace, if you specified a different prefix when creating the logger).
Metrics are logged as nested dictionary-like structures defined in the LightningModule. For instructions, see the Specifying the metrics structure section.
Charts#
In the Run details view, select Charts to display all the metrics at once.
Tip
Create a custom dashboard to display various types of metadata in one view.
More options#
You can configure the Neptune logger in various ways to address custom logging needs.
In the following sections, we describe some common use cases.
Related
For the full NeptuneLogger API reference, see the Lightning docs.
Specifying the metrics structure#
Metrics are logged as nested dictionary-like structures defined in the LightningModule. You can specify the structure with self.log("path/to/metric", value):
```python
from lightning import LightningModule

class MNISTModel(LightningModule):
    def training_step(self, batch, batch_idx):
        loss = ...
        self.log("train/batch/loss", loss)

        acc = ...
        self.log("train/batch/acc", acc)

    def training_epoch_end(self, outputs):
        loss = ...
        acc = ...
        self.log("train/epoch/loss", loss)
        self.log("train/epoch/acc", acc)
```
Using the logger methods anywhere in LightningModule#
You can use the default logging methods with the Neptune logger:
- self.log()
- log_metrics()
- log_hyperparams()
```python
from lightning import LightningModule

class LitModel(LightningModule):
    def training_step(self, batch, batch_idx):
        # Log metrics with the standard log method
        loss = ...
        self.log("train/loss", loss)
```
As another example, the code below yields the following result in Neptune: two series of values (acc and loss) logged under the namespace training/val.
```python
import pytorch_lightning as pl
from sklearn.metrics import accuracy_score

class LitModel(pl.LightningModule):
    def validation_epoch_end(self, outputs):
        loss = ...
        y_true = ...
        y_pred = ...
        acc = accuracy_score(y_true, y_pred)
        self.log("val/loss", loss)
        self.log("val/acc", acc)
```
Using Neptune logging methods in LightningModule#
To log custom metadata – such as images, CSV files, or interactive charts – you can access the Neptune run directly with the self.logger.experiment attribute. You can then use logging methods from the Neptune client library to track your metadata, such as append(), assign() (=), and upload().
```python
from lightning import LightningModule
from neptune.types import File

class LitModel(LightningModule):
    def any_lightning_module_function_or_hook(self):
        # Log images, using the Neptune client library
        img = ...
        self.logger.experiment["train/misclassified_imgs"].append(File.as_image(img))

        # Generic recipe, using the Neptune client library
        metadata = ...
        self.logger.experiment["your/metadata/structure"] = metadata  # (1)!
```
1. You can always define your own folder structure, depending on how you want to organize your metadata.
Logging after fitting or testing is finished#
You can use the created Neptune logger outside of the Trainer context, which lets you log objects after the fitting or testing methods are finished. This way, you're not restricted to the LightningModule class – you can log from any method or class in your project code.
```python
import neptune
from lightning import Trainer
from lightning.pytorch.loggers.neptune import NeptuneLogger

# Create run
run = neptune.init_run()

# Create logger
neptune_logger = NeptuneLogger(run=run)

trainer = Trainer(logger=neptune_logger)
model = ...
datamodule = ...

# Run fit and test
trainer.fit(model, datamodule=datamodule)
trainer.test(model, datamodule=datamodule)
```
Log additional metadata after fit and test:
```python
import matplotlib.pyplot as plt
from neptune.types import File
from scikitplot.metrics import plot_confusion_matrix  # assuming scikit-plot for the plot helper

# Log confusion matrix as image
fig, ax = plt.subplots()
plot_confusion_matrix(y_true, y_pred, ax=ax)
neptune_logger.experiment["test/confusion_matrix"].upload(File.as_image(fig))
```
Generic recipe for logging additional metadata:
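A sketch of the generic recipe, mirroring the in-module example shown earlier:

```python
# Define your own metadata structure on the logger's run
metadata = ...
neptune_logger.experiment["your/metadata/structure"] = metadata
```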
Passing any Neptune init parameter to the logger#
The Neptune logger accepts keyword arguments, which you can use to supply more detailed information about your run:
```python
from lightning.pytorch.loggers.neptune import NeptuneLogger

neptune_logger = NeptuneLogger(
    project="ml-team/classification",
    name="lightning-run",
    description="mlp quick run with pytorch-lightning",
    tags=["nlp", "quick-run"],
)
```
For the full list of arguments, see API reference ≫ neptune.init_run().
Logging model-related metadata#
If you have ModelCheckpoint configured, the Neptune logger automatically logs model checkpoints. Model weights are logged in the <prefix>/model/checkpoints namespace of the Neptune run.

You can log the model summary, as generated by the ModelSummary utility from Lightning. The summary is logged in the <prefix>/model/summary namespace of the Neptune run.

If you have ModelCheckpoint configured, the Neptune logger also automatically logs the best_model_path and best_model_score values. They are logged in the <prefix>/model namespace of the Neptune run.
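A minimal sketch of both, assuming an existing neptune_logger and model; the monitor value and max_depth are illustrative:

```python
from lightning import Trainer
from lightning.pytorch.callbacks import ModelCheckpoint

# Checkpoints saved by the callback are logged to <prefix>/model/checkpoints,
# along with best_model_path and best_model_score
checkpoint_callback = ModelCheckpoint(monitor="val/loss")
trainer = Trainer(logger=neptune_logger, callbacks=[checkpoint_callback])

# Logs the model summary to <prefix>/model/summary
neptune_logger.log_model_summary(model=model, max_depth=-1)
```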
Logging hyperparameters#
You can log hyperparameters by using the standard log_hyperparams()
method from the Lightning logger.
```python
from lightning.pytorch.loggers.neptune import NeptuneLogger

PARAMS = ...  # dict or argparse

neptune_logger.log_hyperparams(params=PARAMS)
```
You can display any logged parameters:
- In the runs table, by adding them as columns.
- In custom dashboards.
Learn more
- What you can log and display
- Add Neptune to your code
- Essential logging methods
- API reference ≫ Lightning integration
- NeptuneLogger reference in the Lightning API docs
- Lightning on GitHub
- Neptune's PyTorch Integration