
API reference: PyTorch integration#

You can use the NeptuneLogger to capture model training metadata when working with PyTorch.

NeptuneLogger#

Captures model training metadata and logs it to Neptune.

Parameters

| Name | Type | Default | Description |
|------|------|---------|-------------|
| `run` | `Run` or `Handler` | - | (required) An existing run reference, as returned by `neptune.init_run()`, or a namespace handler. |
| `base_namespace` | `str`, optional | `"training"` | Namespace under which all metadata logged by the Neptune logger is stored. |
| `model` | `torch.nn.Module` | - | (required) PyTorch model object to track. |
| `log_model_diagram` | `bool`, optional | `False` | Whether to save a visualization of the model. Requires torchviz to be installed. |
| `log_gradients` | `bool`, optional | `False` | Whether to track the Frobenius norm of the gradients. |
| `log_parameters` | `bool`, optional | `False` | Whether to track the Frobenius norm of the parameters. |
| `log_freq` | `int`, optional | `100` | How often to log the parameter and gradient norms. Applies only if `log_parameters` or `log_gradients` is set to `True`. |
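The `log_gradients` and `log_parameters` options track the Frobenius norm, that is, the square root of the sum of squared entries of a tensor. A quick stdlib illustration of the quantity being tracked (the `frobenius_norm` helper exists only for this example; it is not part of the integration):

```python
import math

def frobenius_norm(matrix):
    # Frobenius norm: square root of the sum of squared entries
    return math.sqrt(sum(x * x for row in matrix for x in row))

weights = [[3.0, 4.0], [0.0, 0.0]]
print(frobenius_norm(weights))  # 5.0
```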

Examples#

Creating a Neptune run and callback#

Create a run:

import neptune

run = neptune.init_run()

If Neptune can't find your project name or API token:

As a best practice, you should save your Neptune API token and project name as environment variables:

export NEPTUNE_API_TOKEN="h0dHBzOi8aHR0cHM6Lkc78ghs74kl0jvYh3Kb8"
export NEPTUNE_PROJECT="ml-team/classification"

You can, however, also pass them as arguments when initializing Neptune:

run = neptune.init_run(
    api_token="h0dHBzOi8aHR0cHM6Lkc78ghs74kl0jvYh3Kb8",  # your token here
    project="ml-team/classification",  # your full project name here
)
You can find both in the Neptune app:

  • API token: In the bottom-left corner, expand the user menu and select Get my API token.
  • Project name: In the top-right menu, select Properties.

If you haven't registered, you can also log anonymously to a public project (make sure not to publish sensitive data through your code!):

run = neptune.init_run(
    api_token=neptune.ANONYMOUS_API_TOKEN,
    project="common/quickstarts",
)

Instantiate the Neptune callback:

from neptune.integrations.pytorch import NeptuneLogger

neptune_callback = NeptuneLogger(run=run, model=model)

Train your model:

import torch.nn.functional as F

# model, optimizer, train_loader, and device are defined as usual
for epoch in range(1, 4):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()

Additional options#

import neptune
from neptune.integrations.pytorch import NeptuneLogger

run = neptune.init_run(
    name="My PyTorch run",
    tags=["test", "pytorch", "fail_on_exception"],
    fail_on_exception=True,
)

neptune_callback = NeptuneLogger(
    run=run,
    model=model,
    base_namespace="test",
    log_model_diagram=True,
    log_gradients=True,
    log_parameters=True,
    log_freq=50,
)
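With `log_freq=50` as above, the parameter and gradient norms are sampled every 50th step rather than on every batch. A rough stdlib sketch of that cadence (the step counter and modulo check are an illustrative assumption about the sampling schedule, not the logger's actual internals):

```python
log_freq = 50  # matches the value passed to NeptuneLogger above

# Steps at which a norm value would be recorded over 200 training steps
sampled_steps = [step for step in range(1, 201) if step % log_freq == 0]
print(sampled_steps)  # [50, 100, 150, 200]
```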