API reference: 🤗 Transformers integration#

You can log model-training metadata by passing a Neptune callback to the `callbacks` argument of the `Trainer` constructor.

NeptuneCallback()#

Creates a Neptune callback that you pass to the `callbacks` argument of the `Trainer` constructor.

Parameters

| Name | Type | Default | Description |
| --- | --- | --- | --- |
| `api_token` | `str`, optional | `None` | Neptune API token obtained upon registration. You can leave this argument out if you have saved your token to the `NEPTUNE_API_TOKEN` environment variable (strongly recommended). |
| `project` | `str`, optional | `None` | Name of an existing Neptune project, in the form `"workspace-name/project-name"`. You can find and copy the name from the project settings → **Properties** in Neptune. If `None`, the value of the `NEPTUNE_PROJECT` environment variable is used. If you just want to try logging anonymously, you can use the public project `"common/huggingface-integration"`. |
| `name` | `str`, optional | `None` | Custom name for the run. |
| `base_namespace` | `str`, optional | `"finetuning"` | In the Neptune run, the root namespace (folder) that will contain all of the logged metadata. |
| `run` | `Run`, optional | `None` | Pass a Neptune run object if you want to continue logging to an existing run. See Logging to an existing object and Passing objects between files. |
| `log_parameters` | `bool`, optional | `True` | Log all Trainer arguments and model parameters provided by the Trainer. |
| `log_checkpoints` | `str`, optional | `None` | If `"same"`, uploads checkpoints whenever they are saved by the Trainer. If `"last"`, uploads only the most recently saved checkpoint. If `"best"`, uploads the best checkpoint (among the ones saved by the Trainer). If `None`, does not upload checkpoints. |
| `**neptune_run_kwargs` | optional | – | Additional keyword arguments to be passed directly to the `init_run()` method when a new run is created. |
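Since `api_token` and `project` fall back to environment variables, you can omit both arguments once the variables are set. A minimal shell setup might look like this (the values shown are placeholders; substitute your own token and project name):

```shell
# Placeholders - replace with your own credentials
export NEPTUNE_API_TOKEN="<your-api-token>"
export NEPTUNE_PROJECT="workspace-name/project-name"
```

With these set, `NeptuneCallback()` can be constructed without passing credentials in code, which also keeps the token out of version control.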

Example

from transformers import Trainer, TrainingArguments
from transformers.integrations import NeptuneCallback

# Create Neptune callback
neptune_callback = NeptuneCallback(
    name="DistilBERT",
    description="DistilBERT fine-tuned on GLUE/MRPC",
    tags=["args-callback", "fine-tune", "MRPC"],  # tags help you manage runs
    base_namespace="custom_name",  # the default is "finetuning"
    log_checkpoints="best",  # other options are "last", "same", and None
    capture_hardware_metrics=False,  # additional kwargs for a Neptune run
)

# Create training arguments
training_args = TrainingArguments(
    "quick-training-distilbert-mrpc",
    evaluation_strategy="steps",
    eval_steps=20,
    report_to="none",  # (1)
)

# Pass Neptune callback to Trainer
trainer = Trainer(
    model,
    training_args,
    callbacks=[neptune_callback],
)

trainer.train()

1. To avoid creating several callbacks, set the `report_to` argument to `"none"`. This will be the default behavior in version 5 of 🤗 Transformers.