API reference: 🤗 Transformers integration
You can log model training metadata by passing a Neptune callback to the Trainer callbacks argument.
NeptuneCallback()
Creates a Neptune callback that you pass to the callbacks argument of the Trainer constructor.
Parameters

Name | Type | Default | Description |
---|---|---|---|
api_token | str, optional | None | Neptune API token obtained upon registration. You can leave this argument out if you have saved your token to the NEPTUNE_API_TOKEN environment variable (strongly recommended). |
project | str, optional | None | Name of an existing Neptune project, in the form "workspace-name/project-name". In Neptune, you can copy the name from the project settings → Edit project details. If None, the value of the NEPTUNE_PROJECT environment variable is used. If you just want to try logging anonymously, you can use the public project "common/huggingface-integration". |
name | str, optional | None | Custom name for the run. |
base_namespace | str, optional | "finetuning" | In the Neptune run, the root namespace (folder) that will contain all of the logged metadata. |
run | Run, optional | None | Pass a Neptune run object if you want to continue logging to an existing run. See Resume a run and Pass object between files. |
log_parameters | bool, optional | True | Log all Trainer arguments and model parameters provided by the Trainer. |
log_checkpoints | str, optional | None | Determines which model checkpoints are uploaded: "best" uploads the best checkpoint, "last" uploads the most recently saved one, "same" uploads every checkpoint as the Trainer saves it, and None disables checkpoint uploading. |
**neptune_run_kwargs | optional | - | Additional keyword arguments to be passed directly to the init_run() function when a new run is created. |
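For instance, to continue logging to an existing run (the run parameter above), you can open the run yourself and pass it to the callback. A minimal sketch, assuming the neptune client library is installed and that "HUG-1" is a placeholder ID for one of your existing runs:

import neptune
from transformers.integrations import NeptuneCallback

# Reopen an existing run by its ID ("HUG-1" is a placeholder)
run = neptune.init_run(
    project="workspace-name/project-name",
    with_id="HUG-1",
)

# The callback logs to the existing run instead of creating a new one
neptune_callback = NeptuneCallback(run=run)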
Example
from transformers import Trainer, TrainingArguments
from transformers.integrations import NeptuneCallback
# Create Neptune callback
neptune_callback = NeptuneCallback(
name="DistilBERT",
description="DistilBERT fine-tuned on GLUE/MRPC",
tags=["args-callback", "fine-tune", "MRPC"], # tags help you manage runs
base_namespace="custom_name", # the default is "finetuning"
log_checkpoints="best", # other options are "last", "same", and None
capture_hardware_metrics=False, # additional kwargs for a Neptune run
)
# Create training arguments
training_args = TrainingArguments(
"quick-training-distilbert-mrpc",
evaluation_strategy="steps",
eval_steps=20,
report_to="none", # (1)!
)
# Pass Neptune callback to Trainer
trainer = Trainer(
model,  # the model to fine-tune, instantiated before this snippet
training_args,
callbacks=[neptune_callback],
)
trainer.train()
- To avoid creating several callbacks, set the report_to argument to "none". This will be the default behavior in version 5 of 🤗 Transformers.
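After training, you may want to log extra metadata to the same run. The callback exposes a get_run() class method that returns the run associated with a trainer; a minimal sketch, where the "finetuning/final_note" field name is just an illustration:

# Retrieve the Neptune run that the callback logged to
run = NeptuneCallback.get_run(trainer)
run["finetuning/final_note"] = "fine-tuning complete"  # illustrative field name
run.stop()  # close the connection when done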
See also: NeptuneCallback in the 🤗 Transformers API reference