Keras integration guide#
Keras is an API built on top of TensorFlow. The Neptune–Keras integration logs the following metadata automatically:
- Model summary
- Parameters of the optimizer used for training the model
- Parameters passed to Model.fit during the training
- Current learning rate at every epoch
- Hardware consumption and stdout/stderr output during training
- Training code and Git information
You can also set the log_model_diagram option to True to save the model visualization produced by the Keras functions model_to_dot() and plot_model().
Before you start#
- Sign up at neptune.ai/register.
- Create a project for storing your metadata.
- Have Keras installed.
Installing the integration#
To use your pre-installed version of Neptune together with the integration:
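A typical command, assuming the integration is published as the neptune-tensorflow-keras package:
pip install -U neptune-tensorflow-keras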
To install both Neptune and the integration:
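A typical command, assuming the tensorflow-keras extra is available for the neptune package:
pip install -U "neptune[tensorflow-keras]"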
Passing your Neptune credentials
Once you've registered and created a project, set your Neptune API token and full project name to the NEPTUNE_API_TOKEN
and NEPTUNE_PROJECT
environment variables, respectively.
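For example, in a terminal session before running your script (the values below are placeholders):
export NEPTUNE_API_TOKEN="your-api-token"
export NEPTUNE_PROJECT="workspace-name/project-name"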
To find your API token: In the bottom-left corner of the Neptune app, expand the user menu and select Get my API token.
To find your project: Your full project name has the form workspace-name/project-name
. To copy the name, click the menu in the top-right corner and select Edit project details.
Although it's not recommended, especially for the API token, you can also pass your credentials in the code when initializing Neptune:
import neptune

run = neptune.init_run(
    project="ml-team/classification",  # your full project name here
    api_token="h0dHBzOi8aHR0cHM6Lkc78ghs74kl0jvYh...3Kb8",  # your API token here
)
For more help, see Set Neptune credentials.
If you'd rather follow the guide without any setup, you can run the example in Colab.
Keras logging example#
This example shows how to use NeptuneCallback
to log metadata as you train your model with Keras.
- Create a run:
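A minimal sketch, assuming your Neptune credentials are set as environment variables:
import neptune

run = neptune.init_run()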
If you haven't set up your credentials, you can log anonymously:
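A sketch using Neptune's anonymous token; the project name below is an assumption based on the example link shown later in this guide:
import neptune

run = neptune.init_run(
    api_token=neptune.ANONYMOUS_API_TOKEN,
    project="common/tf-keras-integration",
)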
- Initialize the Neptune callback and pass it to model.fit():

from neptune.integrations.tensorflow_keras import NeptuneCallback

neptune_callback = NeptuneCallback(run=run)  # (1)!

model.fit(
    x_train,
    y_train,
    epochs=5,
    batch_size=64,
    callbacks=[neptune_callback],
)
1. You can customize the base namespace (folder) of your logged metadata here, by supplying the base_namespace argument and setting it to a name of your choice. The default is "training".
- Run your script as you normally would.
To open the run, click the Neptune link that appears in the console output.
Example link: https://app.neptune.ai/o/common/org/tf-keras-integration/e/TFK-18/metadata
- Monitor your Keras training in Neptune.
In the Run details view, select Charts to watch the training metrics live.
Tip
You can monitor the hardware consumption in the Monitoring section.
- To stop the connection to Neptune and sync all data, call the stop() method:
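run.stop()  # waits for all queued metadata to sync before closing the run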
More options#
Enabling logging on batch#
You can set the callback to log metrics for each batch, in addition to each epoch, by tweaking the callback:
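A minimal sketch, assuming the callback exposes a log_on_batch flag:
from neptune.integrations.tensorflow_keras import NeptuneCallback

# log metrics for every batch in addition to every epoch
neptune_callback = NeptuneCallback(run=run, log_on_batch=True)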
Saving the model visualization#
You can save the model visualization diagram by setting the log_model_diagram parameter to True.
Note
This option requires pydot to be installed.
import neptune
from neptune.integrations.tensorflow_keras import NeptuneCallback
run = neptune.init_run()
neptune_callback = NeptuneCallback(run=run, log_model_diagram=True)
Logging model weights#
You can log model weights to Neptune both during and after training.
- To have all the metadata in a single place, you can log model metadata to the same run you created earlier.
- To manage your model metadata separately, you can use the Neptune model registry.
Initialize a ModelVersion object.
You first need to create a Model object that functions as an umbrella for all the versions. You can create and manage each model version separately.
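A minimal sketch of creating the umbrella Model object; the key below is a hypothetical identifier:
import neptune

neptune_model = neptune.init_model(key="KER")  # hypothetical key; must be unique within the project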
Create a specific version of that model by passing the model ID:
model_id = neptune_model["sys/id"].fetch()
model_version = neptune.init_model_version(model=model_id)
Log metadata to the model version, just like you would for runs:
import glob
model.save("my_model")
model_version["saved_model"].upload("my_model/saved_model.pb")
for name in glob.glob("variables/*"):
    model_version[name].upload(name)
Result
The model metadata will now be displayed in the Models section of the project.
For more, see the Model registry overview.
Uploading checkpoints every epoch#
You can set up the ModelCheckpoint
to save a new checkpoint every epoch:
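A sketch using the standard Keras ModelCheckpoint callback; the checkpoints/ directory and filename pattern are assumptions:
import os
from tensorflow.keras.callbacks import ModelCheckpoint

os.makedirs("checkpoints", exist_ok=True)

checkpoint_callback = ModelCheckpoint(
    filepath="checkpoints/epoch-{epoch:02d}.keras",  # one file per epoch (hypothetical path)
    save_freq="epoch",
)

model.fit(
    x_train,
    y_train,
    epochs=5,
    callbacks=[neptune_callback, checkpoint_callback],
)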
Then upload each checkpoint file during or after training:
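One way to do this, assuming the hypothetical checkpoints/ directory from the sketch above:
import glob
import os

for checkpoint_path in glob.glob("checkpoints/*.keras"):
    # use the file name (without extension) as the field name under the run
    name = os.path.splitext(os.path.basename(checkpoint_path))[0]
    run["training/checkpoints/" + name].upload(checkpoint_path)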
Logging test sample images#
You can log sample images to Neptune with the append()
method, which will create a series of images.
from neptune.types import File
for image in x_test[:100]:
    run["test/sample_images"].append(File.as_image(image))
Manually logging metadata#
If you have other types of metadata that are not covered in this guide, you can still log them using the Neptune client library.
When you initialize the run, you get a run
object, to which you can assign different types of metadata in a structure of your own choosing.
import neptune
from neptune.integrations.tensorflow_keras import NeptuneCallback
# Create a new Neptune run
run = neptune.init_run()
# Log metrics using NeptuneCallback
neptune_callback = NeptuneCallback(run=run)
model.fit(x_train, y_train, batch_size=32, callbacks=[neptune_callback])
# Use the same run to upload files
run["test/preds"].upload("path/to/test_preds.csv")
# Use the same run to track and version artifacts
run["train/images"].track_files("./datasets/images")
# Use the same run to log numbers or text
run["tokenizer"] = "regexp_tokenize"
Related
- What you can log and display
- Resume a run
- Add Neptune to your code
- API reference ≫ Keras integration
- neptune-tensorflow-keras repo on GitHub
- Keras on GitHub