Sync and Compare TensorBoard Runs

This example shows you how to sync your logs directory with Neptune, then compare runs, just like you would in TensorBoard.

Requirements
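First, make sure the integration package is installed alongside TensorFlow. A typical setup, assuming you use pip (pick whichever TensorFlow version you work with):

pip install neptune-tensorboard tensorflow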

Create a simple training script with TensorBoard logging. This example uses TensorFlow 1.x; however, neptune-tensorboard works with both TensorFlow 1 and TensorFlow 2.

import random

import tensorflow as tf

PARAMS = {
    'epoch_nr': 5,
    'batch_size': 256,
    'lr': 0.1,
    'momentum': 0.4,
    'use_nesterov': True,
    'unit_nr': 256,
    'dropout': 0.0
}

# Give each run a unique name so its logs land in a separate subdirectory of logs/
RUN_NAME = 'run_{}'.format(random.getrandbits(64))
EXPERIMENT_LOG_DIR = 'logs/{}'.format(RUN_NAME)

mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(PARAMS['unit_nr'], activation=tf.nn.relu),
  tf.keras.layers.Dropout(PARAMS['dropout']),
  tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])

optimizer = tf.keras.optimizers.SGD(lr=PARAMS['lr'],
                                    momentum=PARAMS['momentum'],
                                    nesterov=PARAMS['use_nesterov'])

model.compile(optimizer=optimizer,
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Write training metrics to the run's log directory, which is what you later sync with Neptune
tensorboard = tf.keras.callbacks.TensorBoard(log_dir=EXPERIMENT_LOG_DIR)
model.fit(x_train, y_train,
          epochs=PARAMS['epoch_nr'],
          batch_size=PARAMS['batch_size'],
          callbacks=[tensorboard])

Change the parameters and run a few different experiments to see what works best:

python main.py
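Rather than editing PARAMS by hand between runs, you can expose selected hyperparameters on the command line. Here is a minimal sketch; the --lr and --dropout flags are illustrative choices, not part of the original script. Place it after the PARAMS dictionary, before the model is built:

import argparse

# Hypothetical command-line overrides for two of the hyperparameters
parser = argparse.ArgumentParser()
parser.add_argument('--lr', type=float, default=PARAMS['lr'])
parser.add_argument('--dropout', type=float, default=PARAMS['dropout'])
args = parser.parse_args()

# Overwrite the defaults so each run is fully described by its command line
PARAMS['lr'] = args.lr
PARAMS['dropout'] = args.dropout

Each run then writes to its own log subdirectory, for example:

python main.py --lr 0.05 --dropout 0.2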

Sync TensorBoard logdir with Neptune
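The neptune CLI reads your credentials from the environment. Assuming you have copied an API token from the Neptune UI, export it first (the value below is a placeholder):

export NEPTUNE_API_TOKEN='YOUR_API_TOKEN'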

You can now sync your logs directory with Neptune:

neptune tensorboard /path/to/logs --project USER_NAME/PROJECT_NAME

Once synced, you can organize your runs and collaborate on them in the Neptune UI.

Organize TensorBoard experiments in Neptune

You can also compare runs just like in TensorBoard:

Compare TensorBoard runs in Neptune