Quickstart
Start by installing Neptune and configuring your Neptune API token and project, as described in Get started.
Then, you can use the following script to log some mock metadata:
```python
from random import random

from neptune_scale import Run


def hello_neptune():
    run = Run(
        api_token="eyJhcGlfYWRkcmVz...In0=",  # not needed if using environment variable
        project="team-alpha/project-x",  # not needed if using environment variable
        experiment_name="seabird-flying-skills",
    )
    run.log_configs(
        {
            "parameters/use_preprocessing": True,
            "parameters/learning_rate": 0.002,
            "parameters/batch_size": 64,
            "parameters/optimizer": "Adam",
        }
    )
    offset = random() / 5
    for step in range(20):
        acc = 1 - 2**-step - random() / (step + 1) - offset
        loss = 2**-step + random() / (step + 1) + offset
        run.log_metrics(
            data={
                "accuracy": acc,
                "loss": loss,
            },
            step=step,
        )
    run.add_tags(["quickstart"])
    print("\nOpen in Neptune web app:", run.get_experiment_url(), "\n")
    run.close()


if __name__ == "__main__":
    hello_neptune()
```
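Instead of hardcoding credentials in the script, you can set them once in your shell and omit the `api_token` and `project` arguments from the `Run(...)` call. A minimal sketch, assuming the `NEPTUNE_API_TOKEN` and `NEPTUNE_PROJECT` environment variables described in Neptune's configuration docs (the values shown are the placeholders from the script above):

```shell
# Export Neptune credentials so Run() can pick them up automatically
export NEPTUNE_API_TOKEN="eyJhcGlfYWRkcmVz...In0="
export NEPTUNE_PROJECT="team-alpha/project-x"
```

Keeping the token out of source code also avoids accidentally committing it to version control.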
The `if __name__ == "__main__":` line ensures safe importing of the main module. For details, see the Python documentation.
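To see what the guard does in isolation, here is a minimal standalone sketch (a hypothetical module, not part of the Neptune script):

```python
# Illustrates the __main__ guard: the guarded code runs only when the file
# is executed directly (python demo.py), not when the module is imported.
def main():
    return "running as a script"


if __name__ == "__main__":
    # This branch is skipped on import, so importing the module
    # elsewhere does not trigger a full logging run.
    print(main())
```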
To inspect the logged metadata in the web app:
- In the Neptune project, click on the run to explore all its metadata.
- If you log multiple runs, enable compare mode by toggling the eye icons.