Field types reference#
A field is the location of a piece of metadata in a Neptune object.
When you log metadata, the data type and logging method together determine the resulting field type. The type determines the operations available for the field.
Metadata type | Example | Logging method | Resulting field type
---|---|---|---
Single value | Parameters, final scores, text, time | = / assign() | Float, Integer, Boolean, String, Datetime
Series of values | Metrics, loss, accuracy | append() / extend() | FloatSeries
Series of text | Journal, notes | append() / extend() | StringSeries
Series of files | Image series, predictions | append() / extend() | FileSeries
Single file | Image, plot file, data sample | upload() | File
Set of files | Large number of files | upload_files() | FileSet
Tags | Text tags to annotate runs or assign them to groups | add() | StringSet
Files to be versioned | Dataset, model file | track_files() | Artifact
neptune.types overview#
Neptune field types can be divided into the following categories:

Type | Description
---|---
Float, Integer, Boolean, String, Datetime, File, GitRef, and RunState | Used for a single value of the given type, or a single file.
Series | Used for series of values or files (for example, images). Available types are FloatSeries, StringSeries, and FileSeries.
Artifact | Field type for versioning datasets, models, and other files.
FileSet | Used to hold a larger number of files when access to a single file is rare.
StringSet | Used to interact with a Neptune object's tags.
Handler | Obtained when you access a field path that doesn't exist yet.
Namespace handler | Obtained when you access a field path that is one or more levels higher than an existing field. Used as a shortcut for accessing fields by specifying only a relative path within the namespace.
Simple types#
Field types representing a single floating-point number, integer, Boolean value, text string, or datetime value.
You can use the following general recipe to assign values of these types to a field in a Neptune object:
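A minimal sketch of the recipe (field names and values are illustrative):

```python
import neptune

run = neptune.init_run()

# Assign with "=" or assign(); the field type is inferred from the value
run["params/lr"] = 0.001          # creates a Float field
run["params/max_epochs"] = 10     # creates an Integer field
run["params/use_gpu"] = True      # creates a Boolean field
run["params/optimizer"] = "Adam"  # creates a String field
```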
Similarly, each of these types supports fetching with a simple fetch() method:
import neptune
run = neptune.init_run(with_id="EXISTING-RUN-ID", mode="read-only")
value = run["field_name"].fetch()
Detailed reference for each supported type:
Series fields#
A series field collects a sequence of values into a single field. You create a series with the append()
function. Values are added to a series field iteratively: each append()
call adds a new value to the sequence.
You can also append multiple values in a single call with the extend()
function.
The following Series
types are supported:
Field type | How to create | Where to view
---|---|---
FileSeries | run["field"].append(<file_like_object>) | Images dashboard and image gallery widget
FloatSeries | run["field"].append(<float>) | Experiments table, Charts dashboard, or Chart/Value list widget
StringSeries | run["field"].append(<str>) | Experiments table or Value list widget
Complex types#
Complex field types tend to hold more complicated metadata structures and expose several methods.
See each subsection for details:

- Artifact (created with track_files()). Tracks the version of datasets, models, and other files, without uploading their contents.
- File (created with upload()). Uploads the contents of a single file to Neptune.
- FileSet (created with upload_files()). Uploads the contents of a directory or several files to Neptune.
- StringSet (created when tags or group tags are assigned to the run or model object). Used to interact with the tag set.
Special types#
The following types don't expose any methods or are otherwise not used for manual metadata logging.

- GitRef: contains metadata on the local Git repository.
- Handler: an interim type that's returned when you access an empty field path.
- RunState: contains the information of whether the Neptune object is actively running or not.
- Table: an interim type returned by table-fetching methods.
Artifact#
Field type for holding datasets, models, and other artifacts.
The artifact can refer to either a single file or a collection of files. Examples:
- Dataset in CSV format
- Folder with training images as PNG files
- Model binaries
Artifacts are especially useful if the files are large and you want to version and compare them between runs, but don't want them taking up storage space in Neptune.
Neptune tracks the following artifact metadata:
- Version (MD5 hash)
- Location (path)
- Size
- Folder structure
- Last modified time
How the hash is calculated
The following components are used to calculate the hash of each artifact:
- Hash of contents
- Last modification time
- Size
- Storage type (local or S3)
- Path in artifact
- Original location (URI for S3 or path to local file)
When anything in the above changes, the hash changes as well. This includes adding a file to the artifact.
Assignment: =#
Convenience alias for assign().
assign()#
Assigns the provided artifact object to the field.
Parameters
Name | Type | Default | Description
---|---|---|---
value | ArtifactVal | - | Object obtained with fetch() from an existing artifact field.
wait | Boolean, optional | False | By default, logging calls and other Neptune operations are periodically synchronized with the server in the background. If True, Neptune first waits to complete any queued operations, then executes the call and continues script execution. See Connection modes.
Examples
import neptune
run = neptune.init_run()
run["datasets/images"].track_files("./images")
run["datasets/images-copy"] = run["datasets/images"].fetch()
download()#
Downloads all the files that are referenced in the artifact field.
Neptune looks for each file at the path which was logged originally. If the artifact points to an object stored in S3 or GCS, it downloads the object to the local system directly from the remote storage.
Note for Windows
- On Windows, this method creates symbolic links to the referenced files.
- In order to copy the file references in your local system, you may need to run your terminal as administrator.
Parameters
Name | Type | Default | Description
---|---|---|---
destination | str, optional | None | Path to where the files should be downloaded. If None, the files are downloaded to the current working directory.
progress_bar | bool or Type[ProgressBarCallback], optional | None | Set to False to disable the download progress bar, or pass a type of ProgressBarCallback to use your own progress bar. If set to None or True, the default tqdm-based progress bar will be used.
Examples
import neptune
run = neptune.init_run(
with_id="NER-2", # (1)!
mode="read-only",
)
run["artifacts/images"].download(destination="datasets/train/images")
- Neptune ID of a run that has an artifact stored at the field path
artifacts/images
fetch()#
Fetches the artifact at the accessed field path.
You should use fetch()
only to copy the artifact object.
Returns
ArtifactVal
object stored in the field.
Examples
import neptune
run = neptune.init_run() # Neptune ID becomes NER-2
run["datasets/images"].track_files("...")
old_run = neptune.init_run(
with_id="NER-2", # ID of the previous Neptune run which has the artifact
mode="read-only",
)
new_run = neptune.init_run()
new_run["datasets/images-copy"] = old_run["datasets/images"].fetch()
fetch_files_list()#
Fetches a list of artifact files.
Returns
List of ArtifactFileData
objects for all the files referenced in the artifacts.
You can use the following fields of the ArtifactFileData object:

Name | Type | Description
---|---|---
"file_hash" | str | Hash of the file
"file_path" | str | Path of the file, relative to the root of the virtual artifact directory
"size" | int | Size of the file (kB)
"metadata" | dict | Dictionary with metadata entries. As the example below shows, these include "file_path" (the original location) and "last_modified".
Examples
>>> import neptune
>>> run = neptune.init_run(with_id="NER-2", mode="read-only") # (1)!
>>> artifact_list = run["artifacts/images"].fetch_files_list()
>>> artifact_list[0].file_hash
'b34affafe1ce65908c9b34631aa2986fa8d0a6a0'
>>> artifact_list[0].file_path
'val.csv'
>>> artifact_list[0].metadata["last_modified"]
'2022-04-22 09:53:53'
>>> artifact_list[0].metadata["file_path"]
'file://C:/Users/Jackie/repos/llm-project/datasets/val.csv'
- Neptune ID of a run that has an artifact stored at the field path
artifacts/images
fetch_hash()#
Fetches the hash of the artifact.
Returns
str
: Hash of the Neptune artifact.
Examples
>>> import neptune
>>> run = neptune.init_run(
...     with_id="NER-2",  # (1)!
...     mode="read-only",
... )
>>> run["artifacts/images"].fetch_hash()
'9a113b799082e5fd628be178bedd52837ba12eb9fdec24e9175babd0f6f9d28s'
- Neptune ID of a run that has an artifact stored at the field path
artifacts/images
track_files()#
Saves the following artifact metadata to Neptune:
- Version (MD5 hash)
- Location (path)
- Size
- Folder structure
- Contents
Works for files, folders, or S3-compatible storage.
Parameters
Name | Type | Default | Description
---|---|---|---
path | str | - | File path or S3-compatible path to the file or folder that you want to track.
destination | str, optional | None | Location in the Neptune artifact namespace where you want to log the metadata.
wait | Boolean, optional | False | By default, logging calls and other Neptune operations are periodically synchronized with the server in the background. If True, Neptune first waits to complete any queued operations, then executes the call and continues script execution. See Connection modes.
Examples
For more detailed examples, see Tracking artifacts.
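A minimal sketch, assuming illustrative local paths and bucket name:

```python
import neptune

run = neptune.init_run()

# Track a single file
run["datasets/train"].track_files("data/train.csv")

# Track the contents of a folder
run["datasets/images"].track_files("data/images/")

# Track an object in S3-compatible storage
run["datasets/raw"].track_files("s3://my-bucket/datasets/")
```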
Boolean#
Assignment: = or assign()#
Assigns the provided Boolean value to the field.
Parameters
Name | Type | Default | Description
---|---|---|---
value | Boolean | - | Value to assign to the field.
wait | Boolean, optional | False | By default, logging calls and other Neptune operations are periodically synchronized with the server in the background. If True, Neptune first waits to complete any queued operations, then executes the call and continues script execution. See Connection modes.
Examples
run = neptune.init_run()
# You can use the Python assign operator (=)
run["params/use_preprocessing"] = True
# as well as the assign() method
run["params/use_preprocessing"].assign(True)
fetch()#
Returns the field value from the Neptune servers.
Example
# Initialize existing run with ID "NER-12"
run = neptune.init_run(with_id="NER-12", mode="read-only")
# Fetch use_preprocessing parameter
use_preprocessing = run["params/use_preprocessing"].fetch()
Datetime#
Assignment: = or assign()#
Assigns the provided datetime value to the field.
You can use the Python datetime library to express and assign dates and times.
Parameters
Name | Type | Default | Description
---|---|---|---
value | datetime object | - | Value to assign to the field.
wait | Boolean, optional | False | By default, logging calls and other Neptune operations are periodically synchronized with the server in the background. If True, Neptune first waits to complete any queued operations, then executes the call and continues script execution. See Connection modes.
Examples
Log the exact end of training:
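For example (the train/end field path matches the fetch() example below):

```python
from datetime import datetime

import neptune

run = neptune.init_run()

# ... training happens here ...

# Log the exact end of training
run["train/end"] = datetime.now()
```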
fetch()#
Returns the field value from the Neptune servers.
Example
# Initialize existing run with ID "NER-12"
run = neptune.init_run(with_id="NER-12", mode="read-only")
# Fetch the time when the training ended
eval_start = run["train/end"].fetch()
File#
Holds a single file of any type.
See also: Upload files
upload()#
Uploads the provided file under the specified field path.
Parameters
Name | Type | Default | Description
---|---|---|---
value | str or File | - | Path of the file to upload, or File value object.
wait | Boolean, optional | False | By default, logging calls and other Neptune operations are periodically synchronized with the server in the background. If True, Neptune first waits to complete any queued operations, then executes the call and continues script execution. See Connection modes.
Examples
import neptune
run = neptune.init_run()
# Upload example data
run["dataset/data_sample"].upload("sample_data.csv")
Both the content and the extension are stored. When downloaded, by default, the filename is a combination of the path and extension:
Many types are implicitly converted to File on the fly. For example, image-like objects such as Matplotlib figures:
- The gcf() function returns the Matplotlib figure object.
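A sketch of the implicit conversion, assuming Matplotlib is installed (the field path is illustrative):

```python
import matplotlib.pyplot as plt

import neptune

run = neptune.init_run()

plt.plot([0, 1, 2], [0, 1, 4])

# The figure object is converted to a File on the fly
run["visuals/sample_plot"].upload(plt.gcf())
```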
Assignment: =#
Convenience alias for assign().
assign()#
You can upload a file by assigning the File
value object to the specified field path.
Parameters
Name | Type | Default | Description
---|---|---|---
value | File | - | File value object.
wait | Boolean, optional | False | By default, logging calls and other Neptune operations are periodically synchronized with the server in the background. If True, Neptune first waits to complete any queued operations, then executes the call and continues script execution. See Connection modes.
Example
import neptune
from neptune.types import File
run = neptune.init_run()
run["dataset/data_sample"] = File("sample_data.csv")
download()#
Downloads the stored file to the working directory or specified destination.
Parameters
Name | Type | Default | Description
---|---|---|---
destination | str, optional | None | Path to where the file should be downloaded. If None, the file is downloaded to the working directory.
progress_bar | bool or Type[ProgressBarCallback], optional | None | Set to False to disable the download progress bar, or pass a type of ProgressBarCallback to use your own progress bar. If set to None or True, the default tqdm-based progress bar will be used.
Examples
run["trained_model"].download(destination="path/to/destination")
Unless disabled, you can fetch uncommitted changes from the "source_code/diff"
field.
This downloads the diff.patch
file into the working directory, which you can then apply as needed.
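A sketch of fetching the diff from an existing run (the run ID is illustrative):

```python
import neptune

run = neptune.init_run(with_id="NER-12", mode="read-only")

# Saves diff.patch to the working directory
run["source_code/diff"].download()
```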
fetch_extension()#
A programmatic way to find out the extension of a stored file.
Returns
str
with the extension of the stored file.
Examples
import neptune
from neptune.types import File
run = neptune.init_run()
# Upload model as a File field
run["model/last"].upload("model.pt")
# Check extension of the uploaded File
ext = run["model/last"].fetch_extension()
ext == "pt" # True
as_image()#
Static method for converting image objects or image-like objects to an image File
value object.
This way you can upload figures, arrays, and tensors as static images.
Name | Type | Default | Description
---|---|---|---
image | - | - | Image-like object to be converted. Supported: PyTorch tensors, TensorFlow/Keras tensors, NumPy arrays, PIL images, Matplotlib and Seaborn figures.
autoscale | bool | True | Whether Neptune should try to scale image pixel values to better render them in the web app. Scaling can distort images if their pixels lie outside the [0.0, 1.0] or [0, 255] range. To disable auto-scaling, set the argument to False.
Returns
File
value object with converted image.
Examples
run["train/prediction_example"].upload(File.as_image(numpy_array))
pil_file = File.as_image(pil_image)
run["dataset/data_sample/img1"].upload(pil_file)
You can also upload a PIL image without explicit conversion:
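For example, assuming an illustrative image path:

```python
from PIL import Image

import neptune

run = neptune.init_run()

pil_image = Image.open("data/sample.png")

# PIL images are converted to a File value object implicitly
run["dataset/data_sample/img2"].upload(pil_image)
```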
as_html()#
Converts an object to an HTML File
value object.
This way you can upload interactive charts or data frames to explore them in the Neptune app.
Name | Description
---|---
chart | Object to be converted. Supported:
from scikitplot.metrics import plot_roc
import matplotlib.pyplot as plt
import neptune
from neptune.types import File
fig, ax = plt.subplots(figsize=(16, 12))
plot_roc(y, y_pred, ax=ax)
run = neptune.init_run()
run["roc_curve"].upload(File.as_html(fig))
as_pickle()#
Pickles a Python object and stores it in a File
value object.
This way you can upload any Python object for future use.
Name | Description
---|---
obj | Object to be converted (any Python object that supports pickling).
Returns
File
value object with the pickled object.
Examples
import neptune
from neptune.types import File
run = neptune.init_run()
run["results/pickled_model"].upload(File.as_pickle(trained_model))
from_content()#
Factory method for creating File
value objects directly from binary and text content.
UTF-8 encoding is used for text content.
Parameters
Name | Type | Default | Description
---|---|---|---
content | str or bytes | - | Text or binary content to be stored in the File value object.
extension | str, optional | None | Extension of the created file that will be used for interpreting the type of content for visualization. If None, it will be bin for binary content and txt for text content.
Returns
File
value object created from the content.
Example
import neptune
from neptune.types import File
run = neptune.init_run()
run["large_text_as_file"].upload(File.from_content(variable_with_my_text))
html_str = (
"<button type='button', style='background-color:#005879; width:400px; "
"height:400px; font-size:30px'> <a style='color: #ccc', "
"href='https://docs.neptune.ai'>Take me back to the docs!<a> </button>"
)
with open("sample.html", "w") as f:
    f.write(html_str)
html_obj = File.from_content(html_str, extension="html")
run["html_content"].upload(html_obj)
from_path()#
Creates a File value object from a given path.
Equivalent to File(path)
, but you can specify the extension separately.
Parameters
Name | Type | Default | Description
---|---|---|---
path | str or bytes | - | Path of the file to be stored in the File value object.
extension | str, optional | None | Extension of the file, if not included in the path argument.
Returns
File
value object created based on the path.
Example
import neptune
from neptune.types import File
run = neptune.init_run()
run["sample_text"].upload(File.from_path(path="data/test/sample", extension="txt"))
from_stream()#
Factory method for creating File
value objects directly from binary and text streams.
UTF-8 encoding is used for text content.
Parameters
Name | Type | Default | Description
---|---|---|---
stream | IOBase | - | Stream to be converted.
seek | int, optional | 0 | Change the stream position to the given byte offset. For details, see the IOBase documentation.
extension | str, optional | None | Extension of the created file that will be used for interpreting the type of content for visualization. If None, it will be bin for binary content and txt for text content.
Returns
File
value object created from the stream.
Example
import neptune
from neptune.types import File
run = neptune.init_run()
with open("image.jpg", "rb") as f:
    image = File.from_stream(f, extension="jpg")

run["upload_image_from_stream"].upload(image)
FileSeries#
A field containing a series of image files.
append()#
Logs the provided file to the end of the series.
append() replaces log()

As of neptune-client 0.16.14, append() and extend() are the preferred methods for logging series of values. You can upgrade your installation with pip install -U neptune-client or continue using log().
Parameters
Name | Type | Default | Description
---|---|---|---
value | File value object | - | The file to be appended.
step | float or int, optional | None | Index of the log entry being appended. Must be strictly increasing.
timestamp | float or int, optional | None | Time index of the log entry being appended, in Unix time format. If None, the current time (obtained with time.time()) is used.
name | str, optional | None | Name of the logged file.
description | str, optional | None | Short description of the logged file.
wait | Boolean, optional | False | By default, logging calls and other Neptune operations are periodically synchronized with the server in the background. If True, Neptune first waits to complete any queued operations, then executes the call and continues script execution. See Connection modes.
Examples
Append an image file to a FileSeries
field:
import neptune
from neptune.types import File
run = neptune.init_run()
run["train/prediction_example"].append(File(path_to_file))
Log a Matplotlib figure object:
Convert a NumPy array to File
value object and log it:
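Sketches of both cases (figure contents and array values are illustrative):

```python
import matplotlib.pyplot as plt
import numpy as np

import neptune
from neptune.types import File

run = neptune.init_run()

# Log a Matplotlib figure object
fig = plt.figure()
plt.plot([0.9, 0.6, 0.4])
run["train/fig_series"].append(fig)

# Convert a NumPy array to a File value object and log it
arr = np.random.rand(64, 64, 3)
run["train/array_series"].append(File.as_image(arr))
```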
You can also log a name and description for the image:
for plt_image, class_name in data_sample:
    run["data/sample"].append(plt_image, name=class_name)

for image, y_pred in zip(x_test_sample, y_test_sample_pred):
    description = "\n".join(
        [f"class {i}: {pred}" for i, pred in enumerate(y_pred)]
    )
    run["train/predictions"].append(image, description=description)
Logging with custom step:
import matplotlib.pyplot as plt

import neptune

run = neptune.init_run()

for epoch in range(100):
    plt_fig = get_histogram()
    run["train/distribution"].append(
        plt_fig,
        step=epoch,
    )
extend()#
Appends the provided collection of File
value objects to the series.
Parameters
Name | Type | Default | Description
---|---|---|---
values | collection of File value objects | - | The collection or dictionary of files to be appended to the series field.
steps | collection of float or collection of int, optional | None | Indices of the values being appended. Must be strictly increasing.
timestamps | collection of float or collection of int, optional | None | Time indices of the values being appended, in Unix time format. If None, the current time (obtained with time.time()) is used.
wait | Boolean, optional | False | By default, logging calls and other Neptune operations are periodically synchronized with the server in the background. If True, Neptune first waits to complete any queued operations, then executes the call and continues script execution. See Connection modes.
log()#
See append() (append one value at a time) or extend() (append a collection of values).
download()#
Downloads all the files stored in the series and saves them locally.
Parameters
Name | Type | Default | Description
---|---|---|---
destination | str, optional | None | Path to where the files should be downloaded. If None, the files are downloaded to the working directory.
progress_bar | bool or Type[ProgressBarCallback], optional | None | Set to False to disable the download progress bar, or pass a type of ProgressBarCallback to use your own progress bar. If set to None or True, the default tqdm-based progress bar will be used.
download_last()#
Downloads the last file stored in the series and saves it locally.
Parameters
Name | Type | Default | Description
---|---|---|---
destination | str, optional | None | Path to where the file should be downloaded. If None, the file is downloaded to the working directory.
FileSet#
Field type used to hold a larger number of files when access to a single file is rare.
Best used for items that can be easily browsed through the web interface and are typically accessed as a whole, such as a folder of source files or image examples.
See also: Upload files
upload_files()#
Uploads the provided file or files and stores them under the FileSet
field.
Useful when you don't require advanced display options for individual files.
Parameters
Name | Type | Default | Description
---|---|---|---
globs | str or collection of str | - | Path or paths to the files to be uploaded.
wait | Boolean, optional | False | By default, logging calls and other Neptune operations are periodically synchronized with the server in the background. If True, Neptune first waits to complete any queued operations, then executes the call and continues script execution. See Connection modes.
Example
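A minimal sketch (paths and glob patterns are illustrative):

```python
import neptune

run = neptune.init_run()

# Upload a single file
run["config"].upload_files("config.yaml")

# Upload all Python scripts in the working directory (glob pattern)
run["code_snapshot"].upload_files("*.py")

# Upload several paths at once
run["data_samples"].upload_files(["data/sample_v1.csv", "data/sample_v2.csv"])
```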
delete_files()#
Deletes the specified files from the FileSet
field.
Parameters
Name | Type | Default | Description
---|---|---|---
paths | str or collection of str | - | Path or paths to files or folders to be deleted. Note that these are paths relative to the FileSet itself.
wait | Boolean, optional | False | By default, logging calls and other Neptune operations are periodically synchronized with the server in the background. If True, Neptune first waits to complete any queued operations, then executes the call and continues script execution. See Connection modes.
Example
Deleting a file from a FileSet
field of an existing run:
>>> import neptune
>>> run = neptune.init_run(
...     with_id="CLAS-14",
...     capture_hardware_metrics=False,
...     capture_stderr=False,
...     capture_stdout=False,
...     capture_traceback=False,
...     git_ref=False,
... )
>>> run["datasets_folder"].delete_files("datasets/data_sample_v1.csv")
download()#
Downloads all the files stored in the FileSet
field in the form of a ZIP archive.
Parameters
Name | Type | Default | Description
---|---|---|---
destination | str, optional | None | Path to where the files should be downloaded. If None, the files are downloaded to the working directory.
progress_bar | bool or Type[ProgressBarCallback], optional | None | Set to False to disable the download progress bar, or pass a type of ProgressBarCallback to use your own progress bar. If set to None or True, the default tqdm-based progress bar will be used.
list_fileset_files()#
Fetches metadata about the file set.
If the top-level artifact of the field is a directory, only metadata about this directory is returned.
You can use the path
argument to list metadata about files contained inside the directory or subdirectories.
Parameters
Name | Type | Default | Description
---|---|---|---
path | str, optional | None | Path to a nested directory, to get metadata about files contained within the directory.
Returns
List of FileEntry items with the following metadata:

- name: Name of the file or directory
- size: Size of the file in bytes
- mtime: Last modification time
- file_type: Whether it's a file or directory
Examples
In this example, we're using a glob pattern to upload a set of Python script files under one field.
>>> import neptune
>>> run = neptune.init_run()
>>> run["scripts"].upload_files("*.py")
>>> run["scripts"].list_fileset_files()
[FileEntry(name="script.py", size=13935, mtime=datetime.datetime(2023, 8, 8, 10,
53, 7, 387000, tzinfo=tzutc()), file_type="file"), FileEntry(name=
"another_script.py", size=13935, mtime=datetime.datetime(2023, 8, 8, 10, 53, 16,
387000, tzinfo=tzutc()), file_type="file"), ...]
In the next example, we're uploading a directory called "data" which has the following structure:
We'd log the folder with the following:
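A minimal sketch, assuming the "data" folder sits in the working directory:

```python
import neptune

run = neptune.init_run()

# Upload the whole "data" directory as a FileSet
run["data"].upload_files("data")
```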
Then we can access the metadata of the FileSet and its nested directories as follows:
>>> run["data"].list_fileset_files()
[FileEntry(name='data', size=None, mtime=datetime.datetime(2023, 8, 17, 10, 31, 54,
278601, tzinfo=tzutc()), file_type='directory')]
>>> run["data"].list_fileset_files(path="data")
[FileEntry(name='datasets', size=None, mtime=datetime.datetime(2023, 8, 17, 10, 34,
6, 777017, tzinfo=tzutc()), file_type='directory'), FileEntry(name='sample.csv',
size=215, mtime=datetime.datetime(2023, 8, 17, 10, 31, 26, 402000, tzinfo=tzutc()),
file_type='file')]
>>> run["data"].list_fileset_files(path="data/datasets")
[FileEntry(name='dataset_v2.csv', size=215, mtime=datetime.datetime(2023, 8, 17, 10,
31, 26, 491000, tzinfo=tzutc()), file_type='file'), FileEntry(name='dataset_v3.csv',
size=215, mtime=datetime.datetime(2023, 8, 17, 10, 31, 26, 338000, tzinfo=tzutc()),
file_type='file'), ...]
Float#
Assignment: = or assign()#
Assigns the provided floating point number to the field.
Parameters
Name | Type | Default | Description
---|---|---|---
value | float | - | Value to assign to the field.
wait | Boolean, optional | False | By default, logging calls and other Neptune operations are periodically synchronized with the server in the background. If True, Neptune first waits to complete any queued operations, then executes the call and continues script execution. See Connection modes.
Examples
run = neptune.init_run()
# You can use the Python assign operator (=)
run["params/lr"] = 0.8
# as well as the assign() method
run["params/lr"].assign(0.8)
fetch()#
Returns the field value from the Neptune servers.
Example
# Initialize existing run with ID "NER-12"
run = neptune.init_run(with_id="NER-12", mode="read-only")
# Fetch highest accuracy so far
top_acc = run["train/acc/highest"].fetch()
FloatSeries#
Field containing a series of numbers, for example:
- Training metrics
- Change of performance of the model in production
You can index the series by step or by time.
append()#
Appends the provided value to the series.
append() replaces log()

As of neptune-client 0.16.14, append() and extend() are the preferred methods for logging series of values. You can upgrade your installation with pip install -U neptune-client or continue using log().
Parameters
Name | Type | Default | Description
---|---|---|---
value | float or int | - | The value to be added to the series field.
step | float or int, optional | None | Index of the log entry being appended. Must be strictly increasing. Note: This is effectively how you set custom x values for a chart.
timestamp | float or int, optional | None | Time index of the log entry being appended, in Unix time format. If None, the current time (obtained with time.time()) is used.
wait | Boolean, optional | False | By default, logging calls and other Neptune operations are periodically synchronized with the server in the background. If True, Neptune first waits to complete any queued operations, then executes the call and continues script execution. See Connection modes.
Examples
run = neptune.init_run()
for epoch in range(parameters["n_epochs"]):
    ...  # My training loop
    run["train/epoch/loss"].append(loss)
    run["train/epoch/accuracy"].append(acc)
Setting custom step values:
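A sketch with illustrative checkpoint numbers as custom steps:

```python
import neptune

run = neptune.init_run()

for checkpoint, acc in [(100, 0.71), (200, 0.78), (300, 0.81)]:
    # step sets the x-axis value for the chart
    run["eval/accuracy"].append(acc, step=checkpoint)
```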
You can also append values to multiple series at once by passing a dictionary of values. Pass the field name as the key.
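A sketch with illustrative metric values:

```python
import neptune

run = neptune.init_run()

# Appends to the series fields "metrics/loss" and "metrics/acc" at once
run["metrics"].append({"loss": 0.23, "acc": 0.91})
```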
extend()#
Appends the provided collection of values to the series.
Parameters
Name | Type | Default | Description
---|---|---|---
values | collection of float or collection of int | - | The collection or dictionary of values to be appended to the series field.
steps | collection of float or collection of int, optional | None | Indices of the values being appended. Must be strictly increasing. Note: This is effectively how you set custom x values for a chart.
timestamps | collection of float or collection of int, optional | None | Time indices of the values being appended, in Unix time format. If None, the current time (obtained with time.time()) is used.
wait | Boolean, optional | False | By default, logging calls and other Neptune operations are periodically synchronized with the server in the background. If True, Neptune first waits to complete any queued operations, then executes the call and continues script execution. See Connection modes.
Example
The following example reads a CSV file into a pandas DataFrame and extracts the values to create a Neptune series field.
import pandas

import neptune

df = pandas.read_csv("time_series.csv")
ys = df["value"]
ts = df["timestamp"]

run = neptune.init_run()
run["data/example_series"].extend(ys, timestamps=ts)
log()#
See append() (append one value at a time) or extend() (append a collection of values).
fetch_last()#
Fetches the last value stored in the series.
Returns
Last logged float
value.
Example
>>> import neptune
>>> run = neptune.init_run(with_id="CLS-15", mode="read-only")
>>> run["train/loss"].fetch_last()
0.15250000000000002
fetch_values()#
Fetches all values stored in the series.
Parameters
Name | Type | Default | Description
---|---|---|---
include_timestamp | Boolean, optional | True | Whether the fetched data should include the timestamp field.
progress_bar | bool or Type[ProgressBarCallback], optional | None | Set to False to disable the download progress bar, or pass a type of ProgressBarCallback to use your own progress bar. If set to None or True, the default tqdm-based progress bar will be used.
Returns
pandas.DataFrame
containing all the values and their indexes stored in the series field.
Example
>>> import neptune
>>> run = neptune.init_run(with_id="CLS-15", mode="read-only")
>>> run["train/loss"].fetch_values()
step value timestamp
0 0.0 0.00 2022-07-08 12:30:30.087
1 1.0 0.43 2022-07-08 12:30:30.087
2 2.0 0.86 2022-07-08 12:30:30.087
3 3.0 1.29 2022-07-08 12:30:30.087
...
GitRef
#
Contains information about the Git repository at the time of starting a tracked run.
The GitRef
type doesn't expose any methods, but you can view the source_code/git
field in the Neptune web app (
Source code → Git).
Parameters
Name | Description |
---|---|
repository_path |
Custom path where Neptune should look for a Git repository. Neptune looks for a repository in the supplied path as well as its parent directories. If not provided, the path to the script that is currently executed is used. |
Returns
GitRef
value object with the Git information. For details, see Logging Git info.
Example
import neptune
from neptune.types import GitRef
run = neptune.init_run(git_ref=GitRef(repository_path="/path/to/repo"))
DISABLED
#
Constant that disables Git tracking for your run.
Tip
You can also disable Git tracking by setting the git_ref
argument to False
when initializing the run.
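For example, a minimal sketch (assuming Neptune is otherwise configured for your project):

```python
import neptune

# git_ref=False turns off Git repository tracking for this run
run = neptune.init_run(git_ref=False)
```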
Integer
#
Assignment: =
or assign()
#
Assigns the provided integer to the field.
Parameters
Name | Type | Default | Description |
---|---|---|---|
value |
int |
- | Value to assign to the field. |
wait |
Boolean , optional |
False |
By default, logging calls and other Neptune operations are periodically synchronized with the server in the background. If True , Neptune first waits to complete any queued operations, then executes the call and continues script execution. See Connection modes. |
Examples
run = neptune.init_run()
# You can use the Python assign operator (=)
run["params/max_epochs"] = 10
# as well as the assign() method
run["params/max_epochs"].assign(10)
fetch()
#
Returns the field value from the Neptune servers.
Example
# Initialize existing run with ID "NER-12"
run = neptune.init_run(with_id="NER-12", mode="read-only")
# Fetch epoch count
epoch = run["train/epoch"].fetch()
RunState
#
Contains the state (Active
/Inactive
) of a Neptune run.
- You cannot manually create or modify RunState
fields.
- The RunState
type doesn't expose any methods, but you can:
    - view the sys/state
field in the Neptune web app
    - query runs based on state with project.fetch_runs_table(state="<state>")
Related
Learn more: System namespace: State
String
#
Assignment: =
or assign()
#
Assigns the provided string to the field.
Parameters
Name | Type | Default | Description |
---|---|---|---|
value |
str |
- | Value to assign to the field. |
wait |
Boolean , optional |
False |
By default, logging calls and other Neptune operations are periodically synchronized with the server in the background. If True , Neptune first waits to complete any queued operations, then executes the call and continues script execution. See Connection modes. |
Examples
run = neptune.init_run()
# You can use the Python assign operator (=)
run["params/optimizer"] = "Adam"
# as well as the assign() method
run["params/optimizer"].assign("Adam")
Note
Due to a technical limitation, only 1000 characters are indexed in String fields. This means that when searching the experiments table, only the first 1000 characters are considered.
fetch()
#
Returns the field value from the Neptune servers.
Example
# Initialize existing run with ID "NER-12"
run = neptune.init_run(with_id="NER-12", mode="read-only")
# Fetch optimizer parameter
optimizer = run["params/optimizer"].fetch()
StringSeries
#
Field containing series of text values.
Info
A single text log entry is limited to 1000 characters, but there is no limit to the number of entries in the series.
If the logged text exceeds this character limit, the entry will be truncated to match the limit.
append()
#
Appends the provided value to the end of the series.
append()
replaces log()
As of neptune-client 0.16.14
, append()
and extend()
are the preferred methods for logging series of values.
You can upgrade your installation with pip install -U neptune-client
or continue using log()
.
Parameters
Name | Type | Default | Description |
---|---|---|---|
value |
str |
- | The value to be logged. |
step |
float , int , optional |
None |
Index of the log entry being appended. Must be strictly increasing. Note: This is effectively how you set custom x values for a chart. |
timestamp |
float , int , optional |
None |
Time index of the log entry being appended, in Unix time format. If None , the current time (obtained with time.time() ) is used. |
wait |
Boolean , optional |
False |
By default, logging calls and other Neptune operations are periodically synchronized with the server in the background. If True , Neptune first waits to complete any queued operations, then executes the call and continues script execution. See Connection modes. |
Example
Iteratively log a series of short text entries (max 1000 characters):
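A minimal sketch of such a loop (the field path and messages are illustrative):

```python
import neptune

run = neptune.init_run()

for epoch in range(5):
    # Each call appends one entry to the "train/logs" StringSeries field
    run["train/logs"].append(f"Epoch {epoch}: training step completed")
```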
extend()
#
Appends the provided collection of values to the series.
Parameters
Name | Type | Default | Description |
---|---|---|---|
values |
collection of str |
- | The collection or dictionary of strings to be appended to the series field. |
steps |
collection of float or collection of int (optional) |
None |
Indices of the values being appended. Must be strictly increasing. Note: This is effectively how you set custom x values for a chart. |
timestamps |
collection of float or collection of int (optional) |
None |
Time indices of the values being appended, in Unix time format. If None , the current time (obtained with time.time() ) is used. |
wait |
Boolean , optional |
False |
By default, logging calls and other Neptune operations are periodically synchronized with the server in the background. If True , Neptune first waits to complete any queued operations, then executes the call and continues script execution. See Connection modes. |
log()
#
See append()
(append one value at a time) or extend()
(append a collection of values).
fetch_last()
#
Fetches the last value stored in the series.
Returns
Last logged str
value.
Example
import neptune
run = neptune.init_run(with_id="CLS-15", mode="read-only")
last_token = run["train/tokens"].fetch_last()
fetch_values()
#
Fetches all values stored in the series.
Parameters
Name | Type | Default | Description |
---|---|---|---|
include_timestamp |
Boolean , optional |
True |
Whether the fetched data should include the timestamp field. |
progress_bar |
bool or Type[ProgressBarCallback] , optional |
None |
Set to False to disable the download progress bar, or pass a type of ProgressBarCallback to use your own progress bar. If set to None or True , the default tqdm-based progress bar will be used. |
Returns
pandas.DataFrame
containing all the values and their indexes stored in the series field.
Example
import neptune
run = neptune.init_run(with_id="CLS-15", mode="read-only")
tokens = run["train/tokens"].fetch_values()
StringSet
#
A field containing an unorganized set of strings.
The supported StringSet fields are sys/tags
and sys/group_tags
.
You can't manually create or modify other StringSet fields.
add()
#
Adds the provided text string or strings to the set field.
Parameters
Name | Type | Default | Description |
---|---|---|---|
values |
str or collection of str |
- | Tag or tags to be applied. |
wait |
Boolean , optional |
False |
By default, logging calls and other Neptune operations are periodically synchronized with the server in the background. If True , Neptune first waits to complete any queued operations, then executes the call and continues script execution. See Connection modes. |
Examples
import neptune
run = neptune.init_run(
with_id="CLS-8", # (1)!
capture_hardware_metrics=False,
capture_stderr=False,
capture_stdout=False,
capture_traceback=False,
git_ref=False,
)
run["sys/tags"].add(["maskRCNN", "finetune"])
remove()
#
Removes the provided tag or tags from the set.
Parameters
Name | Type | Default | Description |
---|---|---|---|
values |
str or collection of str |
- | Tag or tags to be removed. |
wait |
Boolean , optional |
False |
By default, logging calls and other Neptune operations are periodically synchronized with the server in the background. If True , Neptune first waits to complete any queued operations, then executes the call and continues script execution. See Connection modes. |
Examples
import neptune
run = neptune.init_run(
with_id="CLS-8", # (1)!
capture_hardware_metrics=False,
capture_stderr=False,
capture_stdout=False,
capture_traceback=False,
git_ref=False,
)
run["sys/tags"].remove(["finetune"])
- Connect to an existing run and disable auto-logging of system metrics
clear()
#
Removes all tags from the set.
Parameters
Name | Type | Default | Description |
---|---|---|---|
wait |
Boolean , optional |
False |
By default, logging calls and other Neptune operations are periodically synchronized with the server in the background. If True , Neptune first waits to complete any queued operations, then executes the call and continues script execution. See Connection modes. |
Examples
import neptune
run = neptune.init_run(
with_id="CLS-8", # (1)!
capture_hardware_metrics=False,
capture_stderr=False,
capture_stdout=False,
capture_traceback=False,
git_ref=False,
)
run["sys/tags"].clear()
- Connect to an existing run and disable auto-logging of system metrics
fetch()
#
Fetches all tags from the set.
Returns
set
of str
with an object's tags.
Example
import neptune
run = neptune.init_run(with_id="NLI-8", mode="read-only") # (1)!
run_tags = run["sys/group_tags"].fetch()
if "maskRCNN" in run_tags:
print_analysis()
- Connect to an existing run
Table
#
An interim object containing the metadata of fetched objects. It's returned by the table-fetching methods: fetch_runs_table()
, fetch_models_table()
, and fetch_model_versions_table()
.
To access the data, you need to convert it to a pandas DataFrame by invoking to_pandas()
.
Example
Fetch an experiments table and convert it to a DataFrame:
import neptune
project = neptune.init_project(project="ml-team/nli", mode="read-only")
runs_table = project.fetch_runs_table() # a Table is returned
runs_table_df = runs_table.to_pandas()
Now you can operate on the table like a DataFrame.
to_pandas()
#
Converts a Table
object to a pandas DataFrame.
Returns
pandas.DataFrame
with the metadata of the objects contained in the table.
Example
>>> import neptune
>>> project = neptune.init_project(..., mode="read-only")
[neptune] [info ] Neptune initialized...
>>> runs_df = project.fetch_runs_table().to_pandas()
>>> print(runs_df)
sys/creation_time sys/id ... val/acc ...
0 2022-08-26 05:19:54.712000+00:00 CLS-12 ... 0.98 ...
1 2022-08-26 05:19:17.197000+00:00 CLS-11 ... 0.53 ...
2 2022-08-26 05:19:01.999000+00:00 CLS-10 ... 0.19 ...
3 2022-08-26 05:18:42.380000+00:00 CLS-9 ... 0.35 ...
Handler
#
When you access a path that doesn't exist yet, you obtain a Handler
object.
import neptune
run = neptune.init_run()
handler = run["train/batch/acc"]
# no such field exists in the run yet, so a Handler object is returned
Think of it as a wildcard that can become any type once you invoke a specific logging method on it. If you invoke track_files()
, it becomes an Artifact
field; if you invoke append(float)
, it becomes a FloatSeries
.
Note
A Handler
object can also become a namespace handler if you create a field at a lower hierarchy level.
The Handler
object exposes:
- All logging methods – such as
assign()
,append()
,upload()
, andupload_files()
- Namespace handler methods
Kedro note
The Handler
class is located in neptune.handler.Handler
. When setting up nodes.py
, use this type for the neptune_run
parameter of the evaluate_model()
function.
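A sketch of what the function signature can look like (the evaluate_model() arguments other than neptune_run are illustrative; see the Kedro integration guide for the full setup):

```python
from neptune.handler import Handler

def evaluate_model(predictions, neptune_run: Handler):
    # neptune_run acts as a handler scoped to the run,
    # so fields are logged relative to it
    neptune_run["nodes/evaluate_model/accuracy"] = 0.92
```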
Examples
import neptune
run = neptune.init_run()
handler_object = run["train/batch/acc"]
# Returns a Handler, as no such field exists in the run yet
You can use the handler like any other field.
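For instance, invoking a logging method on the handler creates the field and fixes its type (the field path and values are illustrative):

```python
import neptune

run = neptune.init_run()
handler_object = run["train/batch/acc"]  # Handler: field doesn't exist yet

# Appending floats turns the handler into a FloatSeries field
handler_object.append(0.75)
handler_object.append(0.80)
```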
Namespace handler#
An object representing a namespace in the metadata structure of a Neptune object.
You can think of namespaces as folders for organizing your metadata. If you log the fields "params/max_epochs"
and "params/lr"
, they will be grouped under a namespace called "params". In this case, accessing run["params"]
would return a namespace handler object:
import neptune
run = neptune.init_run()
run["params/max_epochs"] = 100
namespace_handler = run["params"]
The namespace handler exposes similar methods as other Neptune objects – however, all field paths are relative to the namespace. This helps organize metadata from different steps into separate namespaces, yet under the same run. For a full guide, see Setting a base namespace.
You can also start by creating a generic Handler
object and turn it into a namespace by organizing metadata inside it:
params_ns = run["params"] # Create a namespace handler
params_ns["max_epochs"] = 20 # Log directly to the namespace
params_ns["batch_size"] = 32 # (1)!
- The result is the same as if the values had been assigned to the full paths within the run.
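In other words, the namespace-relative assignments above are equivalent to logging with full paths:

```python
import neptune

run = neptune.init_run()

# Equivalent to logging through the "params" namespace handler
run["params/max_epochs"] = 20
run["params/batch_size"] = 32
```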
Field lookup: []
#
You can access the field of a namespace handler through a dict-like field lookup: namespace_handler[field_path]
.
This way, you can:
- log metadata and collect everything in a single base namespace (without needing to spell out the full path each time)
- fetch already logged metadata from a particular namespace
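A sketch of both uses (the field paths and values are illustrative):

```python
import neptune

run = neptune.init_run()
eval_ns = run["eval"]  # base namespace handler

# Log metadata relative to the base namespace
eval_ns["accuracy"] = 0.92
eval_ns["f1_score"] = 0.88

# Fetch already logged metadata from the namespace
accuracy = eval_ns["accuracy"].fetch()
```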
Returns
The returned type depends on the field type and whether a field is stored under the given path.
Path | Example | Returns |
---|---|---|
Field exists | - | The returned type matches the type of the field |
Field doesn't exist | - | Handler object |
Path is a namespace that contains a field | - | Namespace handler object |
Examples
import neptune
run = neptune.init_run()
train_ns = run["train"]
train_ns["params/learning_rate"] = 0.3 # (1)!
- Stores
0.3
under the path "train/params/learning_rate" inside the run.
If a nested field holds a value, you can also obtain a namespace handler by accessing any of its containing namespaces.
>>> run.exists("parameters/learning_rate")
True
>>> params_ns = run["parameters"]
>>> params_ns
<Namespace field at "parameters">
Assignment: =
#
Convenience alias for assign()
.
assign()
#
Assign values to multiple fields from a dictionary. For example, you can use this method to quickly log all the parameters of a run.
Remember that the resulting paths will be a combination of the namespace path and the provided relative path.
Example
import neptune
run = neptune.init_run()
# Access params namespace handler
params_ns = run["model/params"]
# Assign additional parameters in batch
PARAMS = {"max_epochs": 20, "optimizer": "Adam"}
params_ns.assign(PARAMS)
# "=" needs to be used in combination with "[]"
params_ns = PARAMS # Doesn't work
run["model/params"] = PARAMS # Works
# This also works with relative paths
model_ns = run["model"]
model_ns["params"] = PARAMS
get_root_object()
#
Returns the root level object of a namespace handler.
For example, if you call the method on the namespace of a run, the Run
object is returned.
Example
import neptune
run = neptune.init_run()
run["workflow/pretraining/params"] = {...}
...
pretraining = run["workflow/pretraining"] # (1)!
pretraining.stop() # Error: pretraining is a namespace, not a run
pretraining_run = pretraining.get_root_object()
pretraining_run.stop() # The root run is stopped
- The namespace "pretraining" is nested under the "workflow" namespace inside the run object. As such, the
pretraining
object is a namespace handler object.