Spike Interface#

Interface with spikeinterface#

add_recording_to_nwbfile(recording: BaseRecording, nwbfile: NWBFile, metadata: dict | None = None, write_as: Literal['raw', 'processed', 'lfp'] = 'raw', es_key: str | None = None, iterator_type: str = 'v2', iterator_options: dict | None = None, iterator_opts: dict | None = None, always_write_timestamps: bool = False)[source]#

Adds traces from recording object as ElectricalSeries to an NWBFile object.

Parameters:
  • recording (SpikeInterfaceRecording) – A recording extractor from spikeinterface

  • nwbfile (NWBFile) – nwb file to which the recording information is to be added

  • metadata (dict, optional) – metadata info for constructing the nwb file. Should be of the format:

    metadata['Ecephys']['ElectricalSeries'] = dict(
        name=my_name,
        description=my_description
    )
    
  • write_as ({‘raw’, ‘processed’, ‘lfp’}, default: ‘raw’) –

    How to save the traces data in the nwb file:
    • ‘raw’: save it in acquisition

    • ‘processed’: save it as FilteredEphys, in a processing module

    • ‘lfp’: save it as LFP, in a processing module

  • es_key (str, optional) – Key in metadata dictionary containing metadata info for the specific electrical series

  • iterator_type ({“v2”, None}, default: ‘v2’) – The type of DataChunkIterator to use. ‘v2’ is the locally developed SpikeInterfaceRecordingDataChunkIterator, which offers full control over chunking. None: write the TimeSeries with no memory chunking.

  • iterator_options (dict, optional) – Dictionary of options for the iterator. See https://hdmf.readthedocs.io/en/stable/hdmf.data_utils.html#hdmf.data_utils.GenericDataChunkIterator for the full list of options.

  • iterator_opts (dict, optional) – Deprecated. Use ‘iterator_options’ instead.

  • always_write_timestamps (bool, default: False) – Set to True to always write timestamps. By default (False), the function checks if the timestamps are uniformly sampled, and if so, stores the data using a regular sampling rate instead of explicit timestamps. If set to True, timestamps will be written explicitly, regardless of whether the sampling rate is uniform.

Notes

Missing keys in an element of metadata[‘Ecephys’][‘ElectrodeGroup’] will be auto-populated with defaults whenever possible.
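As a minimal sketch of the metadata layout described above (the series name and description here are hypothetical placeholders, and the call itself is shown as a comment because it requires live spikeinterface and pynwb objects):

```python
# Sketch of the metadata format documented above; values are hypothetical.
metadata = {
    "Ecephys": {
        "ElectricalSeries": {
            "name": "ElectricalSeries",
            "description": "Raw acquisition traces.",
        }
    }
}

# With a spikeinterface recording and an in-memory NWBFile in hand,
# the call would look like (not executed here):
# add_recording_to_nwbfile(
#     recording=recording,
#     nwbfile=nwbfile,
#     metadata=metadata,
#     es_key="ElectricalSeries",
#     write_as="raw",
# )
```

Note that es_key names the entry under metadata['Ecephys'] to use for the series.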

add_sorting_to_nwbfile(sorting: BaseSorting, nwbfile: NWBFile | None = None, unit_ids: list[str] | list[int] | None = None, property_descriptions: dict | None = None, skip_properties: list[str] | None = None, write_as: Literal['units', 'processing'] = 'units', units_name: str = 'units', units_description: str = 'Autogenerated by neuroconv.', waveform_means: ndarray | None = None, waveform_sds: ndarray | None = None, unit_electrode_indices: list[list[int]] | None = None, null_values_for_properties: dict | None = None)[source]#

Add sorting data (units and their properties) to an NWBFile.

This function serves as a convenience wrapper around add_units_table to match SpikeInterface’s SortingExtractor interface.

Parameters:
  • sorting (BaseSorting) – The SortingExtractor object containing unit data.

  • nwbfile (pynwb.NWBFile, optional) – The NWBFile object to write the unit data into.

  • unit_ids (list of int or str, optional) – The specific unit IDs to write. If None, all units are written.

  • property_descriptions (dict, optional) – Custom descriptions for unit properties. Keys should match property names in sorting, and values will be used as descriptions in the Units table.

  • skip_properties (list of str, optional) – Unit properties to exclude from writing.

  • write_as ({‘units’, ‘processing’}, default: ‘units’) –

    Where to write the unit data:
    • ‘units’: Write to the primary NWBFile.units table.

    • ‘processing’: Write to the processing module (intermediate data).

  • units_name (str, default: ‘units’) – Name of the Units table. Must be ‘units’ if write_as is ‘units’.

  • units_description (str, optional) – Description for the Units table (e.g., sorting method, curation details).

  • waveform_means (np.ndarray, optional) – Waveform mean (template) for each unit. Shape: (num_units, num_samples, num_channels).

  • waveform_sds (np.ndarray, optional) – Waveform standard deviation for each unit. Shape: (num_units, num_samples, num_channels).

  • unit_electrode_indices (list of lists of int, optional) – A list of lists of integers indicating the indices of the electrodes that each unit is associated with. The length of the list must match the number of units in the sorting extractor.

  • null_values_for_properties (dict of str to Any) – A dictionary mapping properties to their respective default values. If a property is not found in this dictionary, a sensible default value based on the type of sample_data will be used.
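A brief sketch of how the property-related arguments fit together (the property names and descriptions are hypothetical; the call is commented out because it needs a live sorting extractor and NWBFile):

```python
# Hypothetical descriptions keyed by unit-property names that would
# exist on the sorting extractor.
property_descriptions = {
    "quality": "Manual curation label for the unit.",
    "firing_rate": "Mean firing rate over the session, in Hz.",
}

# add_sorting_to_nwbfile(
#     sorting=sorting,
#     nwbfile=nwbfile,
#     property_descriptions=property_descriptions,
#     skip_properties=["template"],  # hypothetical property to exclude
#     write_as="units",
# )
```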

add_devices_to_nwbfile(nwbfile: NWBFile, metadata: DeepDict | None = None)[source]#

Add device information to nwbfile object.

Always ensures the nwbfile has at least one device; any additional devices listed in the metadata will also be created.

Parameters:
  • nwbfile (NWBFile) – nwb file to which the recording information is to be added

  • metadata (DeepDict) – metadata info for constructing the nwb file (optional). Should be of the format:

    metadata['Ecephys']['Device'] = [
        {
            'name': my_name,
            'description': my_description
        },
        ...
    ]
    

    Missing keys in an element of metadata[‘Ecephys’][‘Device’] will be auto-populated with defaults.
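A minimal sketch of the device metadata list described above (device names and descriptions are hypothetical placeholders):

```python
# Each dict in the list becomes one Device in the NWBFile; missing keys
# are auto-populated with defaults.
metadata = {
    "Ecephys": {
        "Device": [
            {"name": "Probe", "description": "Hypothetical recording probe."},
            {"name": "Headstage", "description": "Hypothetical headstage amplifier."},
        ]
    }
}
# add_devices_to_nwbfile(nwbfile=nwbfile, metadata=metadata)
```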

add_electrode_groups_to_nwbfile(recording: BaseRecording, nwbfile: NWBFile, metadata: dict | None = None)[source]#

Deprecated. This will become a private method.

add_electrodes_to_nwbfile(recording: BaseRecording, nwbfile: NWBFile, metadata: dict | None = None, exclude: tuple = (), null_values_for_properties: dict | None = None)[source]#

Build an electrode table from the recording information and add it to the nwbfile object.

Parameters:
  • recording (spikeinterface.BaseRecording)

  • nwbfile (NWBFile) – nwb file to which the recording information is to be added

  • metadata (dict) – metadata info for constructing the nwb file (optional). Should be of the format:

    metadata['Ecephys']['Electrodes'] = [
        {
            'name': my_name,
            'description': my_description
        },
        ...
    ]
    

    Note that data intended to be added to the electrodes table of the NWBFile should be set as channel properties in the RecordingExtractor object. Missing keys in an element of metadata[‘Ecephys’][‘ElectrodeGroup’] will be auto-populated with defaults whenever possible. If ‘my_name’ is set to one of the required fields for nwbfile electrodes (id, x, y, z, imp, location, filtering, group_name), then the metadata will override their default values. Setting ‘my_name’ to metadata field ‘group’ is not supported as the linking to nwbfile.electrode_groups is handled automatically; please specify the string ‘group_name’ in this case. If no group information is passed via metadata, automatic linking to existing electrode groups, possibly including the default, will occur.

  • exclude (tuple) – An iterable containing the string names of channel properties in the RecordingExtractor object to ignore when writing to the NWBFile.

  • null_values_for_properties (dict of str to Any) – A dictionary mapping properties to their respective default values. If a property is not found in this dictionary, a sensible default value based on the type of sample_data will be used.
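To illustrate the note above: per-channel data destined for the electrodes table is set as channel properties on the recording, while the metadata dict only declares column names and descriptions. The property name and values below are hypothetical, and the spikeinterface calls are commented out since they need a live recording:

```python
# Per-channel data goes on the recording as a channel property
# (spikeinterface's set_property), not into the metadata dict:
# recording.set_property("brain_area", values=["CA1"] * recording.get_num_channels())

# The metadata only declares the column name and description; note that
# 'group_name' (not 'group') is the supported key for group linkage.
metadata = {
    "Ecephys": {
        "Electrodes": [
            {"name": "brain_area", "description": "Recorded brain region per channel."},
            {"name": "group_name", "description": "Electrode group assignment."},
        ]
    }
}
# add_electrodes_to_nwbfile(recording=recording, nwbfile=nwbfile, metadata=metadata)
```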

add_recording_as_time_series_to_nwbfile(recording: BaseRecording, nwbfile: NWBFile, metadata: dict | None = None, iterator_type: str | None = 'v2', iterator_opts: dict | None = None, always_write_timestamps: bool = False, time_series_name: str | None = None, metadata_key: str = 'TimeSeries')[source]#

Adds traces from recording object as TimeSeries to an NWBFile object.

Parameters:
  • recording (BaseRecording) – A recording extractor from spikeinterface

  • nwbfile (NWBFile) – nwb file to which the recording information is to be added

  • metadata (dict, optional) – metadata info for constructing the nwb file. Should be of the format:

    metadata['TimeSeries'] = {
        'metadata_key': {
            "name": "my_name",
            'description': 'my_description',
            'unit': 'my_unit',
            "offset": offset_to_unit_value,
            "conversion": gain_to_unit_value,
            'comments': 'comments',
            ...
        }
    }
    

    Where the metadata_key is used to look up metadata in the metadata dictionary.

  • metadata_key (str) – The entry in TimeSeries metadata to use.

  • iterator_type ({“v2”, None}, default: ‘v2’) – The type of DataChunkIterator to use. ‘v2’ is the locally developed SpikeInterfaceRecordingDataChunkIterator, which offers full control over chunking. None: write the TimeSeries with no memory chunking.

  • iterator_opts (dict, optional) – Dictionary of options for the iterator. See https://hdmf.readthedocs.io/en/stable/hdmf.data_utils.html#hdmf.data_utils.GenericDataChunkIterator for the full list of options.

  • always_write_timestamps (bool, default: False) – Set to True to always write timestamps. By default (False), the function checks if the timestamps are uniformly sampled, and if so, stores the data using a regular sampling rate instead of explicit timestamps. If set to True, timestamps will be written explicitly, regardless of whether the sampling rate is uniform.
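A sketch of the TimeSeries metadata layout keyed by metadata_key (the key, names, and conversion values here are hypothetical placeholders):

```python
metadata_key = "MyTimeSeries"  # hypothetical lookup key
metadata = {
    "TimeSeries": {
        metadata_key: {
            "name": "AuxSignal",
            "description": "Hypothetical auxiliary analog channel.",
            "unit": "volts",
            "conversion": 1e-6,  # assumed gain from raw values to volts
            "offset": 0.0,
        }
    }
}
# add_recording_as_time_series_to_nwbfile(
#     recording=recording,
#     nwbfile=nwbfile,
#     metadata=metadata,
#     metadata_key=metadata_key,
# )
```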

add_recording_as_spatial_series_to_nwbfile(recording: BaseRecording, nwbfile: NWBFile, metadata: dict | None = None, metadata_key: str = 'SpatialSeries', write_as: Literal['acquisition', 'processing'] = 'acquisition', iterator_type: str = 'v2', iterator_options: dict | None = None, always_write_timestamps: bool = False)[source]#

Adds traces from recording object as SpatialSeries to an NWBFile object.

This function is designed for behavioral tracking data where the recording represents spatial or directional information (e.g., position, head direction, gaze tracking).

Parameters:
  • recording (BaseRecording) – A recording extractor from spikeinterface containing behavioral tracking data.

  • nwbfile (NWBFile) – NWB file to which the spatial series information is to be added.

  • metadata (dict, optional) – Metadata info for constructing the NWB file. Should be of the format:

    metadata['SpatialSeries'] = {
        'metadata_key': {
            'name': 'my_spatial_series',
            'description': 'my_description',
            'reference_frame': 'origin at top-left corner of arena...',
            'unit': 'meters'
        }
    }
    

    Where the metadata_key is used to look up metadata in the metadata dictionary.

  • metadata_key (str, default: ‘SpatialSeries’) – The entry in SpatialSeries metadata to use.

  • write_as ({‘acquisition’, ‘processing’}, default: ‘acquisition’) –

    Where to save the spatial series data:
    • ‘acquisition’: Save in nwbfile.acquisition

    • ‘processing’: Save in a processing module under ‘behavior’

  • iterator_type ({“v2”, None}, default: ‘v2’) – The type of DataChunkIterator to use. ‘v2’ is the locally developed SpikeInterfaceRecordingDataChunkIterator. None: write the SpatialSeries with no memory chunking.

  • iterator_options (dict, optional) – Dictionary of options for the iterator.

  • always_write_timestamps (bool, default: False) – Set to True to always write timestamps explicitly. By default (False), the function checks if timestamps are uniformly sampled, and if so, stores data using a regular sampling rate.
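A sketch of the SpatialSeries metadata layout, using the default metadata_key (names and reference frame are hypothetical placeholders):

```python
metadata = {
    "SpatialSeries": {
        "SpatialSeries": {  # default metadata_key
            "name": "Position",
            "description": "Hypothetical 2D position of the animal.",
            "reference_frame": "Origin at the top-left corner of the arena.",
            "unit": "meters",
        }
    }
}
# add_recording_as_spatial_series_to_nwbfile(
#     recording=tracking_recording,  # recording holding behavioral traces
#     nwbfile=nwbfile,
#     metadata=metadata,
#     write_as="processing",
# )
```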

add_recording_metadata_to_nwbfile(recording: BaseRecording, nwbfile: NWBFile, metadata: dict = None)[source]#

Add device, electrode_groups, and electrodes info to the nwbfile.

Parameters:
  • recording (SpikeInterfaceRecording)

  • nwbfile (NWBFile) – NWB file to which the recording information is to be added

  • metadata (dict, optional) – metadata info for constructing the nwb file. Should be of the format:

    metadata['Ecephys']['Electrodes'] = [
        {
            'name': my_name,
            'description': my_description
        },
        ...
    ]
    

    Note that data intended to be added to the electrodes table of the NWBFile should be set as channel properties in the RecordingExtractor object. Missing keys in an element of metadata['Ecephys']['ElectrodeGroup'] will be auto-populated with defaults whenever possible. If 'my_name' is set to one of the required fields for nwbfile electrodes (id, x, y, z, imp, location, filtering, group_name), then the metadata will override their default values. Setting 'my_name' to metadata field 'group' is not supported as the linking to nwbfile.electrode_groups is handled automatically; please specify the string 'group_name' in this case. If no group information is passed via metadata, automatic linking to existing electrode groups, possibly including the default, will occur.

write_recording_to_nwbfile(recording: BaseRecording, nwbfile_path: Annotated[Path, PathType(path_type=file)] | None = None, nwbfile: NWBFile | None = None, metadata: dict | None = None, overwrite: bool = False, verbose: bool = False, write_as: Literal['raw', 'processed', 'lfp'] = 'raw', es_key: str | None = None, *, iterator_type: str | None = 'v2', iterator_options: dict | None = None, iterator_opts: dict | None = None, backend: Literal['hdf5', 'zarr'] | None = None, backend_configuration: HDF5BackendConfiguration | ZarrBackendConfiguration | None = None, append_on_disk_nwbfile: bool = False) NWBFile | None[source]#

Primary method for writing a RecordingExtractor object to an NWBFile.

Parameters:
  • recording (spikeinterface.BaseRecording)

  • nwbfile_path (FilePath, optional) – Path for where to write or load (if overwrite=False) the NWBFile. If not provided, only adds data to the in-memory nwbfile without writing to disk. Deprecated: Using this function without nwbfile_path is deprecated. Use add_recording_to_nwbfile instead.

  • nwbfile (NWBFile, optional) – If passed, this function will fill the relevant fields within the NWBFile object. E.g., calling:

    write_recording_to_nwbfile(recording=my_recording_extractor, nwbfile=my_nwbfile)
    

    will result in the appropriate changes to the my_nwbfile object.

  • metadata (dict) – Metadata dictionary for constructing the NWB file. Required when nwbfile is not provided. Must include at minimum the required NWBFile fields (session_description, identifier, session_start_time). Should be of the format:

    metadata['Ecephys'] = {
        'Device': [
            {
                'name': my_name,
                'description': my_description
            },
            ...
        ],
        'ElectrodeGroup': [
            {
                'name': my_name,
                'description': my_description,
                'location': electrode_location,
                'device': my_device_name
            },
            ...
        ],
        'Electrodes': [
            {
                'name': my_name,
                'description': my_description
            },
            ...
        ],
        'ElectricalSeries': {
            'name': my_name,
            'description': my_description
        }
    }

    Note that data intended to be added to the electrodes table of the NWBFile should be set as channel properties in the RecordingExtractor object.

  • overwrite (bool, default: False) – Whether to overwrite the NWBFile if one exists at the nwbfile_path.

  • verbose (bool, default: False) – If ‘nwbfile_path’ is specified, informs user after a successful write operation.

  • write_as ({‘raw’, ‘processed’, ‘lfp’}, optional) –

    How to save the traces data in the nwb file:
    • ‘raw’ will save it in acquisition

    • ‘processed’ will save it as FilteredEphys, in a processing module

    • ‘lfp’ will save it as LFP, in a processing module

  • es_key (str, optional) – Key in metadata dictionary containing metadata info for the specific electrical series

  • iterator_type ({“v2”, None}) – The type of DataChunkIterator to use. ‘v2’ is the locally developed SpikeInterfaceRecordingDataChunkIterator, which offers full control over chunking. None: write the TimeSeries with no memory chunking.

  • iterator_options (dict, optional) – Dictionary of options for the RecordingExtractorDataChunkIterator (iterator_type=’v2’). Valid options are:

    • buffer_gb : float, default: 1.0

      In units of GB. Recommended to be as much free RAM as available. Automatically calculates suitable buffer shape.

    • buffer_shape : tuple, optional

      Manual specification of buffer shape to return on each iteration. Must be a multiple of chunk_shape along each axis. Cannot be set if buffer_gb is specified.

    • chunk_mb : float, default: 1.0

      Should be below 1 MB. Automatically calculates suitable chunk shape.

    • chunk_shape : tuple, optional

      Manual specification of the internal chunk shape for the HDF5 dataset. Cannot be set if chunk_mb is also specified.

    • display_progress : bool, default: False

      Display a progress bar with iteration rate and estimated completion time.

    • progress_bar_options : dict, optional

      Dictionary of keyword arguments to be passed directly to tqdm. See tqdm/tqdm for options.

  • backend ({“hdf5”, “zarr”}, optional) – The type of backend to use when writing the file. If a backend_configuration is not specified, the default type will be “hdf5”. If a backend_configuration is specified, then the type will be auto-detected.

  • backend_configuration (HDF5BackendConfiguration or ZarrBackendConfiguration, optional) – The configuration model to use when configuring the datasets for this backend. To customize, call the .get_default_backend_configuration(…) method, modify the returned BackendConfiguration object, and pass that instead. Otherwise, all datasets will use default configuration settings.

  • append_on_disk_nwbfile (bool, default: False) – Whether to append to an existing NWBFile on disk. If True, the nwbfile parameter must be None. This is useful for appending data to an existing file without overwriting it.

Returns:

The NWBFile object when writing a new file or using an in-memory nwbfile. Returns None when appending to an existing file on disk (append_on_disk_nwbfile=True). Deprecated: Returning NWBFile in append mode is deprecated and will return None in or after March 2026.

Return type:

NWBFile or None
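When no in-memory nwbfile is supplied, metadata must carry the required NWBFile fields. A minimal sketch (path, identifier, and descriptions are hypothetical placeholders; the write call is commented out because it needs a live recording):

```python
from datetime import datetime, timezone

# Minimal NWBFile-level metadata required when nwbfile is not provided;
# all values below are hypothetical.
metadata = {
    "NWBFile": {
        "session_description": "Example ephys session.",
        "identifier": "session-0001",
        "session_start_time": datetime(2024, 1, 1, tzinfo=timezone.utc),
    }
}
# write_recording_to_nwbfile(
#     recording=recording,
#     nwbfile_path="session-0001.nwb",
#     metadata=metadata,
#     overwrite=True,
#     backend="hdf5",
# )
```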

write_sorting_to_nwbfile(sorting: BaseSorting, nwbfile_path: Annotated[Path, PathType(path_type=file)] | None = None, nwbfile: NWBFile | None = None, metadata: dict | None = None, overwrite: bool = False, verbose: bool = False, unit_ids: list[str | int] | None = None, property_descriptions: dict | None = None, skip_properties: list[str] | None = None, write_as: Literal['units', 'processing'] = 'units', units_name: str = 'units', units_description: str = 'Autogenerated by neuroconv.', waveform_means: ndarray | None = None, waveform_sds: ndarray | None = None, unit_electrode_indices=None, *, backend: Literal['hdf5', 'zarr'] | None = None, backend_configuration: HDF5BackendConfiguration | ZarrBackendConfiguration | None = None, append_on_disk_nwbfile: bool = False) NWBFile | None[source]#

Primary method for writing a SortingExtractor object to an NWBFile.

Parameters:
  • sorting (spikeinterface.BaseSorting)

  • nwbfile_path (FilePath, optional) – Path for where to write or load (if overwrite=False) the NWBFile. If not provided, only adds data to the in-memory nwbfile without writing to disk. Deprecated: Using this function without nwbfile_path is deprecated. Use add_sorting_to_nwbfile instead.

  • nwbfile (NWBFile, optional) – If passed, this function will fill the relevant fields within the NWBFile object. E.g., calling:

    write_sorting_to_nwbfile(sorting=my_sorting_extractor, nwbfile=my_nwbfile)
    

    will result in the appropriate changes to the my_nwbfile object.

  • metadata (dict) – Metadata dictionary for constructing the NWB file. Required when nwbfile is not provided. Must include at minimum the required NWBFile fields (session_description, identifier, session_start_time).

  • overwrite (bool, default: False) – Whether to overwrite the NWBFile if one exists at the nwbfile_path. The default is False (append mode).

  • verbose (bool, default: False) – If ‘nwbfile_path’ is specified, informs user after a successful write operation.

  • unit_ids (list, optional) – Controls the unit_ids that will be written to the nwb file. If None (default), all units are written.

  • property_descriptions (dict, optional) – For each key in this dictionary which matches the name of a unit property in sorting, adds the value as a description to that custom unit column.

  • skip_properties (list of str, optional) – Each string in this list that matches a unit property will not be written to the NWBFile.

  • write_as ({‘units’, ‘processing’}) –

    How to save the units table in the nwb file:
    • ‘units’ will save it to the official NWBFile.Units position; recommended only for the final form of the data.

    • ‘processing’ will save it to the processing module to serve as a historical provenance for the official table.

  • units_name (str, default: ‘units’) – The name of the units table. If write_as==’units’, then units_name must also be ‘units’.

  • units_description (str, default: ‘Autogenerated by neuroconv.’)

  • waveform_means (np.ndarray, optional) – Waveform mean (template) for each unit. Shape: (num_units, num_samples, num_channels).

  • waveform_sds (np.ndarray, optional) – Waveform standard deviation for each unit. Shape: (num_units, num_samples, num_channels).

  • unit_electrode_indices (list of lists of int, optional) – For each unit, a list of electrode indices corresponding to waveform data.

  • backend ({“hdf5”, “zarr”}, optional) – The type of backend to use when writing the file. If a backend_configuration is not specified, the default type will be “hdf5”. If a backend_configuration is specified, then the type will be auto-detected.

  • backend_configuration (HDF5BackendConfiguration or ZarrBackendConfiguration, optional) – The configuration model to use when configuring the datasets for this backend. To customize, call the .get_default_backend_configuration(…) method, modify the returned BackendConfiguration object, and pass that instead. Otherwise, all datasets will use default configuration settings.

  • append_on_disk_nwbfile (bool, default: False) – Whether to append to an existing NWBFile on disk. If True, the nwbfile parameter must be None. This is useful for appending data to an existing file without overwriting it.

Returns:

The NWBFile object when writing a new file or using an in-memory nwbfile. Returns None when appending to an existing file on disk (append_on_disk_nwbfile=True). Deprecated: Returning NWBFile in append mode is deprecated and will return None in or after March 2026.

Return type:

NWBFile or None
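To illustrate the expected waveform array shapes, a sketch with NumPy (the array contents, electrode indices, and file path here are hypothetical; the write call is commented out since it needs a live sorting extractor and metadata):

```python
import numpy as np

# Per-unit template arrays must be shaped (num_units, num_samples, num_channels).
num_units, num_samples, num_channels = 3, 90, 4
waveform_means = np.zeros((num_units, num_samples, num_channels))
waveform_sds = np.ones((num_units, num_samples, num_channels))

# One list of electrode indices per unit; length must match num_units.
unit_electrode_indices = [[0, 1], [1, 2], [2, 3]]

# write_sorting_to_nwbfile(
#     sorting=sorting,
#     nwbfile_path="sorted-units.nwb",
#     metadata=metadata,
#     waveform_means=waveform_means,
#     waveform_sds=waveform_sds,
#     unit_electrode_indices=unit_electrode_indices,
# )
```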

add_sorting_analyzer_to_nwbfile(sorting_analyzer: SortingAnalyzer, nwbfile: NWBFile | None = None, metadata: dict | None = None, recording: BaseRecording | None = None, unit_ids: list[str] | list[int] | None = None, skip_properties: list[str] | None = None, property_descriptions: dict | None = None, write_as: Literal['units', 'processing'] = 'units', units_name: str = 'units', units_description: str = 'Autogenerated by neuroconv.')[source]#

Convenience function to write directly a sorting analyzer object to an nwbfile.

The function adds the data of the recording and the sorting plus the following information from the sorting analyzer:
  • quality metrics

  • template mean and std

  • template metrics

Parameters:
  • sorting_analyzer (spikeinterface.SortingAnalyzer) – The sorting analyzer object to be written to the NWBFile.

  • nwbfile (NWBFile, optional) – If passed, this function will fill the relevant fields within the NWBFile object. E.g., calling:

    add_sorting_analyzer_to_nwbfile(sorting_analyzer=my_sorting_analyzer, nwbfile=my_nwbfile)
    

    will result in the appropriate changes to the my_nwbfile object. If ‘nwbfile’ is not specified, an NWBFile object will be automatically generated and returned by the function.

  • metadata (dict, optional) – Metadata dictionary with information used to create the NWBFile when one does not exist or overwrite=True. The “Ecephys” section of metadata is also used to create electrodes and electrical series fields.

  • recording (BaseRecording, optional) – If the sorting_analyzer is ‘recordingless’, this argument is required to save electrode info.

  • unit_ids (list, optional) – Controls the unit_ids that will be written to the nwb file. If None (default), all units are written.

  • property_descriptions (dict, optional) – For each key in this dictionary which matches the name of a unit property in sorting, adds the value as a description to that custom unit column.

  • skip_properties (list of str, optional) – Each string in this list that matches a unit property will not be written to the NWBFile.

  • write_as ({‘units’, ‘processing’}) –

    How to save the units table in the nwb file:
    • ‘units’ will save it to the official NWBFile.Units position; recommended only for the final form of the data.

    • ‘processing’ will save it to the processing module to serve as a historical provenance for the official table.

  • units_name (str, optional, default: ‘units’) – The name of the units table. If write_as==’units’, then units_name must also be ‘units’.

  • units_description (str, default: ‘Autogenerated by neuroconv.’)

write_sorting_analyzer_to_nwbfile(sorting_analyzer: SortingAnalyzer, nwbfile_path: Annotated[Path, PathType(path_type=file)] | None = None, nwbfile: NWBFile | None = None, metadata: dict | None = None, overwrite: bool = False, recording: BaseRecording | None = None, verbose: bool = False, unit_ids: list[str] | list[int] | None = None, write_electrical_series: bool = False, add_electrical_series_kwargs: dict | None = None, skip_properties: list[str] | None = None, property_descriptions: dict | None = None, write_as: Literal['units', 'processing'] = 'units', units_name: str = 'units', units_description: str = 'Autogenerated by neuroconv.', *, backend: Literal['hdf5', 'zarr'] | None = None, backend_configuration: HDF5BackendConfiguration | ZarrBackendConfiguration | None = None, append_on_disk_nwbfile: bool = False) NWBFile | None[source]#

Convenience function to write directly a sorting analyzer object to an nwbfile.

The function adds the data of the recording and the sorting plus the following information from the sorting analyzer:
  • quality metrics

  • template mean and std

  • template metrics

Parameters:
  • sorting_analyzer (spikeinterface.SortingAnalyzer) – The sorting analyzer object to be written to the NWBFile.

  • nwbfile_path (FilePath, optional) – Path for where to write or load (if overwrite=False) the NWBFile. If not provided, only adds data to the in-memory nwbfile without writing to disk. Deprecated: Using this function without nwbfile_path is deprecated. Use add_sorting_analyzer_to_nwbfile instead.

  • nwbfile (NWBFile, optional) – If passed, this function will fill the relevant fields within the NWBFile object. E.g., calling:

    write_sorting_analyzer_to_nwbfile(sorting_analyzer=my_sorting_analyzer, nwbfile=my_nwbfile)
    

    will result in the appropriate changes to the my_nwbfile object.

  • metadata (dict) – Metadata dictionary for constructing the NWB file. Required when nwbfile is not provided. Must include at minimum the required NWBFile fields (session_description, identifier, session_start_time). The “Ecephys” section of metadata is also used to create electrodes and electrical series fields.

  • overwrite (bool, default: False) – Whether to overwrite the NWBFile if one exists at the nwbfile_path.

  • recording (BaseRecording, optional) – If the sorting_analyzer is ‘recordingless’, this argument is required to be passed to save electrode info.

  • verbose (bool, default: False) – If ‘nwbfile_path’ is specified, informs user after a successful write operation.

  • unit_ids (list, optional) – Controls the unit_ids that will be written to the nwb file. If None (default), all units are written.

  • write_electrical_series (bool, default: False) – If True, the recording object associated to the analyzer is written as an electrical series.

  • add_electrical_series_kwargs (dict, optional) – Keyword arguments to control the add_electrical_series() function in case write_electrical_series=True

  • property_descriptions (dict, optional) – For each key in this dictionary which matches the name of a unit property in sorting, adds the value as a description to that custom unit column.

  • skip_properties (list of str, optional) – Each string in this list that matches a unit property will not be written to the NWBFile.

  • write_as ({‘units’, ‘processing’}) –

    How to save the units table in the nwb file:
    • ‘units’ will save it to the official NWBFile.Units position; recommended only for the final form of the data.

    • ‘processing’ will save it to the processing module to serve as a historical provenance for the official table.

  • units_name (str, default: ‘units’) – The name of the units table. If write_as==’units’, then units_name must also be ‘units’.

  • units_description (str, default: ‘Autogenerated by neuroconv.’)

  • backend ({“hdf5”, “zarr”}, optional) – The type of backend to use when writing the file. If a backend_configuration is not specified, the default type will be “hdf5”. If a backend_configuration is specified, then the type will be auto-detected.

  • backend_configuration (HDF5BackendConfiguration or ZarrBackendConfiguration, optional) – The configuration model to use when configuring the datasets for this backend. To customize, call the .get_default_backend_configuration(…) method, modify the returned BackendConfiguration object, and pass that instead. Otherwise, all datasets will use default configuration settings.

  • append_on_disk_nwbfile (bool, default: False) – Whether to append to an existing NWBFile on disk. If True, the nwbfile parameter must be None. This is useful for appending data to an existing file without overwriting it.

Returns:

nwbfile – The NWBFile object when writing a new file or using an in-memory nwbfile. Returns None when appending to an existing file on disk (append_on_disk_nwbfile=True). Deprecated: Returning NWBFile in append mode is deprecated and will return None in or after March 2026.

Return type:

pynwb.NWBFile or None
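A sketch of how the electrical-series options thread through this function (the option values, file path, and analyzer variable are hypothetical; the write call is commented out since it needs a live SortingAnalyzer):

```python
# Options forwarded to the electrical-series writer when
# write_electrical_series=True; keys mirror add_recording_to_nwbfile
# arguments (values here are hypothetical).
add_electrical_series_kwargs = {"write_as": "raw", "iterator_type": "v2"}

# write_sorting_analyzer_to_nwbfile(
#     sorting_analyzer=analyzer,  # assumes quality metrics / templates computed
#     nwbfile_path="analyzer-output.nwb",
#     metadata=metadata,
#     write_electrical_series=True,
#     add_electrical_series_kwargs=add_electrical_series_kwargs,
# )
```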

Data chunk iterator#

class SpikeInterfaceRecordingDataChunkIterator(recording: spikeinterface.core.baserecording.BaseRecording, segment_index: int = 0, return_scaled: bool = False, buffer_gb: float | None = None, buffer_shape: tuple | None = None, chunk_mb: float | None = None, chunk_shape: tuple | None = None, display_progress: bool = False, progress_bar_class: tqdm.std.tqdm | None = None, progress_bar_options: dict | None = None)[source]#

Bases: GenericDataChunkIterator

DataChunkIterator specifically for use on RecordingExtractor objects.

Initialize an Iterable object which returns DataChunks with data and their selections on each iteration.

Parameters:
  • recording (SpikeInterfaceRecording) – The SpikeInterfaceRecording object (RecordingExtractor or BaseRecording) which handles the data access.

  • segment_index (int, optional) – The recording segment to iterate on. Defaults to 0.

  • return_scaled (bool, optional) – Whether to return the trace data in scaled units (uV, if True) or in the raw data type (if False). Defaults to False.

  • buffer_gb (float, optional) – The upper bound on size in gigabytes (GB) of each selection from the iteration. The buffer_shape will be set implicitly by this argument. Cannot be set if buffer_shape is also specified. The default is 1GB.

  • buffer_shape (tuple, optional) – Manual specification of buffer shape to return on each iteration. Must be a multiple of chunk_shape along each axis. Cannot be set if buffer_gb is also specified. The default is None.

  • chunk_mb (float, optional) – The upper bound on size in megabytes (MB) of the internal chunk for the HDF5 dataset. The chunk_shape will be set implicitly by this argument. Cannot be set if chunk_shape is also specified. The default is 10MB, as recommended by the HDF5 group. For more details, search the hdf5 documentation for “Improving IO Performance Compressed Datasets”.

  • chunk_shape (tuple, optional) – Manual specification of the internal chunk shape for the HDF5 dataset. Cannot be set if chunk_mb is also specified. The default is None.

  • display_progress (bool, optional) – Display a progress bar with iteration rate and estimated completion time.

  • progress_bar_class (tqdm.tqdm subclass, optional) – The progress bar class to use. Defaults to tqdm.tqdm if the TQDM package is installed.

  • progress_bar_options (dict, optional) – Dictionary of keyword arguments to be passed directly to tqdm. See tqdm/tqdm for options.
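A sketch of a typical option set for the iterator (values are hypothetical; construction is commented out because it requires a live spikeinterface recording). Leaving chunk_shape and buffer_shape unset lets chunk_mb and buffer_gb compute suitable shapes automatically:

```python
# Hypothetical iterator options; shapes are derived from the size bounds.
iterator_options = {
    "buffer_gb": 1.0,        # upper bound per iteration selection
    "chunk_mb": 10.0,        # upper bound per HDF5 chunk
    "display_progress": False,
}
# iterator = SpikeInterfaceRecordingDataChunkIterator(
#     recording=recording, segment_index=0, **iterator_options
# )
```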