hyperspy.io_plugins.bruker module

class hyperspy.io_plugins.bruker.BCF_reader(filename, instrument=None)

Bases: SFS_reader

Class to read a bcf (Bruker hypermap) file.

Inherits SFS_reader and all its attributes and methods.

Attributes: filename

Methods: check_index_valid, parse_hypermap

The class instantiates the HyperHeader class as the self.header attribute, where all metadata, sum EDS spectra and (SEM) images are stored.


HyperSpy helper method.


Check and return whether the index is valid.

parse_hypermap(index=None, downsample=1, cutoff_at_kV=None, lazy=False)

Unpack the Delphi/Bruker binary spectral map and return a numpy array in a memory-efficient way.

The pure Python/NumPy implementation (slow) is used, or the Cython/memoryview/NumPy implementation (fast) if it is compiled and present.

  • index (None or int) – The index of the hypermap in the bcf file, if it contains more than one hypermap.

  • downsample (int) – Downsampling factor. Unlike block_reduce from skimage.measure, the parser populates the reduced array by summing pixel results while unpacking, thus having lower memory requirements. Default is 1.

  • cutoff_at_kV (None, float, int or str) –

    Value or method used to truncate the array at a given energy in kV. Helps to reduce the size of the returned array. Default is None (no truncation). A numerical value should be given in kV. Two methods for automatic cutoff are available:

    "zealous" – truncates at the last non-zero channel (should not be used for stacks). "auto" – truncates at the hv of the electron microscope (good for stacks if hv is consistent).

  • lazy (bool) – If True, returns a dask.array; otherwise a numpy.array. Default is False.


result – Bruker hypermap, with (y, x, E) shape.

Return type:

numpy.ndarray or dask.array.Array
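The sum-based downsampling can be illustrated with plain NumPy (an illustrative sketch, not hyperspy code; the real parser sums while unpacking the packed data, so the full-size array is never allocated — the difference from skimage's block_reduce is memory use, not arithmetic):

```python
import numpy as np

# Toy (y, x, E) hypermap; summing 2x2 pixel blocks preserves total counts.
hypermap = np.arange(4 * 4 * 3).reshape(4, 4, 3)
d = 2
reduced = hypermap.reshape(4 // d, d, 4 // d, d, 3).sum(axis=(1, 3))
print(reduced.shape)                     # (2, 2, 3)
print(reduced.sum() == hypermap.sum())   # True: counts are conserved
```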

class hyperspy.io_plugins.bruker.HyperHeader(xml_str, indexes, instrument=None)

Bases: object

Wrap the Bruker HyperMap XML header into a Python object.

Arguments: xml_str – the uncompressed Header XML string extracted from the bcf file. indexes – list of indexes of available datasets.

Methods: estimate_map_channels, estimate_map_depth

If the bcf is version 2, it can contain stacks of hypermaps; the header part can thus contain multiple sum EDS spectra and their metadata per hypermap slice, which can be selected using the index. A bcf can record a number of images from different one-dimensional-value detectors (BSE, SEI, ARGUS, etc.). Images representing signals are internally ordered, and the right signal image can be accessed using the image index (not to be confused with the dataset index).

_parse_image(xml_node, overview=False)

Parse an image from the Bruker XML image node.


Wrap the objectified XML part with the selection of elements into the self.elements list.


Wrap the objectified XML part with the image into class attributes for self.image.


Set microscope metadata from the objectified XML part (TRTSEMData, TRTSEMStageData, TRTDSPConfiguration).

A bcf can contain basic parameters of the SEM column and, optionally, the stage. This metadata can be partially or entirely unavailable to Esprit and absent from the bcf file, as it depends on the license and on the state of the link between the microscope's software and the Bruker system.


Calculate and return the real time for the whole hypermap in seconds.

estimate_map_depth(index=0, downsample=1, for_numpy=False)

Estimate the minimal dtype of the array using the cumulative spectrum of all pixels so that no data would be truncated.

The method estimates the value from the sum EDS spectrum, dividing the maximum pulse value by the raster x and y dimensions and, to be on the safe side, multiplying by 2.

  • index (int) – Index of the hypermap if multiple hypermaps are present in the same bcf file. (default 0)

  • downsample (int) – Downsample factor. (default 1)

  • for_numpy (bool) – If False, produce unsigned types; otherwise signed types. If the hypermap will be loaded using the pure Python function, where numpy's in-place integer addition is used, the dtype should be signed; if the Cython implementation is used (the default when compiled), the returned dtypes can safely be unsigned. (default False)


depth – numpy dtype large enough to use in the final hypermap numpy array.

Return type:


get_acq_instrument_dict(detector=False, **kwargs)

Return a Python dictionary with the mandatory acquisition instrument data.


Estimate a consistent minimal size of the energy axis by comparing the energy at the last recorded channel to the electron beam potential, and return the channel number corresponding to the lesser energy. This method is safe to use with sliced datasets (consistent between slices) which were acquired using the same electron potential.


index (int) – Index of the map if multiple hypermaps are present in the same bcf.

Return type:

optimal channel number
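The channel-capping logic might be sketched like this (a hypothetical helper; the calibration parameters offset_kV/scale_kV and the rounding are assumptions, not hyperspy's actual code):

```python
# Cap the channel count at the beam potential so slices acquired at the
# same hv stay consistent.
def estimate_map_channels(n_channels, offset_kV, scale_kV, hv_kV):
    # energy of the last recorded channel
    last_energy = offset_kV + scale_kV * (n_channels - 1)
    if hv_kV >= last_energy:
        return n_channels        # beam potential exceeds the axis: keep all
    # otherwise cap at the channel corresponding to the beam potential
    return int(round((hv_kV - offset_kV) / scale_kV)) + 1

# 4096 channels at 10 eV/channel, 20 kV beam: cap at 2001 channels
print(estimate_map_channels(4096, 0.0, 0.01, 20.0))   # 2001
```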


Return the objectified XML with spectrum metadata.

Arguments: index – index of the hypermap/spectrum (default 0)

class hyperspy.io_plugins.bruker.SFSTreeItem(item_raw_string, parent)

Bases: object

Class to manage one internal sfs file.

It supports reading, reading in chunks, reading and extracting, and reading without extracting, even if compression is present.

Attributes: item_raw_string – the bytes from the sfs file table describing the file; parent – the item higher hierarchically in the sfs file tree

Methods: read_piece, setup_compression_metadata, get_iter_and_properties, get_as_BytesIO_string


Recalculate a Windows filetime to Unix time and return it.
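The underlying conversion is fixed arithmetic: Windows filetime counts 100 ns ticks since 1601-01-01, while Unix time counts seconds since 1970-01-01. A minimal sketch (illustrative, not the actual method):

```python
# Seconds between the Windows (1601) and Unix (1970) epochs.
EPOCH_DELTA_S = 11_644_473_600

def filetime_to_unix(filetime):
    # 10_000_000 filetime ticks per second
    return filetime / 10_000_000 - EPOCH_DELTA_S

print(filetime_to_unix(116_444_736_000_000_000))   # 0.0, the Unix epoch
```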


Parse the sfs and populate the self.pointers table.

self.pointers is the sfs pointer table containing the addresses of every chunk of the file.

If the file is big, the pointer table can extend through many sfs chunks. Unlike files, the pointer table of a file has no table of pointers to its chunks. Instead, if the pointer table is larger than an sfs chunk, the chunk header contains the next chunk number (the address can be calculated using the known chunk size and global offset) with the continuation of the pointer table; thus it has to be read and filled consecutively.


Generate and return an iterator for reading the sfs internal file in chunks.

By default it creates an iterator for the whole file; however, with the kwargs 'first' and 'chunks' the range of chunks for the iterator can be set.

Keyword arguments: first – the index of the first chunk from which to read. (default 0) chunks – the number of chunks to read. (default False)


Generate and return a reader and decompressor iterator for a zlib-compressed sfs internal file.

Returns: iterator of decompressed data chunks.


Get the whole file as an io.BytesIO object (in memory!).


Generate and return the iterator of data chunks and the properties of those chunks, such as size and count.

The method detects whether the data is compressed and uses an iterator with decompression involved; otherwise it uses a simple iterator of chunks.


(iterator, chunk_size, number_of_chunks)

read_piece(offset, length)

Read and return the raw byte string of the file without applying any decompression.

Arguments: offset – seek value; length – length of the data counting from the offset

Returns: io.BytesIO object


Parse and set up the number of compression chunks and the uncompressed chunk size as class attributes.

Sets up attributes: self.uncompressed_blk_size, self.no_of_compr_blk

class hyperspy.io_plugins.bruker.SFS_reader(filename)

Bases: object

Class to read an sfs file. SFS is AidAim Software's(tm) Single File System. The class provides basic reading capabilities for such containers. It is capable of reading zlib-compressed data, but SFS can contain other compression methods which are not implemented here. It is also not able to read encrypted sfs containers.

This class can be used stand-alone or inherited in the construction of file readers using sfs technology.


Parse, check and set up self.compression.

_flat_items_to_dict(paths, temp_item_list)

Place items from the flat list into the dictionary tree of the virtual file system.
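The transformation can be sketched with a stand-alone helper (illustrative only; the actual method places SFSTreeItem instances rather than plain values):

```python
# Nest flat 'dir/file' paths into a dict tree mirroring the sfs
# virtual file system.
def flat_items_to_dict(flat_items):
    tree = {}
    for path, item in flat_items:
        node = tree
        parts = path.split('/')
        for folder in parts[:-1]:            # descend, creating folders
            node = node.setdefault(folder, {})
        node[parts[-1]] = item               # leaf holds the item itself
    return tree

tree = flat_items_to_dict([('catz/kitten.png', 'item0'),
                           ('readme.txt', 'item1')])
print(tree)   # {'catz': {'kitten.png': 'item0'}, 'readme.txt': 'item1'}
```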


Set up the virtual file system tree, represented as a Python dictionary with values populated with SFSTreeItem instances.



Return the SFSTreeItem (aka internal file) object from the sfs container.


path (str) – Internal file path in the sfs file tree. Paths accept only the standard forward slash for directories.


object – SFSTreeItem, which can be read into a byte stream, in chunks or whole, using the object's methods.

Return type:

SFSTreeItem



To get the "file" object 'kitten.png' in the folder 'catz', which resides in the root directory of the sfs, you would use:

>>> instance_of_SFSReader.get_file('catz/kitten.png')


hyperspy.io_plugins.bruker.bcf_hyperspectra(obj_bcf, index=None, downsample=None, cutoff_at_kV=None, lazy=False)

Return the HyperSpy-required list of dictionaries with EDS hyperspectra and metadata.


Return the HyperSpy-required list of dictionaries with SEM images and metadata.

hyperspy.io_plugins.bruker.bcf_reader(filename, select_type=None, index=None, downsample=1, cutoff_at_kV=None, instrument=None, lazy=False)

Reads a Bruker bcf file and loads the data into the appropriate class, then wraps it into the appropriate HyperSpy-required list of dictionaries used by the hyperspy.api.load() method.

  • select_type (str or None) – One of: spectrum_image, image. If None, the function loads everything; otherwise it loads either just the SEM imagery or just the hyperspectral mapping data. (default None)

  • index (int, None or str) – Index of the dataset in bcf v2; can be None, an integer or 'all' (default None). None selects the first available mapping if there is more than one; 'all' returns all maps if more than one is present; an integer returns only the selected map.

  • downsample (int) – The downsample ratio of the hyperspectral array (downsampling height and width only); can be an integer from 1 to inf, where 1 means no downsampling is applied. (default 1)

  • cutoff_at_kV (int, float or None) – If set (can be an int or float >= 0), can be used either to crop or to enlarge the energy range at its maximum value. (default None)

  • instrument (str or None) – Can be either ‘TEM’ or ‘SEM’. Default is None.


There is no way to determine from the metadata what kind of instrument was used: TEM or SEM. However, a simple guess can be made using the acceleration voltage, assuming that SEM is <= 30 kV and TEM is > 30 kV.
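As a minimal sketch of that guessing rule (the function name is an assumption; the reader applies this heuristic internally):

```python
# Heuristic: acceleration voltages up to 30 kV suggest an SEM,
# anything above suggests a TEM.
def guess_instrument(hv_kV):
    return 'SEM' if hv_kV <= 30 else 'TEM'

print(guess_instrument(20), guess_instrument(200))   # SEM TEM
```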


Interpret a string and return it cast to the appropriate Python dtype object.


Standardize the line description.

Bruker saves the line description in all caps and, if only one line type exists, omits the type instead of using alpha.

hyperspy.io_plugins.bruker.py_parse_hypermap(virtual_file, shape, dtype, downsample=1)

Unpack the Delphi/Bruker binary spectral map and return a numpy array in a memory-efficient way using a pure Python implementation. (Slow!)

The function is long and complicated due to the complexity of the Delphi-packed array. The whole parsing is placed in one function to reduce the overhead of Python function calls. For cleaner parsing logic, please see the fast Cython implementation at hyperspy/io_plugins/unbcf_fast.pyx.

The function is only meant to be used if, for some reason, the C (Cython-generated) version of the parser is not compiled.

  • virtual_file – virtual file handle returned by SFS_reader or by an object inheriting it (e.g. a BCF_reader instance)

  • shape – numpy shape

  • dtype – numpy dtype

  • downsample – downsample factor

Note: downsample, shape and dtype are interconnected and need to be properly calculated; otherwise wrong output or a segfault is expected.

Return type:

numpy array of bruker hypermap, with (y, x, E) shape.
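The interdependence mentioned in the note can be sketched: for a downsample factor d, the output array must be allocated with ceil-divided spatial dimensions (an illustrative helper, not hyperspy code):

```python
from math import ceil

# With downsample factor d, the target array must be allocated with
# ceil-divided height and width, otherwise the parser writes out of
# bounds or truncates edge pixels.
def downsampled_shape(height, width, channels, d):
    return (ceil(height / d), ceil(width / d), channels)

print(downsampled_shape(512, 384, 2048, 4))   # (128, 96, 2048)
```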