hyperspy.io_plugins.bruker module

class hyperspy.io_plugins.bruker.BCF_reader(filename, instrument=None)

Bases: hyperspy.io_plugins.bruker.SFS_reader

Class to read bcf (Bruker hypermapping) file.

Inherits SFS_reader and all its attributes and methods.

Attributes: filename

Methods: check_index_valid, parse_hypermap

The class instantiates the HyperHeader class as the self.header attribute, where all metadata, sum EDS spectra and (SEM) images are stored.

add_filename_to_general(item)

hyperspy helper method

check_index_valid(index)

check that the index is valid and return it

parse_hypermap(index=None, downsample=1, cutoff_at_kV=None, lazy=False)

Unpack the Delphi/Bruker binary spectral map and return a numpy array in a memory-efficient way.

The pure python/numpy implementation (slow) is used, or the cython/memoryview/numpy implementation (fast) if it is compiled and present.

Parameters
  • index (None or int) – The index of the hypermap in the bcf file if it contains more than one hypermap.

  • downsample (int) – Downsampling factor. Unlike block_reduce from skimage.measure, the parser populates the reduced array by summing pixel results, and thus has lower memory requirements. Default is 1.

  • cutoff_at_kV (None or float) – Value in keV to truncate the array at. Helps reduce the size of the array. Default is None.

  • lazy (bool) – If True, returns a dask.array, otherwise a numpy.array. Default is False.

Returns

result – Bruker hypermap, with (y, x, E) shape.

Return type

numpy.ndarray or dask.array.Array
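
A hedged usage sketch of the calls above (the file name is hypothetical; the parameters are those documented for this method):

>>> reader = BCF_reader('hypermap.bcf')          # hypothetical file
>>> hmap = reader.parse_hypermap(index=0, downsample=2, cutoff_at_kV=10)
>>> hmap.shape                                   # (y, x, E)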

class hyperspy.io_plugins.bruker.HyperHeader(xml_str, indexes, instrument=None)

Bases: object

Wrap the Bruker HyperMapping xml header into a python object.

Arguments: xml_str – the uncompressed Header xml string extracted from the bcf. indexes – list of indexes of available datasets

Methods: estimate_map_channels, estimate_map_depth

If the bcf is version 2, it can contain stacks of hypermaps; thus the header part can contain multiple sum EDS spectra and their metadata per hypermap slice, which can be selected using the index. The bcf can record a number of images from different imaging detectors (BSE, SEI, ARGUS, etc.): access to the imagery is through the image index.

_parse_image(xml_node, overview=False)

parse image from bruker xml image node.

_set_elements(root)

wrap objectified xml part with selection of elements to self.elements list

_set_images(root)

Wrap objectified xml part with image to class attributes for self.image.

_set_microscope(root)

set microscope metadata from objectified xml part (TRTSEMData, TRTSEMStageData, TRTDSPConfiguration).

BCF can contain basic parameters of the SEM column and, optionally, the stage. This metadata may be only partially available, or not available at all, to Esprit and saved into the bcf file, as it depends on the license and on the state of the link and implementation between the microscope's software and the Bruker system.

calc_real_time()

calculate and return real time for whole hypermap in seconds

estimate_map_channels(index=0)

Estimate the minimal size of the energy axis so that no spectrum from any pixel would be truncated.

Parameters

index (int) – Index of the map if multiple hypermaps are present in the same bcf.

Returns

Return type

optimal channel number

estimate_map_depth(index=0, downsample=1, for_numpy=False)

Estimate the minimal dtype of the array using the cumulative spectrum of all pixels so that no data would be truncated.

The method estimates the value from the sum EDS spectrum, dividing the maximum energy pulse value by the raster x and y dimensions and, to be on the safe side, multiplying by 2.

Parameters
  • index (int) – Index of the hypermap if multiple hypermaps are present in the same bcf. (default 0)

  • downsample (int) – Downsample factor. (default 1)

  • for_numpy (bool) – If False, produce unsigned types; otherwise produce signed types. If the hypermap will be loaded using the pure python function, where numpy's in-place integer addition is used, the dtype should be signed; if the cython implementation will be used (the default), the returned dtype can safely be unsigned. (default False)

Returns

depth – numpy dtype large enough to use in the final hypermap numpy array.

Return type

numpy.dtype
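
As a hedged illustration of how the two estimates might be combined to preallocate the final array (reader, height and width are assumed names used only for this sketch, not part of this API):

>>> import numpy as np
>>> header = reader.header                        # HyperHeader built by BCF_reader
>>> channels = header.estimate_map_channels(index=0)
>>> depth = header.estimate_map_depth(index=0, downsample=2, for_numpy=True)
>>> hmap = np.zeros((height // 2, width // 2, channels), dtype=depth)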

get_acq_instrument_dict(detector=False, **kwargs)

return a python dictionary with the acquisition instrument's mandatory data

get_spectra_metadata(index=0)

return objectified xml with spectra metadata. Arguments: index – index of hypermap/spectra (default 0)

class hyperspy.io_plugins.bruker.SFSTreeItem(item_raw_string, parent)

Bases: object

Class to manage one internal sfs file.

Supports reading, reading in chunks, reading and extracting, and reading without extracting even if compression is present.

Attributes: item_raw_string – the bytes from the sfs file table describing the file. parent – the item higher hierarchically in the sfs file tree.

Methods: read_piece, setup_compression_metadata, get_iter_and_properties, get_as_BytesIO_string

_filetime_to_unix(time)

Return the windows filetime recalculated to unix time.
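
The conversion itself is standard: FILETIME counts 100-nanosecond intervals since 1601-01-01 (UTC), which is 11644473600 seconds before the unix epoch. A minimal sketch (not necessarily this method's exact code):

>>> def filetime_to_unix(filetime):
...     # 100-ns ticks -> seconds, then shift the 1601 epoch to 1970
...     return filetime / 10_000_000 - 11644473600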

_fill_pointer_table()

Parse the sfs and populate self.pointers table.

self.pointers is the sfs pointer table containing addresses of every chunk of the file.

If the file is big, the pointer table can extend through many sfs chunks. Unlike files, the pointer table itself has no table of pointers to its chunks. Instead, if the pointer table is larger than an sfs chunk, the chunk header contains the next chunk number (the address can be calculated using the known chunk size and global offset) with the continuation of the file pointer table, so it has to be read and filled consecutively.

_iter_read_chunks(first=0)

Generate and return an iterator for reading and returning the sfs internal file in chunks.

By default it creates an iterator for the whole file; however, the range of chunks for the iterator can be set with the keyword arguments 'first' and 'chunks'.

Keyword arguments: first – the index of first chunk from which to read. (default 0) chunks – the number of chunks to read. (default False)

_iter_read_compr_chunks()

Generate and return a reader and decompressor iterator for an sfs internal file compressed with zlib compression.

Returns: iterator of decompressed data chunks.

get_as_BytesIO_string()

Get the whole file as io.BytesIO object (in memory!).

get_iter_and_properties()

Generate and return the iterator of data chunks and the properties of those chunks, such as size and count.

The method detects whether the data is compressed and uses an iterator with decompression if so; otherwise it uses a simple iterator of chunks.

Returns

(iterator, chunk_size, number_of_chunks)
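
A hedged usage sketch (item is assumed to be an SFSTreeItem, e.g. obtained with SFS_reader.get_file):

>>> iterator, chunk_size, n_chunks = item.get_iter_and_properties()
>>> data = b''.join(iterator)   # concatenating the chunks should reassemble the internal file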

read_piece(offset, length)

Read and return the raw byte string of the file without applying any decompression.

Arguments: offset – seek value. length – length of the data counting from the offset.

Returns: io.BytesIO object

setup_compression_metadata()

parse and setup the number of compression chunks and the uncompressed chunk size as class attributes.

Sets up attributes: self.uncompressed_blk_size, self.no_of_compr_blk

class hyperspy.io_plugins.bruker.SFS_reader(filename)

Bases: object

Class to read an sfs file. SFS is AidAim Software's(tm) single file system. The class provides basic reading capabilities for such containers. It can read zlib-compressed data, but SFS containers can use other compression methods that are not implemented here. It is also not able to read encrypted sfs containers.

This class can be used standalone or inherited when constructing readers for file formats based on sfs technology.

filename

_check_the_compresion(temp_item_list)

parse, check and setup the self.compression

_flat_items_to_dict(paths, temp_item_list)

place items from flat list into dictionary tree of virtual file system

_setup_vfs()

Setup the virtual file system tree represented as python dictionary with values populated with SFSTreeItem instances

See also

SFSTreeItem

get_file(path)

Return the SFSTreeItem (aka internal file) object from sfs container.

Parameters

path (str) – Internal file path in the sfs file tree. The path accepts only the standard forward slash for directories.

Returns

object – SFSTreeItem, which can be read into a byte stream, in chunks or whole, using the object's methods.

Return type

SFSTreeItem

Example

to get the “file” object 'kitten.png' in the folder 'catz', which resides in the root directory of the sfs, you would use:

>>> instance_of_SFSReader.get_file('catz/kitten.png')
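
Continuing in the same hedged spirit, the returned item could then be read in full, for example:

>>> item = instance_of_SFSReader.get_file('catz/kitten.png')
>>> raw = item.get_as_BytesIO_string().getvalue()   # whole internal file in memory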

See also

SFSTreeItem

hyperspy.io_plugins.bruker.bcf_hyperspectra(obj_bcf, index=None, downsample=None, cutoff_at_kV=None, lazy=False)

Return the list of dictionaries required by hyperspy, with EDS hyperspectra and metadata.

hyperspy.io_plugins.bruker.bcf_images(obj_bcf)

return the list of dictionaries required by hyperspy, with SEM images and metadata.

hyperspy.io_plugins.bruker.bcf_reader(filename, select_type=None, index=None, downsample=1, cutoff_at_kV=None, instrument=None, lazy=False)

Reads a Bruker bcf file and loads the data into the appropriate class, then wraps it into the list of dictionaries required by hyperspy, as used by the hyperspy.api.load() method. A usage sketch follows the parameter list below.

Parameters
  • select_type (str or None) – One of: spectrum_image, image. If none is specified, the function loads everything; otherwise it loads either just the SEM imagery or just the hyperspectral mapping data (default None).

  • index (int, None or str) – Index of the dataset in bcf v2; can be None, an integer or 'all' (default None). None will select the first available mapping if there is more than one; 'all' will return all maps if more than one is present; an integer will return only the selected map.

  • downsample (int) – The downsample ratio of the hyperspectral array (downsampling height and width only). Can be an integer from 1 to inf, where 1 means no downsampling is applied. (default 1)

  • cutoff_at_kV (int, float or None) – If set (can be an int or float >= 0), it is used to crop or enlarge the energy range at its maximum value. (default None)

  • instrument (str or None) – Can be either ‘TEM’ or ‘SEM’. Default is None.
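
A hedged usage sketch with the parameters documented above (the file name is hypothetical); in normal use these keyword arguments are passed through hyperspy.api.load() rather than calling this function directly:

>>> dictionaries = bcf_reader('map.bcf', select_type='spectrum_image',
...                           downsample=2, cutoff_at_kV=10,
...                           instrument='SEM', lazy=False)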

hyperspy.io_plugins.bruker.guess_mode(hv)

There is no way to determine from the metadata what kind of instrument was used: TEM or SEM. However, a simple guess can be made using the acceleration voltage, assuming that SEM is <= 30 kV and TEM is > 30 kV.
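
The rule described above amounts to a one-line threshold; a minimal sketch of that logic (the helper name is hypothetical):

>>> def guess_instrument(hv):
...     # assume SEM for acceleration voltage <= 30 kV, TEM otherwise
...     return 'SEM' if hv <= 30.0 else 'TEM'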

hyperspy.io_plugins.bruker.interpret(string)

interpret any string and return it cast to the appropriate python object type
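
A minimal sketch of the kind of casting this implies (a hypothetical illustration, not the module's exact logic):

>>> def interpret_sketch(string):
...     for cast in (int, float):
...         try:
...             return cast(string)
...         except ValueError:
...             pass
...     return string     # left as str if no numeric cast applies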

hyperspy.io_plugins.bruker.parse_line(line_string)

standardize the line description.

Bruker saves the line description in all caps and omits the type if only one exists, instead of using 'alpha'.
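
Based on the description above, a hedged sketch of such standardization (a hypothetical illustration, not the function's exact code):

>>> def parse_line_sketch(line_string):
...     # 'KA' -> 'Ka'; a bare 'K' (type omitted by Bruker) -> 'Ka'
...     line = line_string.capitalize()
...     return line + 'a' if len(line) == 1 else line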

hyperspy.io_plugins.bruker.py_parse_hypermap(virtual_file, shape, dtype, downsample=1)

Unpack the Delphi/Bruker binary spectral map and return a numpy array in a memory-efficient way using the pure python implementation. (Slow!)

The function is long and complicated due to the complexity of the Delphi packed array. The whole parsing is placed in one function to reduce the overhead of python function calls. For cleaner parsing logic, see the fast cython implementation at hyperspy/io_plugins/unbcf_fast.pyx.

The method is only meant to be used if, for some reason, the C (cython-generated) version of the parser is not compiled.

Parameters
  • virtual_file – virtual file handle returned by an SFS_reader instance or by an object inheriting it (e.g. a BCF_reader instance)

  • shape – numpy shape

  • dtype – numpy dtype

  • downsample – downsample factor

Note: downsample, shape and dtype are interconnected and need to be properly calculated, otherwise wrong output or a segfault is expected.

Returns

Return type

numpy array of bruker hypermap, with (y, x, E) shape.
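
As noted above, downsample, shape and dtype must be mutually consistent; a hedged sketch of assembling such a call (the internal file path, the height/width variables and the exact shape arithmetic are assumptions for illustration only):

>>> bcf = BCF_reader('map.bcf')                        # hypothetical file
>>> down = 2
>>> channels = bcf.header.estimate_map_channels(index=0)
>>> dtype = bcf.header.estimate_map_depth(index=0, downsample=down, for_numpy=True)
>>> vfile = bcf.get_file('EDSDatabase/SpectrumData0')  # assumed internal path
>>> shape = (height // down, width // down, channels)  # height/width taken from the header
>>> hmap = py_parse_hypermap(vfile, shape, dtype, downsample=down)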