hyperspy.samfire_utils.samfire_pool module

class hyperspy.samfire_utils.samfire_pool.SamfirePool(**kwargs)

Bases: hyperspy.utils.parallel_pool.ParallelPool

Creates and manages a pool of SAMFire workers. Based on ParallelPool, it either creates processes using multiprocessing, or connects to and sets up an ipyparallel load_balanced_view.

Ipyparallel is managed directly, but the multiprocessing pool is managed via three types of Queues:

  • One shared by all (master and workers), for distributing “load-balanced” work.

  • One shared by all (master and workers), for sending results back to the master.

  • Individual queues from the master to each worker, for setting up and addressing individual workers in general. These are checked with higher priority by the workers.
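A minimal lifecycle sketch, assuming a SAMFire object samf already exists (how it is created is outside the scope of this page); only methods documented below are used.

    from hyperspy.samfire_utils.samfire_pool import SamfirePool

    pool = SamfirePool()        # keyword arguments are forwarded to ParallelPool
    pool.prepare_workers(samf)  # hand the SAMFire object to the workers
    pool.update_parameters()    # push boundaries, optional components, tests
    pool.ping_workers()         # optional: record trip times and worker pids
    pool.run()                  # add jobs, collect results, update SAMFire
    pool.stop()                 # shut the pool down when finished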

prepare_workers()

given a SAMFire object, populates the workers with the required information. In the case of multiprocessing, starts the workers listening to the queues.

update_parameters()

updates various worker parameters

ping_workers()

pings all workers. Stores the one-way trip time and the process_id (pid) of each worker if available

add_jobs()

adds the requested number of jobs to the queue

parse()

parses the messages returned from the workers.

collect_results()

collects all currently available results and parses them

run()

runs the full procedure until no more pixels are left to run in the SAMFire

stop()

stops the pool and (for ipyparallel) clears the memory

setup()

sets up the ipyparallel or multiprocessing pool (connects to the client or creates the pool)

sleep()

sleeps for the specified time, by default timestep

has_pool

Whether the pool is available and active

Type

Bool

pool

The pool object

Type

{ipyparallel.load_balanced_view, multiprocessing.Pool}

ipython_kwargs

The dictionary with Ipyparallel connection arguments.

Type

dict

timeout

Timeout for either pool when waiting for results

Type

float

num_workers

The number of workers actually created (may be less than requested, but can’t be more)

Type

int

timestep

The timestep between “ticks” that the result queues are checked. Higher timestep means less frequent checking, which may reduce CPU load for difficult fits that take a long time to finish.

Type

float
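For illustration, both timestep and the timeout documented above can be adjusted on an existing pool; the values below are arbitrary examples.

    pool.timestep = 0.5   # check the result queues every 0.5 s
    pool.timeout = 30.0   # wait at most 30 s when collecting results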

ping

If recorded, stores one-way trip time of each worker

Type

dict

pid

If available, stores the process-id of each worker

Type

dict

Creates a ParallelPool with additional methods for SAMFire. All arguments are passed to ParallelPool.
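A construction sketch. Passing num_workers mirrors the documented num_workers attribute, but whether ParallelPool accepts it as a keyword argument is an assumption here.

    from hyperspy.samfire_utils.samfire_pool import SamfirePool

    pool = SamfirePool(num_workers=4)  # assumed ParallelPool keyword argument
    pool.setup()                       # connects to the client or creates the pool
    print(pool.has_pool)               # True once the pool is available and active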

add_jobs(needed_number=None)

Adds jobs to the job queue that is consumed by the workers.

Parameters

needed_number ({None, int}) – The number of jobs to add. If None (default), adds need_pixels
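A usage sketch, assuming the pool has already been prepared with prepare_workers():

    pool.add_jobs()                  # queue need_pixels jobs (the default)
    pool.add_jobs(needed_number=2)   # or queue an explicit number of jobs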

collect_results(timeout=None)

Collects and parses all results that are currently waiting in the queue and have not yet been processed.

Parameters

timeout ({None, float}) – The time to wait when collecting results. If None, the default timeout is used
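A usage sketch; the timeout value is an arbitrary example.

    pool.collect_results(timeout=5.0)  # drain and parse whatever is in the queue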

property need_pixels

Returns the number of pixels that should be added to the processing queue. It is at most equal to the number of workers.

parse(value)

Parses the value, returned from the workers.

Parameters

value (tuple of the form (keyword, the_rest)) –

keyword can currently be one of [‘pong’, ‘Error’, ‘result’]. For each keyword, “the_rest” is a tuple of different elements, but its first element is always the worker_id that the message came from. In particular:

  • (‘pong’, (worker_id, pid, pong_time, optional_message_str))

  • (‘Error’, (worker_id, error_message_string))

  • (‘result’, (worker_id, pixel_index, result_dict, bool_if_result_converged))
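To make the layouts concrete, the sketch below builds messages matching the documented formats; the field values are invented placeholders, and the real contents of result_dict depend on the model being fitted.

    result_dict = {}  # placeholder; real contents depend on the fitted model
    pong_msg = ('pong', (0, 12345, 0.002, 'worker 0 is alive'))
    error_msg = ('Error', (1, 'fit failed for pixel (5, 7)'))
    result_msg = ('result', (2, (5, 7), result_dict, True))

    pool.parse(pong_msg)  # records the trip time and pid for worker 0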

ping_workers(timeout=None)

Pings the workers and records one-way trip time and (if available) pid of the worker.

Parameters

timeout ({None, float}) – The time to wait when collecting results after sending out the ping. If None, the default timeout is used
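A usage sketch; the timeout value is an arbitrary example.

    pool.ping_workers(timeout=2.0)
    print(pool.ping)  # one-way trip time per worker
    print(pool.pid)   # process id per worker, if available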

prepare_workers(samfire)

Prepares the workers for work; in the case of multiprocessing, starts the workers listening to their queues.

Parameters

samfire (SAMFire) – The SAMFire object that will be using the pool
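A sketch, assuming a SAMFire object samf already exists:

    pool.prepare_workers(samf)   # workers receive the model and fitting setup
    pool.update_parameters()     # push current boundaries and optional components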

run()

Runs the full process of adding jobs to the processing queue, listening to the results and updating SAMFire as needed. Stops when timed out or no pixels are left to run.
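As an illustration of the behaviour described above (not the actual implementation), the effect is roughly that of a loop along these lines:

    while pool.need_pixels:
        pool.add_jobs()         # queue up to need_pixels new jobs
        pool.collect_results()  # parse whatever the workers have returned
        pool.sleep()            # wait one timestep before checking again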

stop()

Stops the appropriate pool and (if ipyparallel) clears the memory and history.

update_parameters()

Updates various worker parameters.

Currently updates:
  • Optional components (that can be switched off by the worker)

  • Parameter boundaries

  • Goodness test