hyperspy.api.samfire#
SamfirePool: Creates and manages a pool of SAMFire workers.
SAMFire modules
The samfire module contains the following submodules:
- fit_tests
Tests to check fit convergence when running SAMFire
- global_strategies
Available global strategies to use in SAMFire
- local_strategies
Available local strategies to use in SAMFire
- SamfirePool
The parallel pool, customized to run SAMFire.
- class hyperspy.api.samfire.SamfirePool(**kwargs)#
Bases:
ParallelPool
Creates and manages a pool of SAMFire workers. Based on ParallelPool, it either creates processes using multiprocessing, or connects to and sets up an ipyparallel load_balanced_view.
Ipyparallel is managed directly, but a multiprocessing pool is managed via three kinds of queues (sketched below):
A job queue, shared by all (master and workers), for distributing “load-balanced” work.
A results queue, shared by all (master and workers), for sending results back to the master.
Individual queues from the master to each worker, for setting up and addressing individual workers in general. Workers check these with higher priority.
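A minimal sketch of this three-queue topology using standard multiprocessing primitives; the names are illustrative and do not mirror SamfirePool's actual internals:

```python
# Illustrative sketch of the queue topology described above.
import multiprocessing

manager = multiprocessing.Manager()
job_queue = manager.Queue()     # shared: master -> any worker ("load-balanced" work)
result_queue = manager.Queue()  # shared: workers -> master (results)
num_workers = 4
worker_queues = [manager.Queue() for _ in range(num_workers)]  # master -> one worker each

# A worker would check its individual queue first (higher priority),
# then fall back to the shared job queue.
```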
- Attributes:
- has_pool
bool
Returns True if the pool is ready and set up, else False.
- pool
ipyparallel.LoadBalancedView or multiprocessing.pool.Pool
The pool object.
- ipython_kwargs
dict
The dictionary with ipyparallel connection arguments.
- timeout
float
Timeout for either pool when waiting for results.
- num_workers
int
The number of workers actually created (may be fewer than requested, but never more).
- timestep
float
The time between “ticks” at which the result queues are checked. A higher timestep means less frequent checking, which may reduce CPU load for difficult fits that take a long time to finish.
- ping
dict
If recorded, stores the one-way trip time of each worker.
- pid
dict
If available, stores the process id of each worker.
Creates a ParallelPool with additional methods for SAMFire. All arguments are passed to ParallelPool.
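A minimal construction sketch, assuming num_workers is one of the ParallelPool arguments being forwarded:

```python
import hyperspy.api as hs

# Assumption: num_workers is a ParallelPool argument; all kwargs are forwarded.
pool = hs.samfire.SamfirePool(num_workers=4)
print(pool.has_pool)     # True once the pool is ready and set up
print(pool.num_workers)  # may be fewer than requested, never more
```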
- add_jobs(needed_number=None)#
Adds jobs to the job queue that is consumed by the workers.
- Parameters:
- needed_number: {None, int}
The number of jobs to add. If None (the default), adds need_pixels jobs.
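For example (a sketch, continuing with a pool that has already been prepared with a SAMFire object; see prepare_workers below):

```python
pool.add_jobs()                 # default: adds pool.need_pixels jobs
pool.add_jobs(needed_number=2)  # or add an explicit number of jobs
```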
- collect_results(timeout=None)#
Collects and parses all results currently sitting in the queue, i.e. those not yet processed.
- Parameters:
- timeout: {None, float}
The time to wait when collecting results. If None, the default timeout is used.
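For example (a sketch, reusing the prepared pool from above):

```python
pool.collect_results()             # wait up to the default pool.timeout
pool.collect_results(timeout=0.5)  # or pass an explicit timeout in seconds
```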
- property need_pixels#
Returns the number of pixels that should be added to the processing queue. It is at most equal to the number of workers.
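For example, a sketch of topping up the queue only when the workers can take more work:

```python
if pool.need_pixels:
    pool.add_jobs(needed_number=pool.need_pixels)
```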
- parse(value)#
Parse the value returned from the workers.
- Parameters:
- value: tuple of the form (keyword, the_rest)
The keyword can currently be one of [‘pong’, ‘Error’, ‘result’]. For each keyword, “the_rest” is a tuple of different elements, but in general its first element is always the worker_id that the result came from. In particular:
(‘pong’, (worker_id, pid, pong_time, optional_message_str))
(‘Error’, (worker_id, error_message_string))
(‘result’, (worker_id, pixel_index, result_dict, bool_if_result_converged))
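A standalone sketch of these three message shapes; all field values are made up for illustration:

```python
# Hand-built examples of the three message forms; values are hypothetical.
pong = ('pong', (0, 12345, 0.002, 'pong'))
error = ('Error', (1, 'fit failed'))
result = ('result', (2, 7, {'some': 'fit results'}, True))

for value in (pong, error, result):
    keyword, the_rest = value
    worker_id = the_rest[0]  # the first element is always the worker_id
    print(keyword, worker_id)
```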
- ping_workers(timeout=None)#
Pings the workers and records the one-way trip time and, if available, the process id (pid) of each worker.
- Parameters:
- timeout: {None, float}
The time to wait when collecting results after sending out the ping. If None, the default timeout is used.
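For example (a sketch):

```python
pool.ping_workers(timeout=2.0)
print(pool.ping)  # {worker_id: one-way trip time, ...}
print(pool.pid)   # {worker_id: process id, ...}, if available
```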
- prepare_workers(samfire)#
Given a SAMFire object, populates the workers with the required information. In the case of multiprocessing, starts the workers listening to the queues.
- Parameters:
- samfire
Samfire
The SAMFire object that will be using the pool.
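A setup sketch. The create_samfire() call and its keyword arguments are an assumption about how a Samfire object is typically obtained; check the model documentation for your HyperSpy version:

```python
import numpy as np
import hyperspy.api as hs

# Toy multidimensional signal and model to fit.
s = hs.signals.Signal1D(np.random.random((4, 4, 100)))
m = s.create_model()

# Assumption: create_samfire() is the usual way to obtain a Samfire object,
# and setup=False prevents it from creating its own pool.
samfire = m.create_samfire(workers=2, setup=False)

pool = hs.samfire.SamfirePool(num_workers=2)
pool.prepare_workers(samfire)  # push model/strategy info; start listeners
```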
- run()#
Runs the full procedure: adds jobs to the processing queue, listens for results, and updates SAMFire as needed. Stops when timed out or when no pixels are left to run.
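The typical lifecycle, as a sketch continuing from the setup above:

```python
pool.prepare_workers(samfire)
pool.update_parameters()  # make sure workers have the current settings
pool.run()                # queue jobs, listen, update SAMFire until done
pool.stop()               # shut the pool down when finished
```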
- stop()#
Stops the appropriate pool and (if ipyparallel) clears the memory and history.
- update_parameters()#
Updates various worker parameters.
- Currently updates:
Optional components (that can be switched off by the worker)
Parameter boundaries
Goodness test
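For example (a sketch; the component and parameter names are hypothetical):

```python
# Hypothetical: tighten a parameter boundary on the master-side model,
# then push the new settings out to the workers.
m[0].A.bmin = 0.0
pool.update_parameters()
```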