pyecsca.misc.utils module

Just some utilities, I promise.

pexec(s)[source]

Parse with exec.

peval(s)[source]

Parse with eval.
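
A minimal sketch of how these two might be called, assuming both simply return the parsed object for the given string (the exact return type is not documented here):

    from pyecsca.misc.utils import pexec, peval

    # Parsed in "exec" mode: statements are allowed.
    stmt = pexec("x = 1 + 1")
    # Parsed in "eval" mode: a single expression.
    expr = peval("1 + 1")
    print(stmt, expr)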

in_notebook()[source]

Test whether we are executing in Jupyter notebook.

Return type:

bool
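
For example, a caller can branch on the result to pick notebook-friendly output (the branch bodies below are placeholders):

    from pyecsca.misc.utils import in_notebook

    if in_notebook():
        print("Running inside a Jupyter notebook.")
    else:
        print("Running in a plain Python interpreter or script.")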

log(*args, **kwargs)[source]

Log a message.

warn(*args, **kwargs)[source]

Log a warning message.
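
A sketch of how both might be called, assuming they accept print-style positional arguments and forward them to the underlying logging mechanism (the messages are made up for illustration):

    from pyecsca.misc.utils import log, warn

    log("Simulated", 1000, "traces")
    warn("Falling back to the default settings")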

silent()[source]

Temporarily disable output.
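
Given the description, silent() is presumably used as a context manager; a sketch under that assumption:

    from pyecsca.misc.utils import silent, log

    # Assumption: output produced inside the with-block is suppressed.
    with silent():
        log("This message should not be shown.")
    log("Output is enabled again here.")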

class TaskExecutor(*args, **kwargs)[source]

Bases: ProcessPoolExecutor

A simple ProcessPoolExecutor that keeps track of the tasks submitted to it.

Initializes a new ProcessPoolExecutor instance.

Args:
max_workers: The maximum number of processes that can be used to execute the given calls. If None or not given, then as many worker processes will be created as the machine has processors.

mp_context: A multiprocessing context to launch the workers. This object should provide SimpleQueue, Queue and Process. Useful to allow specific multiprocessing start methods.

initializer: A callable used to initialize worker processes.

initargs: A tuple of arguments to pass to the initializer.

max_tasks_per_child: The maximum number of tasks a worker process can complete before it will exit and be replaced with a fresh worker process. The default of None means worker processes will live as long as the executor. Requires a non-‘fork’ mp_context start method. When given, we default to using ‘spawn’ if no mp_context is supplied.
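
Since the constructor is inherited unchanged from ProcessPoolExecutor, creating an executor looks like this (max_workers=2 is an arbitrary illustrative choice):

    from pyecsca.misc.utils import TaskExecutor

    if __name__ == "__main__":
        # Used as a context manager so the pool is shut down on exit.
        with TaskExecutor(max_workers=2) as executor:
            pass  # submit_task(...) calls would go here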

keys: List[Any][source]

A list of keys that identify the futures.

map(fn, *iterables, timeout=None, chunksize=1)[source]

Returns an iterator equivalent to map(fn, *iterables).

Args:
fn: A callable that will take as many arguments as there are passed iterables.

timeout: The maximum number of seconds to wait. If None, then there is no limit on the wait time.

chunksize: If greater than one, the iterables will be chopped into chunks of size chunksize and submitted to the process pool. If set to one, the items will be sent one at a time.

Returns:

An iterator equivalent to map(fn, *iterables), but the calls may be evaluated out-of-order.

Raises:
TimeoutError: If the entire result iterator could not be generated before the given timeout.

Exception: If fn(*args) raises for any values.
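
Since map is inherited from ProcessPoolExecutor, standard usage applies; a short example (the square function is illustrative and must be module-level so it can be pickled for the workers):

    from pyecsca.misc.utils import TaskExecutor

    def square(x):
        return x * x

    if __name__ == "__main__":
        with TaskExecutor(max_workers=2) as executor:
            # Behaves like the built-in map, but runs in worker processes.
            print(list(executor.map(square, range(5), chunksize=2)))
            # -> [0, 1, 4, 9, 16]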

shutdown(wait=True, *, cancel_futures=False)[source]

Clean-up the resources associated with the Executor.

It is safe to call this method several times. Otherwise, no other methods can be called after this one.

Args:
wait: If True then shutdown will not return until all running futures have finished executing and the resources used by the executor have been reclaimed.

cancel_futures: If True then shutdown will cancel all pending futures. Futures that are completed or running will not be cancelled.
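
A short example of an explicit shutdown (the work function and its argument are illustrative):

    from pyecsca.misc.utils import TaskExecutor

    def work(x):
        return x + 1

    if __name__ == "__main__":
        executor = TaskExecutor(max_workers=2)
        future = executor.submit(work, 41)
        # Block until running work finishes; pending futures are kept
        # because cancel_futures is False.
        executor.shutdown(wait=True, cancel_futures=False)
        print(future.result())  # 42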

submit(fn, /, *args, **kwargs)[source]

Submits a callable to be executed with the given arguments.

Schedules the callable to be executed as fn(*args, **kwargs) and returns a Future instance representing the execution of the callable.

Returns:

A Future representing the given call.
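
Standard Future handling applies; a small example (the add function is illustrative):

    from pyecsca.misc.utils import TaskExecutor

    def add(a, b):
        return a + b

    if __name__ == "__main__":
        with TaskExecutor(max_workers=2) as executor:
            future = executor.submit(add, 2, 3)
            # The Future resolves once a worker has executed add(2, 3).
            print(future.result())  # 5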

futures: List[Future][source]

A list of futures submitted to the executor.

submit_task(key, fn, /, *args, **kwargs)[source]

Submit a task (the callable fn), identified by key, to be executed with the given args and kwargs.

property tasks[source]

A list of tasks that were submitted to this executor.

as_completed()[source]

Like concurrent.futures.as_completed, but yields (key, future) pairs.

Return type:

Generator[tuple[Any, Future], Any, None]
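
Putting the pieces together, a sketch of the key-tracking workflow based on the methods documented above (the double function and integer keys are illustrative):

    from pyecsca.misc.utils import TaskExecutor

    def double(x):
        return 2 * x

    if __name__ == "__main__":
        with TaskExecutor(max_workers=2) as executor:
            for i in range(4):
                # The key (here just the integer i) identifies the task later.
                executor.submit_task(i, double, i)
            # as_completed yields (key, future) pairs as results arrive,
            # not necessarily in submission order.
            for key, future in executor.as_completed():
                print(key, future.result())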