neuralmonkey.runners package
Submodules
neuralmonkey.runners.base_runner module
class neuralmonkey.runners.base_runner.BaseRunner(output_series: str, decoder) → None
    Bases: object

    decoder_data_id

    get_executable(compute_losses=False, summaries=True) → neuralmonkey.runners.base_runner.Executable

    loss_names
class neuralmonkey.runners.base_runner.Executable
    Bases: object

    collect_results(results: typing.List[typing.Dict]) → None

    next_to_execute() → typing.Tuple[typing.List[typing.Any], typing.Union[typing.Dict, typing.List], typing.Dict[tensorflow.python.framework.ops.Tensor, typing.Union[int, float, numpy.ndarray]]]
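The Executable interface implies a simple driver loop: repeatedly ask the executable what to fetch, run every session, and hand the per-session results back via collect_results until a final result is available. The following is a minimal, self-contained sketch of that protocol; the names ToyExecutable and run_executable are illustrative, not part of neuralmonkey, and plain callables stand in for TensorFlow sessions.

```python
class ToyExecutable:
    """Stands in for an Executable that finishes after one step."""

    def __init__(self):
        self.result = None

    def next_to_execute(self):
        # (feedable coders, fetches to evaluate, additional feed_dict)
        return [], {"answer": 42}, {}

    def collect_results(self, results):
        # `results` holds one dict per session, which is what enables
        # ensembling over several model instances
        self.result = results[0]["answer"]


def run_executable(executable, sessions):
    """Drive an executable to completion; `sessions` are callables here."""
    while executable.result is None:
        _, fetches, _ = executable.next_to_execute()
        results = [sess(fetches) for sess in sessions]
        executable.collect_results(results)
    return executable.result
```

A trivial "session" that simply evaluates the fetches, e.g. `run_executable(ToyExecutable(), [lambda fetches: dict(fetches)])`, drives the loop once and returns 42.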
class neuralmonkey.runners.base_runner.ExecutionResult(outputs, losses, scalar_summaries, histogram_summaries, image_summaries)
    Bases: tuple

    outputs
        Alias for field number 0

    losses
        Alias for field number 1

    scalar_summaries
        Alias for field number 2

    histogram_summaries
        Alias for field number 3

    image_summaries
        Alias for field number 4
neuralmonkey.runners.base_runner.collect_encoders(coder)
    Collect recursively all encoders and decoders.
neuralmonkey.runners.base_runner.reduce_execution_results(execution_results: typing.List[neuralmonkey.runners.base_runner.ExecutionResult]) → neuralmonkey.runners.base_runner.ExecutionResult
    Aggregate execution results into one.
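A plausible standalone version of this aggregation is sketched below, using a plain namedtuple with the field order documented above (outputs=0, losses=1, scalar=2, histogram=3, image=4): batched outputs are concatenated and each loss is averaged across the partial results. The exact aggregation in neuralmonkey may differ; this is an assumption for illustration.

```python
from collections import namedtuple

# stand-in for neuralmonkey.runners.base_runner.ExecutionResult
ExecutionResult = namedtuple(
    "ExecutionResult",
    ["outputs", "losses", "scalar_summaries",
     "histogram_summaries", "image_summaries"])


def reduce_execution_results(results):
    """Concatenate outputs and average losses over partial results."""
    outputs = []
    for res in results:
        outputs.extend(res.outputs)
    # average each named loss position across the partial results
    losses = [sum(values) / len(results)
              for values in zip(*(res.losses for res in results))]
    first = results[0]
    return ExecutionResult(outputs, losses,
                           first.scalar_summaries,
                           first.histogram_summaries,
                           first.image_summaries)
```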
neuralmonkey.runners.label_runner module
class neuralmonkey.runners.label_runner.LabelRunExecutable(all_coders, fetches, vocabulary, postprocess)
    Bases: neuralmonkey.runners.base_runner.Executable

    collect_results(results: typing.List[typing.Dict]) → None

    next_to_execute() → typing.Tuple[typing.List[typing.Any], typing.Union[typing.Dict, typing.List], typing.Dict[tensorflow.python.framework.ops.Tensor, typing.Union[int, float, numpy.ndarray]]]
        Get the feedables and tensors to run.
class neuralmonkey.runners.label_runner.LabelRunner(output_series: str, decoder: typing.Any, postprocess: typing.Callable[[typing.List[str]], typing.List[str]] = None) → None
    Bases: neuralmonkey.runners.base_runner.BaseRunner

    get_executable(compute_losses=False, summaries=True)

    loss_names
neuralmonkey.runners.rnn_runner module
Running of a recurrent decoder.

This module aggregates what is necessary to run a recurrent decoder efficiently. Unlike the default runner, which assumes all outputs are independent of each other, this one makes no such assumption. It implements model ensembling and beam search.

The TensorFlow session is invoked separately for every single output of the decoder, which makes it possible to ensemble the distributions from all sessions and prune the beam before the next output is emitted.
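The per-step ensembling and pruning described above can be sketched as follows. This is a simplified illustration, not neuralmonkey code: `sessions` are plain callables that return a next-token log-probability list for a hypothesis, and the log-probabilities are averaged directly for simplicity (the actual runner may average the output distributions themselves before taking the logarithm).

```python
def ensemble_step(hypotheses, sessions, beam_size):
    """Expand each (tokens, logprob) hypothesis by one step and prune.

    Each "session" is a callable mapping a token list to a list of
    next-token log-probabilities -- a stand-in for a TF session call.
    """
    expanded = []
    for tokens, logprob in hypotheses:
        # aggregate the next-token scores over the ensemble
        dists = [sess(tokens) for sess in sessions]
        avg = [sum(col) / len(dists) for col in zip(*dists)]
        for token, token_logprob in enumerate(avg):
            expanded.append((tokens + [token], logprob + token_logprob))
    # prune the beam: keep only the beam_size best-scoring hypotheses
    expanded.sort(key=lambda hyp: hyp[1], reverse=True)
    return expanded[:beam_size]
```

Calling this once per decoder step reproduces the shape of the loop: ensemble, expand, prune, emit, repeat.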
class neuralmonkey.runners.rnn_runner.BeamBatch(decoded, logprobs)
    Bases: tuple

    decoded
        Alias for field number 0

    logprobs
        Alias for field number 1
class neuralmonkey.runners.rnn_runner.ExpandedBeamBatch(beam_batch, next_logprobs)
    Bases: tuple

    beam_batch
        Alias for field number 0

    next_logprobs
        Alias for field number 1
class neuralmonkey.runners.rnn_runner.RuntimeRnnExecutable(all_coders, decoder, initial_fetches, vocabulary, beam_scoring_f, postprocess, beam_size=1, compute_loss=True)
    Bases: neuralmonkey.runners.base_runner.Executable

    Run and ensemble the RNN decoder step by step.

    collect_results(results: typing.List[typing.Dict]) → None
        Process what the TF session returned.
        Only a single time step is processed at a time. First, the distributions from all sessions are aggregated.

    next_to_execute() → typing.Tuple[typing.List[typing.Any], typing.Union[typing.Dict, typing.List], typing.Dict[tensorflow.python.framework.ops.Tensor, typing.Union[int, float, numpy.ndarray]]]
        Get the feedables and tensors to run.
        It takes the beam batch that should be expanded next and prepares an additional feed_dict based on the hypothesis histories.
class neuralmonkey.runners.rnn_runner.RuntimeRnnRunner(output_series: str, decoder, beam_size: int = 1, beam_scoring_f=<function likelihood_beam_score>, postprocess: typing.Callable[[typing.List[str]], typing.List[str]] = None) → None
    Bases: neuralmonkey.runners.base_runner.BaseRunner

    Prepare running the RNN decoder step by step.

    get_executable(compute_losses=False, summaries=True)

    loss_names
neuralmonkey.runners.rnn_runner.likelihood_beam_score(decoded, logprobs)
    Score the beam by normalized probability.
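One common reading of "normalized probability" is the length-normalized log-likelihood: the sum of per-token log-probabilities divided by the hypothesis length, so longer outputs are not penalized merely for having more factors. The sketch below assumes that interpretation; the exact normalization used in neuralmonkey may differ.

```python
def likelihood_beam_score(decoded, logprobs):
    """Length-normalized log-likelihood of a single hypothesis (a guess
    at the normalization; illustrative only)."""
    total = sum(logprobs)
    return total / max(len(decoded), 1)
```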
neuralmonkey.runners.rnn_runner.n_best(n: int, expanded: typing.List[neuralmonkey.runners.rnn_runner.ExpandedBeamBatch], scoring_function) → typing.List[neuralmonkey.runners.rnn_runner.BeamBatch]
    Take the n best expanded beam search hypotheses.

    To do the scoring, we need to “reshape” the hypotheses. Before the scoring, the hypotheses are split into beam batches by their position in the beam. To do the scoring, however, they need to be organized by instance. After the scoring, only n hypotheses are kept for each instance. These are again split by their position in the beam.

    Parameters:
        n – Beam size.
        expanded – List of batched expanded hypotheses.
        scoring_function – A scoring function.

    Returns:
        List of BeamBatches ready for new expansion.
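The reshaping described above can be sketched with simplified types, where `expanded[b][i]` is the b-th beam hypothesis for instance i (the real BeamBatch/ExpandedBeamBatch tuples carry more structure; this only illustrates the transpose-score-prune-transpose pattern):

```python
def n_best(n, expanded, scoring_function):
    """Regroup hypotheses per instance, keep the n best, regroup per beam."""
    num_instances = len(expanded[0])
    kept = []
    for i in range(num_instances):
        # gather all beam hypotheses belonging to instance i
        candidates = [beam[i] for beam in expanded]
        candidates.sort(key=scoring_function, reverse=True)
        kept.append(candidates[:n])
    # transpose back: one list per position in the beam
    return [[kept[i][b] for i in range(num_instances)]
            for b in range(n)]
```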
neuralmonkey.runners.runner module
class neuralmonkey.runners.runner.GreedyRunExecutable(all_coders, fetches, vocabulary, postprocess)
    Bases: neuralmonkey.runners.base_runner.Executable

    collect_results(results: typing.List[typing.Dict]) → None

    next_to_execute() → typing.Tuple[typing.List[typing.Any], typing.Union[typing.Dict, typing.List], typing.Dict[tensorflow.python.framework.ops.Tensor, typing.Union[int, float, numpy.ndarray]]]
        Get the feedables and tensors to run.
class neuralmonkey.runners.runner.GreedyRunner(output_series: str, decoder: typing.Any, postprocess: typing.Callable[[typing.List[str]], typing.List[str]] = None) → None
    Bases: neuralmonkey.runners.base_runner.BaseRunner

    get_executable(compute_losses=False, summaries=True)

    loss_names
neuralmonkey.runners.word_alignment_runner module
class neuralmonkey.runners.word_alignment_runner.WordAlignmentRunner(output_series: str, encoder: neuralmonkey.model.model_part.ModelPart, decoder: neuralmonkey.decoders.decoder.Decoder) → None
    Bases: neuralmonkey.runners.base_runner.BaseRunner

    get_executable(compute_losses=False, summaries=True)

    loss_names
class neuralmonkey.runners.word_alignment_runner.WordAlignmentRunnerExecutable(all_coders, fetches)
    Bases: neuralmonkey.runners.base_runner.Executable

    collect_results(results: typing.List[typing.Dict]) → None

    next_to_execute() → typing.Tuple[typing.List[typing.Any], typing.Union[typing.Dict, typing.List], typing.Dict[tensorflow.python.framework.ops.Tensor, typing.Union[int, float, numpy.ndarray]]]
        Get the feedables and tensors to run.