neuralmonkey.runners package

Submodules

neuralmonkey.runners.base_runner module

class neuralmonkey.runners.base_runner.BaseRunner(output_series: str, decoder) → None

Bases: object

decoder_data_id
get_executable(compute_losses=False, summaries=True) → neuralmonkey.runners.base_runner.Executable
loss_names
class neuralmonkey.runners.base_runner.Executable

Bases: object

collect_results(results: typing.List[typing.Dict]) → None
next_to_execute() → typing.Tuple[typing.List[typing.Any], typing.Union[typing.Dict, typing.List], typing.Dict[tensorflow.python.framework.ops.Tensor, typing.Union[int, float, numpy.ndarray]]]
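
The Executable interface is the unit of work the execution loop repeatedly queries for fetches and then feeds with session results. Below is a minimal, hedged sketch of a hypothetical subclass; the reading of the three-tuple returned by next_to_execute as (feedables, fetches, additional feed dict) follows the type annotations above, and all names other than the two overridden methods are illustrative, not part of the API.

    # Hedged sketch of a custom Executable (illustrative names only).
    import tensorflow as tf
    from neuralmonkey.runners.base_runner import Executable, ExecutionResult


    class ConstantExecutable(Executable):
        """Fetch a single tensor and store its values as the output."""

        def __init__(self, all_coders, fetched_tensor: tf.Tensor) -> None:
            self._all_coders = all_coders
            self._fetched_tensor = fetched_tensor
            self.result = None  # filled in by collect_results

        def next_to_execute(self):
            # (feedables, fetches, additional feed dict), as suggested by
            # the type annotation of Executable.next_to_execute.
            return self._all_coders, {"value": self._fetched_tensor}, {}

        def collect_results(self, results) -> None:
            # `results` holds one dict of fetched values per session.
            outputs = [res["value"] for res in results]
            self.result = ExecutionResult(
                outputs=outputs, losses=[],
                scalar_summaries=None, histogram_summaries=None,
                image_summaries=None)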
class neuralmonkey.runners.base_runner.ExecutionResult(outputs, losses, scalar_summaries, histogram_summaries, image_summaries)

Bases: tuple

histogram_summaries

Alias for field number 3

image_summaries

Alias for field number 4

losses

Alias for field number 1

outputs

Alias for field number 0

scalar_summaries

Alias for field number 2

neuralmonkey.runners.base_runner.collect_encoders(coder)

Recursively collect all encoders and decoders.

neuralmonkey.runners.base_runner.reduce_execution_results(execution_results: typing.List[neuralmonkey.runners.base_runner.ExecutionResult]) → neuralmonkey.runners.base_runner.ExecutionResult

Aggregate execution results into one.
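
A small, hedged usage sketch of the helpers above, building ExecutionResult tuples by hand and merging them; the concrete values are made up for illustration.

    # Hedged usage sketch: merge per-batch ExecutionResults into one.
    from neuralmonkey.runners.base_runner import (
        ExecutionResult, reduce_execution_results)

    batch_results = [
        ExecutionResult(outputs=["hello"], losses=[1.2],
                        scalar_summaries=None, histogram_summaries=None,
                        image_summaries=None),
        ExecutionResult(outputs=["world"], losses=[0.8],
                        scalar_summaries=None, histogram_summaries=None,
                        image_summaries=None),
    ]

    merged = reduce_execution_results(batch_results)
    # `merged.outputs` is expected to contain the outputs of all batches.
    print(merged.outputs)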

neuralmonkey.runners.beamsearch_runner module

class neuralmonkey.runners.beamsearch_runner.BeamSearchExecutable(rank: int, all_encoders: typing.List[neuralmonkey.model.model_part.ModelPart], bs_outputs: typing.List[neuralmonkey.decoders.beam_search_decoder.SearchStepOutput], vocabulary: neuralmonkey.vocabulary.Vocabulary, postprocess: typing.Union[typing.Callable, NoneType]) → None

Bases: neuralmonkey.runners.base_runner.Executable

collect_results(results: typing.List[typing.Dict]) → None
next_to_execute() → typing.Tuple[typing.List[typing.Any], typing.Union[typing.Dict, typing.List], typing.Dict[tensorflow.python.framework.ops.Tensor, typing.Union[int, float, numpy.ndarray]]]
class neuralmonkey.runners.beamsearch_runner.BeamSearchRunner(output_series: str, decoder: neuralmonkey.decoders.beam_search_decoder.BeamSearchDecoder, rank: int = 1, postprocess: typing.Callable[[typing.List[str]], typing.List[str]] = None) → None

Bases: neuralmonkey.runners.base_runner.BaseRunner

decoder_data_id
get_executable(compute_losses: bool = False, summaries: bool = True) → neuralmonkey.runners.beamsearch_runner.BeamSearchExecutable
loss_names
neuralmonkey.runners.beamsearch_runner.beam_search_runner_range(output_series: str, decoder: neuralmonkey.decoders.beam_search_decoder.BeamSearchDecoder, max_rank: int = None, postprocess: typing.Callable[[typing.List[str]], typing.List[str]] = None) → typing.List[neuralmonkey.runners.beamsearch_runner.BeamSearchRunner]

Return a list of beam search runners for ranks from 1 to max_rank.

This means there are max_rank output series, where the n-th series contains the n-th best hypothesis from the beam search.

Parameters:
  • output_series – Prefix of output series.
  • decoder – Beam search decoder shared by all runners.
  • max_rank – Maximum rank of the hypotheses.
  • postprocess – Series-level postprocess applied on output.
Returns:

List of beam search runners getting hypotheses with rank from 1 to max_rank.
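
A hedged Python usage sketch of building the ranked runners; the decoder object is assumed to be an already-configured BeamSearchDecoder whose construction is omitted here, and the exact naming of the per-rank output series is left to the library.

    # Hedged sketch: one runner per rank, sharing a single decoder.
    from neuralmonkey.runners.beamsearch_runner import beam_search_runner_range

    runners = beam_search_runner_range(
        output_series="bs_target",    # prefix of the output series
        decoder=beam_search_decoder,  # assumed pre-built BeamSearchDecoder
        max_rank=5)

    # runners[0] yields the best hypotheses, runners[4] the 5th best.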

neuralmonkey.runners.label_runner module

class neuralmonkey.runners.label_runner.LabelRunExecutable(all_coders, fetches, vocabulary, postprocess)

Bases: neuralmonkey.runners.base_runner.Executable

collect_results(results: typing.List[typing.Dict]) → None
next_to_execute() → typing.Tuple[typing.List[typing.Any], typing.Union[typing.Dict, typing.List], typing.Dict[tensorflow.python.framework.ops.Tensor, typing.Union[int, float, numpy.ndarray]]]

Get the feedables and tensors to run.

class neuralmonkey.runners.label_runner.LabelRunner(output_series: str, decoder: typing.Any, postprocess: typing.Callable[[typing.List[str]], typing.List[str]] = None) → None

Bases: neuralmonkey.runners.base_runner.BaseRunner

get_executable(compute_losses=False, summaries=True)
loss_names

neuralmonkey.runners.logits_runner module

A runner outputting logits or a normalized distribution from a decoder.

class neuralmonkey.runners.logits_runner.LogitsExecutable(all_coders: typing.List[neuralmonkey.model.model_part.ModelPart], fetches: typing.Dict[tensorflow.python.framework.ops.Tensor, typing.Union[int, float, numpy.ndarray]], vocabulary: neuralmonkey.vocabulary.Vocabulary, normalize: bool = True, pick_index: int = None) → None

Bases: neuralmonkey.runners.base_runner.Executable

collect_results(results: typing.List[typing.Dict]) → None
next_to_execute() → typing.Tuple[typing.List[typing.Any], typing.Union[typing.Dict, typing.List], typing.Dict[tensorflow.python.framework.ops.Tensor, typing.Union[int, float, numpy.ndarray]]]

Get the feedables and tensors to run.

class neuralmonkey.runners.logits_runner.LogitsRunner(output_series: str, decoder: typing.Any, normalize: bool = True, pick_index: int = None, pick_value: str = None) → None

Bases: neuralmonkey.runners.base_runner.BaseRunner

A runner which takes the output from decoder.decoded_logits.

The logits / normalized probabilities are output as tab-separated string values. If the decoder produces a list of logits (as the recurrent decoder does), the tab-separated arrays are separated by commas. Alternatively, a single dimension of the distribution can be picked out (see the sketch after this class entry).

get_executable(compute_losses: bool = False, summaries: bool = True) → neuralmonkey.runners.logits_runner.LogitsExecutable
loss_names
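
The output format described above can be illustrated by the following standalone sketch, which formats a made-up list of per-step distributions the way the description suggests (tab-separated values within a step, commas between steps). This mirrors the described format only; it is not the runner's actual code.

    # Illustration of the described output format (not the runner's code).
    import numpy as np

    steps = [np.array([0.1, 0.7, 0.2]),   # step 1 distribution (made up)
             np.array([0.6, 0.3, 0.1])]   # step 2 distribution (made up)

    # Values within one distribution are tab-separated; successive
    # decoder steps are joined with commas.
    line = ",".join("\t".join(str(v) for v in step) for step in steps)
    print(line)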

neuralmonkey.runners.perplexity_runner module

class neuralmonkey.runners.perplexity_runner.PerplexityExecutable(all_coders: typing.List[neuralmonkey.model.model_part.ModelPart], xent_op: tensorflow.python.framework.ops.Tensor) → None

Bases: neuralmonkey.runners.base_runner.Executable

collect_results(results: typing.List[typing.Dict]) → None
next_to_execute() → typing.Tuple[typing.List[typing.Any], typing.Union[typing.Dict, typing.List], typing.Dict[tensorflow.python.framework.ops.Tensor, typing.Union[int, float, numpy.ndarray]]]

Get the feedables and tensors to run.

class neuralmonkey.runners.perplexity_runner.PerplexityRunner(output_series: str, decoder: neuralmonkey.decoders.decoder.Decoder) → None

Bases: neuralmonkey.runners.base_runner.BaseRunner

get_executable(compute_losses=False, summaries=True) → neuralmonkey.runners.perplexity_runner.PerplexityExecutable
loss_names
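
As a hedged aside on what this runner reports: perplexity is conventionally the exponential of the cross-entropy, so fetched per-sentence cross-entropies (cf. the xent_op argument above) map to perplexities as in the sketch below; how the runner itself averages or reports these values is not specified here.

    # Hedged sketch of the conventional cross-entropy -> perplexity mapping.
    import numpy as np

    xent_per_sentence = np.array([2.1, 1.7, 2.4])   # made-up cross-entropies
    perplexities = np.exp(xent_per_sentence)        # per-sentence perplexity
    corpus_perplexity = np.exp(xent_per_sentence.mean())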

neuralmonkey.runners.plain_runner module

class neuralmonkey.runners.plain_runner.PlainExecutable(all_coders, fetches, vocabulary, postprocess) → None

Bases: neuralmonkey.runners.base_runner.Executable

collect_results(results: typing.List[typing.Dict]) → None
next_to_execute() → typing.Tuple[typing.List[typing.Any], typing.Union[typing.Dict, typing.List], typing.Dict[tensorflow.python.framework.ops.Tensor, typing.Union[int, float, numpy.ndarray]]]

Get the feedables and tensors to run.

class neuralmonkey.runners.plain_runner.PlainRunner(output_series: str, decoder: typing.Any, postprocess: typing.Callable[[typing.List[str]], typing.List[str]] = None) → None

Bases: neuralmonkey.runners.base_runner.BaseRunner

A runner which takes the output from decoder.decoded.

get_executable(compute_losses=False, summaries=True)
loss_names

neuralmonkey.runners.regression_runner module

class neuralmonkey.runners.regression_runner.RegressionRunExecutable(all_coders: typing.List[neuralmonkey.model.model_part.ModelPart], fetches: typing.Dict[str, tensorflow.python.framework.ops.Tensor], postprocess: typing.Callable[[float], float] = None) → None

Bases: neuralmonkey.runners.base_runner.Executable

collect_results(results: typing.List[typing.Dict]) → None
next_to_execute() → typing.Tuple[typing.List[typing.Any], typing.Union[typing.Dict, typing.List], typing.Dict[tensorflow.python.framework.ops.Tensor, typing.Union[int, float, numpy.ndarray]]]

Get the feedables and tensors to run.

class neuralmonkey.runners.regression_runner.RegressionRunner(output_series: str, decoder: neuralmonkey.decoders.sequence_regressor.SequenceRegressor, postprocess: typing.Callable[[float], float] = None) → None

Bases: neuralmonkey.runners.base_runner.BaseRunner

get_executable(compute_losses: bool = False, summaries=True) → neuralmonkey.runners.base_runner.Executable
loss_names

neuralmonkey.runners.representation_runner module

A runner that prints out the input representation from an encoder.

class neuralmonkey.runners.representation_runner.RepresentationExecutable(prev_coders: typing.List[neuralmonkey.model.model_part.ModelPart], encoded: tensorflow.python.framework.ops.Tensor, used_session: int) → None

Bases: neuralmonkey.runners.base_runner.Executable

collect_results(results: typing.List[typing.Dict]) → None
next_to_execute() → typing.Tuple[typing.List[typing.Any], typing.Union[typing.Dict, typing.List], typing.Dict[tensorflow.python.framework.ops.Tensor, typing.Union[int, float, numpy.ndarray]]]
class neuralmonkey.runners.representation_runner.RepresentationRunner(output_series: str, encoder: neuralmonkey.model.model_part.ModelPart, used_session: int = 0) → None

Bases: neuralmonkey.runners.base_runner.BaseRunner

Runner printing out representations from an encoder.

Using this runner is the way to get representations of the input (or other data) out of Neural Monkey.

get_executable(compute_losses=False, summaries=True) → neuralmonkey.runners.representation_runner.RepresentationExecutable
loss_names
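
A hedged usage sketch; the encoder object is assumed to be an already-built Neural Monkey encoder (a ModelPart), and its construction is omitted.

    # Hedged sketch: dump encoder representations into an output series.
    from neuralmonkey.runners.representation_runner import RepresentationRunner

    repr_runner = RepresentationRunner(
        output_series="encoded_sentences",  # name of the output series
        encoder=sentence_encoder,           # assumed pre-built encoder (ModelPart)
        used_session=0)                     # which session to take values from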

neuralmonkey.runners.rnn_runner module

Running a recurrent decoder.

This module aggregates what is necessary to run a recurrent decoder efficiently. Unlike the default runner, which assumes all outputs are independent of each other, this one makes no such assumption. It implements model ensembling and beam search.

The TensorFlow session is invoked separately for every single output of the decoder, which allows ensembling across sessions and beam pruning before the next output is emitted.
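
A hedged, library-independent sketch of the per-step idea described above: each session contributes a distribution over the next symbol, the per-session log-probabilities are averaged, and the beam is pruned before the next output is emitted. The function name and shapes here are illustrative, not this module's API.

    # Hedged illustration of one ensembled beam step (not this module's API).
    import numpy as np

    def ensemble_step(hyp_scores, session_logprobs, beam_size):
        """One beam step: average session log-probs, add hypothesis scores,
        keep the `beam_size` best (hypothesis, next-symbol) pairs.

        hyp_scores: shape (num_hypotheses,), cumulative log-probabilities.
        session_logprobs: list of arrays, one per session,
            each of shape (num_hypotheses, vocabulary_size).
        """
        avg = np.mean(session_logprobs, axis=0)      # ensemble the sessions
        total = hyp_scores[:, None] + avg            # cumulative scores
        flat = total.reshape(-1)
        best = np.argsort(flat)[::-1][:beam_size]    # prune the beam
        hyp_idx, symbols = np.unravel_index(best, total.shape)
        return hyp_idx, symbols, flat[best]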

class neuralmonkey.runners.rnn_runner.BeamBatch(decoded, logprobs)

Bases: tuple

decoded

Alias for field number 0

logprobs

Alias for field number 1

class neuralmonkey.runners.rnn_runner.ExpandedBeamBatch(beam_batch, next_logprobs)

Bases: tuple

beam_batch

Alias for field number 0

next_logprobs

Alias for field number 1

class neuralmonkey.runners.rnn_runner.RuntimeRnnExecutable(all_coders, decoder, initial_fetches, vocabulary, beam_scoring_f, postprocess, beam_size=1, compute_loss=True)

Bases: neuralmonkey.runners.base_runner.Executable

Run and ensemble the RNN decoder step by step.

collect_results(results: typing.List[typing.Dict]) → None

Process what the TF session returned.

Only a single time step is processed at a time. First, the distributions from all sessions are aggregated.

next_to_execute() → typing.Tuple[typing.List[typing.Any], typing.Union[typing.Dict, typing.List], typing.Dict[tensorflow.python.framework.ops.Tensor, typing.Union[int, float, numpy.ndarray]]]

Get the feedables and tensors to run.

It takes the beam batch that should be expanded next and prepares an additional feed_dict based on the history of the hypotheses.

class neuralmonkey.runners.rnn_runner.RuntimeRnnRunner(output_series: str, decoder, beam_size: int = 1, beam_scoring_f=<function likelihood_beam_score>, postprocess: typing.Callable[[typing.List[str]], typing.List[str]] = None) → None

Bases: neuralmonkey.runners.base_runner.BaseRunner

Prepare running the RNN decoder step by step.

get_executable(compute_losses=False, summaries=True)
loss_names
neuralmonkey.runners.rnn_runner.likelihood_beam_score(decoded, logprobs)

Score the beam by normalized probability.
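
A hedged sketch of one common reading of "normalized probability": the sum of the log-probabilities divided by the hypothesis length. Whether this matches the exact normalization used by likelihood_beam_score is not specified here.

    # Hedged sketch of length-normalized log-probability scoring.
    import numpy as np

    def length_normalized_score(logprobs):
        """Score a hypothesis by its average per-symbol log-probability."""
        return float(np.sum(logprobs) / len(logprobs))

    length_normalized_score([-0.1, -0.5, -0.3])   # approximately -0.3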

neuralmonkey.runners.rnn_runner.n_best(n: int, expanded: typing.List[neuralmonkey.runners.rnn_runner.ExpandedBeamBatch], scoring_function) → typing.List[neuralmonkey.runners.rnn_runner.BeamBatch]

Take n-best from expanded beam search hypotheses.

To do the scoring, we need to "reshape" the hypotheses. Before the scoring, the hypotheses are split into beam batches by their position in the beam. To do the scoring, however, they need to be organized by instance. After the scoring, only n hypotheses are kept for each instance. These are again split by their position in the beam (see the sketch after the parameter list below).

Parameters:
  • n – Beam size.
  • expanded – List of batched expanded hypotheses.
  • scoring_function – A function that assigns a score to each hypothesis (e.g. likelihood_beam_score).
Returns:

List of BeamBatches ready for new expansion.
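
A hedged illustration of the "reshaping" described above, with plain Python lists standing in for the BeamBatch / ExpandedBeamBatch structures: hypotheses stored beam-position-first are regrouped by instance, scored, truncated to the n best per instance, and split back by beam position.

    # Hedged illustration of the regrouping described for n_best
    # (plain lists stand in for BeamBatch / ExpandedBeamBatch).

    def n_best_illustration(n, hypotheses_by_beam_position, score):
        """hypotheses_by_beam_position[k][i] is the k-th ranked hypothesis
        of instance i; returns the same layout after rescoring.
        Assumes at least n expanded hypotheses per instance."""
        num_instances = len(hypotheses_by_beam_position[0])

        # 1. Regroup: collect all hypotheses of each instance together.
        by_instance = [[beam[i] for beam in hypotheses_by_beam_position]
                       for i in range(num_instances)]

        # 2. Score and keep only the n best hypotheses per instance.
        kept = [sorted(hyps, key=score, reverse=True)[:n]
                for hyps in by_instance]

        # 3. Split back by position in the beam.
        return [[kept[i][k] for i in range(num_instances)] for k in range(n)]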

neuralmonkey.runners.runner module

class neuralmonkey.runners.runner.GreedyRunExecutable(all_coders, fetches, vocabulary, postprocess) → None

Bases: neuralmonkey.runners.base_runner.Executable

collect_results(results: typing.List[typing.Dict]) → None
next_to_execute() → typing.Tuple[typing.List[typing.Any], typing.Union[typing.Dict, typing.List], typing.Dict[tensorflow.python.framework.ops.Tensor, typing.Union[int, float, numpy.ndarray]]]

Get the feedables and tensors to run.

class neuralmonkey.runners.runner.GreedyRunner(output_series: str, decoder: typing.Any, postprocess: typing.Callable[[typing.List[str]], typing.List[str]] = None) → None

Bases: neuralmonkey.runners.base_runner.BaseRunner

get_executable(compute_losses=False, summaries=True)
loss_names

neuralmonkey.runners.word_alignment_runner module

class neuralmonkey.runners.word_alignment_runner.WordAlignmentRunner(output_series: str, encoder: neuralmonkey.model.model_part.ModelPart, decoder: neuralmonkey.decoders.decoder.Decoder) → None

Bases: neuralmonkey.runners.base_runner.BaseRunner

get_executable(compute_losses=False, summaries=True)
loss_names
class neuralmonkey.runners.word_alignment_runner.WordAlignmentRunnerExecutable(all_coders, fetches)

Bases: neuralmonkey.runners.base_runner.Executable

collect_results(results: typing.List[typing.Dict]) → None
next_to_execute() → typing.Tuple[typing.List[typing.Any], typing.Union[typing.Dict, typing.List], typing.Dict[tensorflow.python.framework.ops.Tensor, typing.Union[int, float, numpy.ndarray]]]

Get the feedables and tensors to run.

Module contents