neuralmonkey.decoders package

Submodules

neuralmonkey.decoders.beam_search_decoder module

class neuralmonkey.decoders.beam_search_decoder.BeamSearchDecoder(name: str, parent_decoder: neuralmonkey.decoders.decoder.Decoder, beam_size: int, length_normalization: float, max_steps: int = None, save_checkpoint: str = None, load_checkpoint: str = None) → None

Bases: neuralmonkey.model.model_part.ModelPart

In-graph beam search for batch size 1.

The hypothesis scoring algorithm is taken from Wu et al. (2016), https://arxiv.org/pdf/1609.08144.pdf. Length normalization is the parameter alpha from equation 14 of that paper.
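For reference, the scoring divides a hypothesis' summed log-probability by the length penalty from that equation. A minimal pure-Python sketch (names are illustrative):

    def length_penalty(length, alpha):
        # Equation 14 of Wu et al. (2016): lp(Y) = (5 + |Y|)^alpha / 6^alpha.
        return ((5.0 + length) ** alpha) / (6.0 ** alpha)

    def hypothesis_score(logprob_sum, length, alpha):
        # Higher is better; alpha = 0.0 turns length normalization off.
        return logprob_sum / length_penalty(length, alpha)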

beam_size
feed_dict(dataset: neuralmonkey.dataset.Dataset, train: bool = False) → typing.Dict[tensorflow.python.framework.ops.Tensor, typing.Any]

Populate the feed dictionary for the decoder object.

Parameters:
  • dataset – The dataset to use for the decoder.
  • train – Boolean flag telling whether this is a training run.
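The returned dictionary maps placeholder tensors to values and is typically passed directly to a TensorFlow session; a minimal usage sketch (session and fetches are assumed to exist):

    fd = decoder.feed_dict(dataset, train=False)
    outputs = session.run(fetches, feed_dict=fd)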
step(att_objects: typing.List[neuralmonkey.decoding_function.BaseAttention], bs_state: neuralmonkey.decoders.beam_search_decoder.SearchState) → typing.Tuple[neuralmonkey.decoders.beam_search_decoder.SearchState, neuralmonkey.decoders.beam_search_decoder.SearchStepOutput]
vocabulary
class neuralmonkey.decoders.beam_search_decoder.SearchState(logprob_sum, lengths, finished, last_word_ids, last_state, last_attns)

Bases: tuple

finished

Alias for field number 2

last_attns

Alias for field number 5

last_state

Alias for field number 4

last_word_ids

Alias for field number 3

lengths

Alias for field number 1

logprob_sum

Alias for field number 0

class neuralmonkey.decoders.beam_search_decoder.SearchStepOutput(scores, parent_ids, token_ids)

Bases: tuple

parent_ids

Alias for field number 1

scores

Alias for field number 0

token_ids

Alias for field number 2
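Given the per-step outputs, the best hypothesis can be recovered by backtracking through parent_ids. A minimal pure-Python sketch, assuming outputs is a list of SearchStepOutput tuples whose fields hold per-hypothesis values for one step:

    def reconstruct_best(outputs):
        # Start from the highest-scoring hypothesis of the final step.
        scores = outputs[-1].scores
        best = max(range(len(scores)), key=lambda i: scores[i])
        tokens = []
        for step in reversed(outputs):
            tokens.append(step.token_ids[best])
            best = step.parent_ids[best]  # follow the pointer to the parent
        return list(reversed(tokens))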

neuralmonkey.decoders.ctc_decoder module

class neuralmonkey.decoders.ctc_decoder.CTCDecoder(name: str, encoder: typing.Any, vocabulary: neuralmonkey.vocabulary.Vocabulary, data_id: str, merge_repeated_targets: bool = False, merge_repeated_outputs: bool = True, beam_width: int = 1, save_checkpoint: typing.Union[str, NoneType] = None, load_checkpoint: typing.Union[str, NoneType] = None) → None

Bases: neuralmonkey.model.model_part.ModelPart

Connectionist Temporal Classification.

See tf.nn.ctc_loss, tf.nn.ctc_greedy_decoder etc.
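A minimal TensorFlow 1.x sketch of the ops this decoder builds on (shapes and the vocabulary size are illustrative, not this class's internals):

    import tensorflow as tf

    # logits: [max_time, batch, num_labels + 1]; the last label is the CTC blank.
    logits = tf.placeholder(tf.float32, [None, None, 51])
    targets = tf.sparse_placeholder(tf.int32)          # sparse target label ids
    seq_lengths = tf.placeholder(tf.int32, [None])     # input sequence lengths

    loss = tf.nn.ctc_loss(targets, logits, seq_lengths)
    decoded, log_probs = tf.nn.ctc_greedy_decoder(logits, seq_lengths,
                                                  merge_repeated=True)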

cost
decoded
feed_dict(dataset: neuralmonkey.dataset.Dataset, train: bool = False) → typing.Dict[tensorflow.python.framework.ops.Tensor, typing.Any]
input_lengths
logits
runtime_loss
train_loss
train_mode
train_targets

neuralmonkey.decoders.decoder module

class neuralmonkey.decoders.decoder.Decoder(encoders: typing.List[typing.Any], vocabulary: neuralmonkey.vocabulary.Vocabulary, data_id: str, name: str, max_output_len: int, dropout_keep_prob: float = 1.0, rnn_size: typing.Union[int, NoneType] = None, embedding_size: typing.Union[int, NoneType] = None, output_projection: typing.Union[typing.Callable[[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor, typing.List[tensorflow.python.framework.ops.Tensor]], tensorflow.python.framework.ops.Tensor], NoneType] = None, encoder_projection: typing.Union[typing.Callable[[tensorflow.python.framework.ops.Tensor, typing.Union[int, NoneType], typing.Union[typing.List[typing.Any], NoneType]], tensorflow.python.framework.ops.Tensor], NoneType] = None, use_attention: bool = False, embeddings_encoder: typing.Any = None, attention_on_input: bool = True, rnn_cell: str = 'GRU', conditional_gru: bool = False, save_checkpoint: typing.Union[str, NoneType] = None, load_checkpoint: typing.Union[str, NoneType] = None) → None

Bases: neuralmonkey.model.model_part.ModelPart

A class that manages the parts of the computation graph that are used for decoding.
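A construction sketch mirroring the signature above; the encoder and vocabulary objects are hypothetical names assumed to exist:

    decoder = Decoder(
        encoders=[encoder],         # any supported encoder instance
        vocabulary=vocabulary,      # a neuralmonkey.vocabulary.Vocabulary
        data_id="target",
        name="decoder",
        max_output_len=20,
        dropout_keep_prob=0.8,
        rnn_size=256,
        embedding_size=300,
        use_attention=True)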

embed_and_dropout(inputs: tensorflow.python.framework.ops.Tensor) → tensorflow.python.framework.ops.Tensor

Embed the input using the embedding matrix and apply dropout.

Parameters: inputs – The Tensor to be embedded and dropped out.
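Conceptually, this is an embedding lookup followed by dropout; a TensorFlow sketch of the idea (not the class's actual code):

    import tensorflow as tf

    def embed_and_dropout_sketch(inputs, embeddings, dropout_keep_prob, train_mode):
        embedded = tf.nn.embedding_lookup(embeddings, inputs)
        # Dropout is applied only when train_mode (a 0-D bool Tensor) is true.
        return tf.cond(train_mode,
                       lambda: tf.nn.dropout(embedded, dropout_keep_prob),
                       lambda: embedded)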
feed_dict(dataset: neuralmonkey.dataset.Dataset, train: bool = False) → typing.Dict[tensorflow.python.framework.ops.Tensor, typing.Any]

Populate the feed dictionary for the decoder object.

Parameters:
  • dataset – The dataset to use for the decoder.
  • train – Boolean flag telling whether this is a training run.
get_attention_object(encoder, train_mode: bool)
step(att_objects: typing.List[neuralmonkey.decoding_function.BaseAttention], input_: tensorflow.python.framework.ops.Tensor, prev_state: tensorflow.python.framework.ops.Tensor, prev_attns: typing.List[tensorflow.python.framework.ops.Tensor])

neuralmonkey.decoders.encoder_projection module

This module contains different variants of projecting the encoders into the initial state of the decoder.

neuralmonkey.decoders.encoder_projection.concat_encoder_projection(train_mode: tensorflow.python.framework.ops.Tensor, rnn_size: typing.Union[int, NoneType] = None, encoders: typing.Union[typing.List[typing.Any], NoneType] = None) → tensorflow.python.framework.ops.Tensor

Create the initial state by concatenating the encoders’ encoded values (see the sketch after the parameter list).

Parameters:
  • train_mode – tf 0-D bool Tensor specifying the training mode (not used)
  • rnn_size – The size of the resulting vector (not used)
  • encoders – The list of encoders
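A sketch of the idea, assuming each encoder exposes its fixed-size final vector as an attribute (here called encoded, which is an assumption):

    import tensorflow as tf

    def concat_projection_sketch(encoders):
        # Join the encoders' final vectors along the feature axis.
        return tf.concat([enc.encoded for enc in encoders], axis=1)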
neuralmonkey.decoders.encoder_projection.empty_initial_state(train_mode: tensorflow.python.framework.ops.Tensor, rnn_size: typing.Union[int, NoneType], encoders: typing.Union[typing.List[typing.Any], NoneType] = None) → tensorflow.python.framework.ops.Tensor

Return an empty vector.

Parameters:
  • train_mode – tf 0-D bool Tensor specifying the training mode (not used)
  • rnn_size – The size of the resulting vector
  • encoders – The list of encoders (not used)
neuralmonkey.decoders.encoder_projection.linear_encoder_projection(dropout_keep_prob: float) → typing.Callable[[tensorflow.python.framework.ops.Tensor, typing.Union[int, NoneType], typing.Union[typing.List[typing.Any], NoneType]], tensorflow.python.framework.ops.Tensor]

Return a projection function which applies dropout to the concatenated encoder final states and returns a linear projection to an rnn_size-sized tensor (sketched below).

Parameters: dropout_keep_prob – The dropout keep probability.
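A sketch of such a closure, using tf.layers.dense and tf.layers.dropout as stand-ins for the library's own layers (the encoded attribute is again an assumption):

    import tensorflow as tf

    def linear_projection_sketch(dropout_keep_prob):
        def projection(train_mode, rnn_size, encoders):
            state = tf.concat([enc.encoded for enc in encoders], axis=1)
            state = tf.layers.dropout(state, rate=1.0 - dropout_keep_prob,
                                      training=train_mode)
            return tf.layers.dense(state, rnn_size)
        return projection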

neuralmonkey.decoders.output_projection module

This module contains different variants of projection functions for RNN outputs.

neuralmonkey.decoders.output_projection.maxout_output(maxout_size)

Compute the RNN output from the previous state and output and from the context tensors returned by the attention mechanisms, as described in Bahdanau et al. (2015).

This function corresponds to the equations for computing t_tilde on page 14 of that paper, with the maxout projection applied before the last linear projection.

Parameters: maxout_size – The size of the hidden maxout layer in the deep output.
Returns: The maxout projection of the concatenated inputs (see the sketch below).
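Maxout itself can be sketched as a linear projection to pool_size copies of each unit followed by an element-wise maximum (Bahdanau et al. use a pool size of 2); illustrative code, not the library's implementation:

    import tensorflow as tf

    def maxout_sketch(inputs, maxout_size, pool_size=2):
        # Project to maxout_size * pool_size units, then take the max per pool.
        projected = tf.layers.dense(inputs, maxout_size * pool_size)
        pooled = tf.reshape(projected, [-1, maxout_size, pool_size])
        return tf.reduce_max(pooled, axis=2)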
neuralmonkey.decoders.output_projection.mlp_output(layer_sizes, dropout_keep_prob=None, train_mode: tensorflow.python.framework.ops.Tensor = None, activation=<function tanh>)

Compute the RNN deep output using a multilayer perceptron with a specified activation function (Pascanu et al., 2013 [https://arxiv.org/pdf/1312.6026v5.pdf]); see the sketch after the parameter list.

Parameters:
  • layer_sizes – A list of sizes of the hidden layers of the MLP
  • dropout_keep_prob – The dropout keep probability. (TODO: this is not going to work with the current configuration.)
  • train_mode – tf 0-D bool Tensor specifying the training mode
  • activation – The activation function to use in each layer.
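The returned projection conceptually stacks dense layers over the concatenated decoder state, previous output and attention contexts. A sketch under those assumptions:

    import tensorflow as tf

    def mlp_projection_sketch(layer_sizes, activation=tf.tanh):
        def projection(prev_state, prev_output, ctx_tensors):
            hidden = tf.concat([prev_state, prev_output] + ctx_tensors, axis=1)
            for size in layer_sizes:
                hidden = tf.layers.dense(hidden, size, activation=activation)
            return hidden
        return projection

(The sketch omits the dropout and train_mode handling that the real function takes as arguments.)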
neuralmonkey.decoders.output_projection.no_deep_output(prev_state, prev_output, ctx_tensors)

Compute the RNN output from the previous state and output and from the context tensors returned by the attention mechanisms.

This function corresponds to the equations for computing t_tilde on page 14 of the Bahdanau et al. (2015) paper, before the linear projection.

Parameters:
  • prev_state – Previous decoder RNN state. (Denoted s_i-1)
  • prev_output – Embedded output of the previous step. (y_i-1)
  • ctx_tensors – Context tensors computed by the attentions. (c_i)
Returns: The concatenation of all the inputs, as sketched below.
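In code, this behavior is a single concatenation; a one-line sketch:

    import tensorflow as tf

    def no_deep_output_sketch(prev_state, prev_output, ctx_tensors):
        # t_tilde is simply the concatenation of state, output and contexts.
        return tf.concat([prev_state, prev_output] + ctx_tensors, axis=1)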

neuralmonkey.decoders.sequence_classifier module

class neuralmonkey.decoders.sequence_classifier.SequenceClassifier(name: str, encoders: typing.List[typing.Any], vocabulary: neuralmonkey.vocabulary.Vocabulary, data_id: str, layers: typing.List[int], activation_fn: typing.Callable[[tensorflow.python.framework.ops.Tensor], tensorflow.python.framework.ops.Tensor] = <function relu>, dropout_keep_prob: float = 0.5, save_checkpoint: typing.Union[str, NoneType] = None, load_checkpoint: typing.Union[str, NoneType] = None) → None

Bases: neuralmonkey.model.model_part.ModelPart

A simple MLP classifier over encoders.

The API pretends it is an RNN decoder which always generates a sequence of length exactly one.
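A construction sketch mirroring the signature; the encoder and vocabulary objects are hypothetical:

    classifier = SequenceClassifier(
        name="classifier",
        encoders=[encoder],          # encoders providing fixed-size vectors
        vocabulary=label_vocabulary,
        data_id="labels",
        layers=[100, 100],           # two hidden layers of 100 units each
        dropout_keep_prob=0.5)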

cost
decoded
decoded_logits
decoded_seq
feed_dict(dataset: neuralmonkey.dataset.Dataset, train: bool = False) → typing.Dict[tensorflow.python.framework.ops.Tensor, typing.Any]
gt_inputs
loss_with_decoded_ins
loss_with_gt_ins
runtime_logprobs
runtime_loss
train_loss
train_mode

neuralmonkey.decoders.sequence_labeler module

class neuralmonkey.decoders.sequence_labeler.SequenceLabeler(name: str, encoder: neuralmonkey.encoders.sentence_encoder.SentenceEncoder, vocabulary: neuralmonkey.vocabulary.Vocabulary, data_id: str, dropout_keep_prob: float = 1.0, save_checkpoint: typing.Union[str, NoneType] = None, load_checkpoint: typing.Union[str, NoneType] = None) → None

Bases: neuralmonkey.model.model_part.ModelPart

Classifier assigning a label to each state of the encoder.

cost
decoded
feed_dict(dataset: neuralmonkey.dataset.Dataset, train: bool = False) → typing.Dict[tensorflow.python.framework.ops.Tensor, typing.Any]
logits
logprobs
runtime_loss
train_loss
train_mode
train_targets
train_weights

neuralmonkey.decoders.sequence_regressor module

class neuralmonkey.decoders.sequence_regressor.SequenceRegressor(name: str, encoders: typing.List[typing.Any], data_id: str, layers: typing.List[int] = None, activation_fn: typing.Callable[[tensorflow.python.framework.ops.Tensor], tensorflow.python.framework.ops.Tensor] = <function relu>, dropout_keep_prob: float = 1.0, dimension: int = 1, save_checkpoint: str = None, load_checkpoint: str = None) → None

Bases: neuralmonkey.model.model_part.ModelPart

A simple MLP regression over encoders.

The API pretends it is an RNN decoder which always generates a sequence of length exactly one.

cost
decoded
feed_dict(dataset: neuralmonkey.dataset.Dataset, train: bool = False) → typing.Dict[tensorflow.python.framework.ops.Tensor, typing.Any]
predictions
runtime_loss
train_inputs
train_loss
train_mode

neuralmonkey.decoders.word_alignment_decoder module

class neuralmonkey.decoders.word_alignment_decoder.WordAlignmentDecoder(encoder: neuralmonkey.encoders.sentence_encoder.SentenceEncoder, decoder: neuralmonkey.decoders.decoder.Decoder, data_id: str, name: str) → None

Bases: neuralmonkey.model.model_part.ModelPart

A decoder that computes soft alignment from an attentive encoder. Loss is computed as cross-entropy against a reference alignment.
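The loss can be sketched as the cross-entropy between the reference alignment matrix and the decoder's attention distributions; the tensor names and normalization here are illustrative assumptions:

    import tensorflow as tf

    # ref_alignment, attention: [batch, target_len, source_len]
    def alignment_xent_sketch(ref_alignment, attention):
        # Cross-entropy of attention against the reference distribution.
        return -tf.reduce_sum(ref_alignment * tf.log(attention + 1e-8))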

cost
feed_dict(dataset: neuralmonkey.dataset.Dataset, train: bool = False) → typing.Dict[tensorflow.python.framework.ops.Tensor, typing.Any]

Module contents