neuralmonkey.decoders.beam_search_decoder module¶
Beam search decoder.
This module implements the beam search algorithm for autoregressive decoders.
Like any autoregressive decoder, this decoder works dynamically: it uses the
tf.while_loop
function, conditioned on both the maximum output
length and the list of finished hypotheses.
The beam search decoder uses four data structures during the decoding process:
SearchState
, SearchResults
, BeamSearchLoopState
, and
BeamSearchOutput
. The purpose of each is described in its own docstring.
These structures help the decoder keep track of the decoding, enabling it to be called e.g. during ensembling, when the contents of the structures can be changed and then fed back to the model.
The implementation mimics the API of the AutoregressiveDecoder
class. There are functions that prepare and return values that are supplied to the
tf.while_loop
function.
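The control flow described above can be sketched in plain Python. This is a hypothetical stand-in for tf.while_loop, not the module's actual API; the function names here are illustrative:

```python
import numpy as np

def run_decoding_loop(initial_state, cond, body):
    # Mirrors tf.while_loop(cond, body, loop_vars): repeat the body
    # while the condition holds, threading the loop state through.
    state = initial_state
    while cond(state):
        state = body(state)
    return state

def make_cond(max_steps: int):
    def cond(state):
        step, finished = state
        # Stop on either condition: length limit reached, or every
        # hypothesis has produced its end symbol.
        return step < max_steps and not np.all(finished)
    return cond
```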

class
neuralmonkey.decoders.beam_search_decoder.
BeamSearchDecoder
(name: str, parent_decoder: neuralmonkey.decoders.autoregressive.AutoregressiveDecoder, beam_size: int, max_steps: int, length_normalization: float) → None¶ Bases:
neuralmonkey.model.model_part.ModelPart
Ingraph beam search decoder.
The hypothesis scoring algorithm is taken from https://arxiv.org/pdf/1609.08144.pdf. Length normalization is the parameter alpha from Equation 14.
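The scoring can be sketched as follows. This is an illustrative NumPy version of the GNMT length penalty (Eq. 14 of the linked paper), not the module's actual code:

```python
import numpy as np

def length_penalty(lengths: np.ndarray, alpha: float) -> np.ndarray:
    # GNMT length penalty: lp(Y) = (5 + |Y|)^alpha / (5 + 1)^alpha.
    return ((5.0 + lengths) ** alpha) / ((5.0 + 1.0) ** alpha)

def score_hypotheses(logprob_sums: np.ndarray,
                     lengths: np.ndarray,
                     alpha: float) -> np.ndarray:
    # Higher (less negative) scores are better; alpha = 0 disables
    # length normalization entirely.
    return logprob_sums / length_penalty(lengths, alpha)
```

With alpha = 0 the penalty is 1 for every length, so the score reduces to the raw log-probability sum; larger alpha increasingly favours longer hypotheses.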

__init__
(name: str, parent_decoder: neuralmonkey.decoders.autoregressive.AutoregressiveDecoder, beam_size: int, max_steps: int, length_normalization: float) → None¶ Construct the beam search decoder graph.
Parameters:  name – The name for the model part.
 parent_decoder – An autoregressive decoder from which to sample.
 beam_size – The number of hypotheses in the beam.
 max_steps – The maximum number of time steps to perform.
 length_normalization – The alpha parameter from Eq. 14 in the paper.

decoder_state
¶

decoding_loop
() → neuralmonkey.decoders.beam_search_decoder.BeamSearchOutput¶ Create the decoding loop.
This function mimics the behavior of the
decoding_loop
method of the AutoregressiveDecoder
, except that the initial loop state is created outside this method because it is accessed and fed during ensembling.
TODO: The
finalize_loop
method and the handling of attention loop states might be implemented in the future.
Returns: A populated BeamSearchOutput
object.

expand_to_beam
(val: tensorflow.python.framework.ops.Tensor, dim: int = 0) → tensorflow.python.framework.ops.Tensor¶ Copy a tensor along a new beam dimension.
Parameters:  val – The
Tensor
to expand.  dim – The dimension along which to expand. Usually, the batch axis.
Returns: The expanded tensor.
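A NumPy sketch of what such an expansion might look like. This is one plausible reading of the docstring; the actual TensorFlow implementation may lay out the beam axis differently:

```python
import numpy as np

def expand_to_beam(val: np.ndarray, beam_size: int, dim: int = 0) -> np.ndarray:
    # Insert a new beam axis right after `dim` and copy the tensor
    # beam_size times along it, e.g. (batch, d) -> (batch, beam, d).
    expanded = np.expand_dims(val, dim + 1)
    return np.repeat(expanded, beam_size, axis=dim + 1)
```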

get_body
() → Callable[[Any], neuralmonkey.decoders.beam_search_decoder.BeamSearchLoopState]¶ Return a body function for
tf.while_loop
.
Returns: A function that performs a single decoding step.

get_initial_loop_state
() → neuralmonkey.decoders.beam_search_decoder.BeamSearchLoopState¶ Construct the initial loop state for the beam search decoder.
During the construction, the body function of the underlying decoder is called once to retrieve the initial log probabilities of the first token.
The values are initialized as follows:
search_state
logprob_sum
 For each sentence in the batch, the logprob sum of the first hypothesis in the beam is set to zero, while the others are set to negative infinity.
prev_logprobs
 This is the softmax over the logits from the initial decoder step.
lengths
 All zeros.
finished
 All false.
search_results
scores
 A (batch, beam)-sized tensor of zeros.
token_ids
 A (1, batch, beam)-sized tensor filled with indices of decoder-specific initial input symbols (usually start symbol IDs).
decoder_loop_state
 The loop state of the underlying autoregressive decoder, as returned from the initial call to the body function.
Returns: A populated BeamSearchLoopState
structure.
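The logprob_sum initialization described above can be sketched as follows (illustrative NumPy, not the module's code):

```python
import numpy as np

def initial_logprob_sum(batch_size: int, beam_size: int) -> np.ndarray:
    # Only hypothesis 0 in each beam starts with score 0; the rest start
    # at -inf, so the first top-k selection expands only hypothesis 0
    # instead of picking beam_size copies of the same best token.
    sums = np.full((batch_size, beam_size), -np.inf)
    sums[:, 0] = 0.0
    return sums
```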

loop_continue_criterion
(*args) → tensorflow.python.framework.ops.Tensor¶ Decide whether to break out of the while loop.
The criterion for stopping the loop is that either all hypotheses are finished or a maximum number of steps has been reached. Here the number of steps is the number of steps of the underlying decoder minus one, because this function is evaluated after the decoder step has been called and its step has been incremented. This is caused by the fact that we call the decoder body function at the end of the beam body function. (And that, in turn, is to support ensembling.)
Parameters: args – A BeamSearchLoopState
instance.
Returns: A scalar boolean Tensor
.
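The criterion might be sketched like this (a plain-Python/NumPy stand-in for the symbolic version; the - 1 offset reflects the decoder's step counter being incremented before this check runs):

```python
import numpy as np

def loop_continue(decoder_step: int, max_steps: int,
                  finished: np.ndarray) -> bool:
    # Continue while at least one hypothesis is unfinished and the
    # underlying decoder has taken fewer than max_steps steps. The
    # decoder's counter is one ahead of the beam loop, hence the -1.
    return (decoder_step - 1) < max_steps and not np.all(finished)
```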

search_results
¶

search_state
¶

vocabulary
¶


class
neuralmonkey.decoders.beam_search_decoder.
BeamSearchLoopState
¶ Bases:
neuralmonkey.decoders.beam_search_decoder.BeamSearchLoopState
The loop state of the beam search decoder.
A loop state object that is used for transferring data between iterations of the symbolic while loop. It groups together the
SearchState
and SearchResults
structures and also keeps track of the underlying decoder loop state.
search_state
¶ A
SearchState
object representing the current search state.

search_results
¶ The growing
SearchResults
object which accumulates the outputs of the decoding process.

decoder_loop_state
¶ The current loop state of the underlying autoregressive decoder.


class
neuralmonkey.decoders.beam_search_decoder.
BeamSearchOutput
¶ Bases:
neuralmonkey.decoders.beam_search_decoder.BeamSearchOutput
The final structure that is returned from the while loop.

last_search_step_output
¶ A populated
SearchResults
object.

last_dec_loop_state
¶ Final loop state of the underlying decoder.

last_search_state
¶ Final loop state of the beam search decoder.

attention_loop_states
¶ The final loop states of the attention objects.


class
neuralmonkey.decoders.beam_search_decoder.
SearchResults
¶ Bases:
neuralmonkey.decoders.beam_search_decoder.SearchResults
The intermediate results of the beam search decoding.
A cumulative structure that holds the actual decoded tokens and hypothesis scores (after applying a length penalty term).

scores
¶ A
(time, batch, beam)
-shaped tensor with the scores for each hypothesis. The score is computed from the logprob_sum
of a hypothesis, adjusted for the hypothesis length.

token_ids
¶ A
(time, batch, beam)
-shaped tensor with the vocabulary indices of the tokens in each hypothesis.


class
neuralmonkey.decoders.beam_search_decoder.
SearchState
¶ Bases:
neuralmonkey.decoders.beam_search_decoder.SearchState
Search state of a beam search decoder.
This structure keeps track of the current state of the beam search algorithm. The search state contains tensors that represent the hypotheses in the beam, namely their log probability, length, and distribution over the vocabulary when decoding the last word, as well as whether the hypothesis is finished.

logprob_sum
¶ A
(batch, beam)
-shaped tensor with the sums of the token log probabilities of each hypothesis.

prev_logprobs
¶ A
(batch, beam, vocabulary)
-sized tensor. Stores the log distribution over the vocabulary from the previous decoding step for each hypothesis.

lengths
¶ A
(batch, beam)
-shaped tensor with the lengths of the hypotheses.

finished
¶ A boolean tensor with shape
(batch, beam)
. Marks finished and unfinished hypotheses.
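Taken together, the fields above suggest a structure like the following. This is a hypothetical NamedTuple sketch for illustration; the Bases line above suggests the real class subclasses a namedtuple, but the sketch is not the module's code:

```python
from typing import NamedTuple
import numpy as np

class SearchState(NamedTuple):
    logprob_sum: np.ndarray    # (batch, beam), float
    prev_logprobs: np.ndarray  # (batch, beam, vocabulary), float
    lengths: np.ndarray        # (batch, beam), int
    finished: np.ndarray       # (batch, beam), bool
```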
