neuralmonkey.attention.stateful_context module
class neuralmonkey.attention.stateful_context.StatefulContext(name: str, encoder: neuralmonkey.model.stateful.Stateful, reuse: neuralmonkey.model.model_part.ModelPart = None, save_checkpoint: str = None, load_checkpoint: str = None, initializers: List[Tuple[str, Callable]] = None) → None

Bases: neuralmonkey.attention.base_attention.BaseAttention
Provides a Stateful encoder’s output as context to a decoder.
This is not really an attention mechanism, but rather a hack which (mis)uses the attention interface to provide a “static” context vector to the decoder cell. In other words, the context vector is the same for all positions in the sequence and doesn’t depend on the query vector.
To use this, simply pass an instance of this class to the decoder using the attentions parameter.
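The idea can be illustrated with a minimal, dependency-free sketch. The class and names below are hypothetical, not Neural Monkey's actual API; the sketch only demonstrates the key property described above: the context vector ignores the query entirely.

```python
class StaticContextSketch:
    """Illustrative sketch (not Neural Monkey's real implementation) of a
    'context provider' that misuses the attention interface: it returns
    the encoder's fixed output regardless of the decoder query."""

    def __init__(self, encoder_output):
        # encoder_output stands in for a Stateful encoder's output vector.
        self.encoder_output = encoder_output

    def attention(self, query):
        # A real attention mechanism would score encoder states against
        # the query; here the query is ignored, so every decoding
        # position receives the identical context vector.
        return self.encoder_output


encoder_out = [1.0, 2.0, 3.0]
ctx = StaticContextSketch(encoder_out)
# The context is the same no matter what the query is:
assert ctx.attention([0.5]) == ctx.attention([9.9]) == encoder_out
```

This is why the class calls itself a "hack": it satisfies the attention interface syntactically while computing no attention at all.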
__init__(name: str, encoder: neuralmonkey.model.stateful.Stateful, reuse: neuralmonkey.model.model_part.ModelPart = None, save_checkpoint: str = None, load_checkpoint: str = None, initializers: List[Tuple[str, Callable]] = None) → None

Create a new BaseAttention object.
attention(query: tensorflow.python.framework.ops.Tensor, decoder_prev_state: tensorflow.python.framework.ops.Tensor, decoder_input: tensorflow.python.framework.ops.Tensor, loop_state: neuralmonkey.attention.namedtuples.AttentionLoopState) → Tuple[tensorflow.python.framework.ops.Tensor, neuralmonkey.attention.namedtuples.AttentionLoopState]

Get the context vector for a given query.
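A hedged sketch of the contract this method follows: it returns a pair of (context vector, updated loop state). The function and the list-based "loop state" below are illustrative assumptions, not the actual TensorFlow implementation; they only show that the returned context is query-independent while the loop state still accumulates one (trivial) history entry per step.

```python
from typing import List, Tuple


def static_attention(query: List[float],
                     encoder_output: List[float],
                     loop_state: List[List[float]]
                     ) -> Tuple[List[float], List[List[float]]]:
    """Hypothetical sketch of the attention() contract for a static
    context: (context vector, updated loop state)."""
    # The context does not depend on the query.
    context = encoder_output
    # Append a dummy (uniform) weight entry, mirroring how an
    # AttentionLoopState accumulates per-step attention histories.
    new_loop_state = loop_state + [[1.0]]
    return context, new_loop_state


enc = [1.0, 2.0]
ctx1, state = static_attention([0.1], enc, [])
ctx2, state = static_attention([0.9], enc, state)
assert ctx1 == ctx2 == enc   # query-independent context
assert len(state) == 2       # one history entry per decoding step
```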
attention_mask

attention_states
context_vector_size

Return the static size of the context vector.

Returns: An integer specifying the context vector dimension.
finalize_loop(key: str, last_loop_state: neuralmonkey.attention.namedtuples.AttentionLoopState) → None

Store the attention histories from the loop state under a given key.

Parameters:
- key – The key under which to store the histories in the histories dictionary.
- last_loop_state – The loop state object from the last step of the decoding loop.
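Conceptually, this bookkeeping can be sketched as follows. The function and the dictionary-based loop state are hypothetical stand-ins, not Neural Monkey's actual code; they only show the history being moved from the final loop state into a keyed dictionary for later use (e.g. by visualize_attention).

```python
def finalize_loop_sketch(histories: dict, key: str,
                         last_loop_state: dict) -> None:
    """Hypothetical sketch: store the per-step attention history from
    the final loop state under the given key."""
    # "weights" stands in for AttentionLoopState's accumulated
    # per-step attention weights.
    histories[key] = last_loop_state["weights"]


histories = {}
finalize_loop_sketch(histories, "decoder_att",
                     {"weights": [[1.0], [1.0]]})
assert "decoder_att" in histories
```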
initial_loop_state() → neuralmonkey.attention.namedtuples.AttentionLoopState

Get the initial loop state for the attention object.

Returns: The newly created initial loop state object.
state_size
-
visualize_attention
(key: str, max_outputs: int = 16) → None¶ Include the attention histories under a given key into a summary.
Parameters: - key – The key to the attention histories dictionary.
- max_outputs – Maximum number of images to save.