neuralmonkey.encoders.recurrent module
class neuralmonkey.encoders.recurrent.DeepSentenceEncoder(name: str, vocabulary: neuralmonkey.vocabulary.Vocabulary, data_id: str, embedding_size: int, rnn_sizes: List[int], rnn_directions: List[str], rnn_cell: str = 'GRU', add_residual: bool = False, max_input_len: int = None, dropout_keep_prob: float = 1.0, reuse: neuralmonkey.model.model_part.ModelPart = None, save_checkpoint: str = None, load_checkpoint: str = None, initializers: List[Tuple[str, Callable]] = None, embedding_initializer: Callable = None) → None
Bases: neuralmonkey.encoders.recurrent.SentenceEncoder
__init__(name: str, vocabulary: neuralmonkey.vocabulary.Vocabulary, data_id: str, embedding_size: int, rnn_sizes: List[int], rnn_directions: List[str], rnn_cell: str = 'GRU', add_residual: bool = False, max_input_len: int = None, dropout_keep_prob: float = 1.0, reuse: neuralmonkey.model.model_part.ModelPart = None, save_checkpoint: str = None, load_checkpoint: str = None, initializers: List[Tuple[str, Callable]] = None, embedding_initializer: Callable = None) → None
Create a new instance of the deep sentence encoder.
Parameters: - name – ModelPart name.
- vocabulary – The input vocabulary.
- data_id – The input sequence data ID.
- embedding_size – The dimension of the embedding vectors in the input sequence.
- max_input_len – Maximum length of the input sequence (disregard tokens after this position).
- rnn_sizes – The list of dimensions of the RNN hidden state vectors in respective layers.
- rnn_cell – One of “GRU”, “NematusGRU”, “LSTM”. Which kind of memory cell to use.
- rnn_directions – The list of RNN directions in the respective layers. Must be the same length as rnn_sizes. Each item must be one of “forward”, “backward”, “bidirectional”, and determines the order in which the layer processes the input sequence. Note that choosing “bidirectional” doubles both the resulting vector dimension and the number of parameters in the given layer.
- add_residual – Add residual connections to each RNN layer output.
- dropout_keep_prob – The probability that a unit is kept during dropout (1 - dropout rate).
- save_checkpoint – ModelPart save checkpoint file.
- load_checkpoint – ModelPart load checkpoint file.
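As an illustration, a deep encoder might be declared in a Neural Monkey experiment file roughly as follows. This is a hypothetical fragment: the [vocabulary] section and the "source" data series are assumed to be defined elsewhere in the configuration.

```ini
; hypothetical experiment-file fragment (not a complete configuration)
[encoder]
class=encoders.recurrent.DeepSentenceEncoder
name="deep_encoder"
vocabulary=<vocabulary>
data_id="source"
embedding_size=300
; one size and one direction per layer; the lists must be equally long
rnn_sizes=[256,256,512]
rnn_directions=["bidirectional","forward","forward"]
rnn_cell="GRU"
dropout_keep_prob=0.8
```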
rnn
Run the stacked RNN given the sizes and directions.

The inputs to the first layer are the RNN inputs to the encoder. The outputs of each layer are used as the inputs to the next one. The final state of the last layer is used as the final state of the whole stacked RNN.
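The layer-chaining scheme described above can be sketched with plain NumPy. A toy tanh cell stands in for the GRU/LSTM cells here; this illustrates only the data flow between layers, not Neural Monkey's implementation.

```python
import numpy as np

def simple_rnn_layer(inputs, size, rng):
    # inputs: (time, input_dim); a toy tanh RNN cell for illustration
    time, in_dim = inputs.shape
    w_in = rng.standard_normal((in_dim, size)) * 0.1
    w_rec = rng.standard_normal((size, size)) * 0.1
    state = np.zeros(size)
    outputs = []
    for t in range(time):
        state = np.tanh(inputs[t] @ w_in + state @ w_rec)
        outputs.append(state)
    # per-step outputs (time, size) and the final state
    return np.stack(outputs), state

def stacked_rnn(inputs, sizes, rng):
    # Outputs of each layer feed the next layer; the final state of
    # the last layer is the final state of the whole stack.
    layer_input, final_state = inputs, None
    for size in sizes:
        layer_input, final_state = simple_rnn_layer(layer_input, size, rng)
    return layer_input, final_state

rng = np.random.default_rng(0)
states, final = stacked_rnn(rng.standard_normal((5, 8)), [16, 16, 32], rng)
print(states.shape)  # (5, 32)
print(final.shape)   # (32,)
```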
class neuralmonkey.encoders.recurrent.FactoredEncoder(name: str, vocabularies: List[neuralmonkey.vocabulary.Vocabulary], data_ids: List[str], embedding_sizes: List[int], rnn_size: int, rnn_cell: str = 'GRU', rnn_direction: str = 'bidirectional', add_residual: bool = False, max_input_len: int = None, dropout_keep_prob: float = 1.0, reuse: neuralmonkey.model.model_part.ModelPart = None, save_checkpoint: str = None, load_checkpoint: str = None, initializers: List[Tuple[str, Callable]] = None, input_initializers: List[Tuple[str, Callable]] = None) → None
Bases: neuralmonkey.encoders.recurrent.RecurrentEncoder
__init__(name: str, vocabularies: List[neuralmonkey.vocabulary.Vocabulary], data_ids: List[str], embedding_sizes: List[int], rnn_size: int, rnn_cell: str = 'GRU', rnn_direction: str = 'bidirectional', add_residual: bool = False, max_input_len: int = None, dropout_keep_prob: float = 1.0, reuse: neuralmonkey.model.model_part.ModelPart = None, save_checkpoint: str = None, load_checkpoint: str = None, initializers: List[Tuple[str, Callable]] = None, input_initializers: List[Tuple[str, Callable]] = None) → None
Create a new instance of the factored encoder.
Parameters: - name – ModelPart name.
- vocabularies – The vocabularies for each factor.
- data_ids – The input sequence data ID for each factor.
- embedding_sizes – The dimension of the embedding vectors in the input sequence for each factor.
- max_input_len – Maximum length of the input sequence (disregard tokens after this position).
- rnn_size – The dimension of the RNN hidden state vector.
- rnn_cell – One of “GRU”, “NematusGRU”, “LSTM”. Which kind of memory cell to use.
- rnn_direction – One of “forward”, “backward”, “bidirectional”. In what order to process the input sequence. Note that choosing “bidirectional” will double the resulting vector dimension as well as the number of encoder parameters.
- add_residual – Add residual connections to the RNN layer output.
- dropout_keep_prob – The probability that a unit is kept during dropout (1 - dropout rate).
- save_checkpoint – ModelPart save checkpoint file.
- load_checkpoint – ModelPart load checkpoint file.
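The factored input can be pictured as one embedding lookup per factor at each time step, combined into a single RNN input vector. The sketch below assumes the factors are concatenated; whether Neural Monkey combines factors exactly this way should be checked in the source. The factor names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical factors for a 6-token sentence:
# word IDs and POS-tag IDs, each with its own vocabulary.
word_ids = rng.integers(0, 100, size=6)
pos_ids = rng.integers(0, 20, size=6)

# One embedding table per factor, with per-factor embedding sizes.
word_emb = rng.standard_normal((100, 300))
pos_emb = rng.standard_normal((20, 30))

# Per time step, the factor embeddings are combined (here: concatenated)
# into a single input vector for the shared RNN.
rnn_input = np.concatenate([word_emb[word_ids], pos_emb[pos_ids]], axis=1)
print(rnn_input.shape)  # (6, 330)
```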
class neuralmonkey.encoders.recurrent.RNNSpec
Bases: neuralmonkey.encoders.recurrent.RNNSpec
Recurrent neural network specifications.
size
The state size.

direction
The RNN processing direction. One of “forward”, “backward”, and “bidirectional”.
cell_type
The recurrent cell type to use. Refer to encoders.recurrent.RNN_CELL_TYPES for possible values.
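RNNSpec behaves like a named tuple of these three fields. A minimal stand-in (a hypothetical mirror of the documented fields, not the library class itself) could look like:

```python
from typing import NamedTuple

class RNNSpec(NamedTuple):
    # stand-in mirroring the documented fields, not the library class
    size: int        # the state size
    direction: str   # "forward", "backward", or "bidirectional"
    cell_type: str   # e.g. "GRU" or "LSTM"

spec = RNNSpec(size=256, direction="bidirectional", cell_type="GRU")
print(spec.size, spec.direction)  # 256 bidirectional
```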
class neuralmonkey.encoders.recurrent.RecurrentEncoder(name: str, input_sequence: neuralmonkey.model.stateful.TemporalStateful, rnn_size: int, rnn_cell: str = 'GRU', rnn_direction: str = 'bidirectional', add_residual: bool = False, dropout_keep_prob: float = 1.0, reuse: neuralmonkey.model.model_part.ModelPart = None, save_checkpoint: str = None, load_checkpoint: str = None, initializers: List[Tuple[str, Callable]] = None) → None
Bases: neuralmonkey.model.model_part.ModelPart, neuralmonkey.model.stateful.TemporalStatefulWithOutput
__init__(name: str, input_sequence: neuralmonkey.model.stateful.TemporalStateful, rnn_size: int, rnn_cell: str = 'GRU', rnn_direction: str = 'bidirectional', add_residual: bool = False, dropout_keep_prob: float = 1.0, reuse: neuralmonkey.model.model_part.ModelPart = None, save_checkpoint: str = None, load_checkpoint: str = None, initializers: List[Tuple[str, Callable]] = None) → None
Create a new instance of a recurrent encoder.
Parameters: - name – ModelPart name.
- input_sequence – The input sequence for the encoder.
- rnn_size – The dimension of the RNN hidden state vector.
- rnn_cell – One of “GRU”, “NematusGRU”, “LSTM”. Which kind of memory cell to use.
- rnn_direction – One of “forward”, “backward”, “bidirectional”. In what order to process the input sequence. Note that choosing “bidirectional” will double the resulting vector dimension as well as the number of encoder parameters.
- add_residual – Add residual connections to the RNN layer output.
- dropout_keep_prob – The probability that a unit is kept during dropout (1 - dropout rate).
- save_checkpoint – ModelPart save checkpoint file.
- load_checkpoint – ModelPart load checkpoint file.
output
Return the object output.

A 2D Tensor of shape (batch, state_size) which contains the resulting state of the object.
rnn

rnn_input
temporal_mask
Return the mask for the temporal_states.

A 2D Tensor of shape (batch, time) of type float32 which masks the temporal states so that each sequence can have a different length. It should contain only ones and zeros.
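The mask can be pictured as a float matrix built from the sequence lengths, with ones over the real positions and zeros over the padding. A NumPy sketch of the idea (not the library's implementation):

```python
import numpy as np

def temporal_mask(lengths, max_time):
    # 1.0 for real positions, 0.0 for padding
    positions = np.arange(max_time)
    return (positions[None, :] < np.asarray(lengths)[:, None]).astype(np.float32)

mask = temporal_mask([3, 5, 1], max_time=5)
print(mask)
# [[1. 1. 1. 0. 0.]
#  [1. 1. 1. 1. 1.]
#  [1. 0. 0. 0. 0.]]
```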
temporal_states
Return object states in time.

A 3D Tensor of shape (batch, time, state_size) which contains the states of the object in time (e.g. the hidden states of a recurrent encoder).
class neuralmonkey.encoders.recurrent.SentenceEncoder(name: str, vocabulary: neuralmonkey.vocabulary.Vocabulary, data_id: str, embedding_size: int, rnn_size: int, rnn_cell: str = 'GRU', rnn_direction: str = 'bidirectional', add_residual: bool = False, max_input_len: int = None, dropout_keep_prob: float = 1.0, reuse: neuralmonkey.model.model_part.ModelPart = None, save_checkpoint: str = None, load_checkpoint: str = None, initializers: List[Tuple[str, Callable]] = None, embedding_initializer: Callable = None) → None
Bases: neuralmonkey.encoders.recurrent.RecurrentEncoder
__init__(name: str, vocabulary: neuralmonkey.vocabulary.Vocabulary, data_id: str, embedding_size: int, rnn_size: int, rnn_cell: str = 'GRU', rnn_direction: str = 'bidirectional', add_residual: bool = False, max_input_len: int = None, dropout_keep_prob: float = 1.0, reuse: neuralmonkey.model.model_part.ModelPart = None, save_checkpoint: str = None, load_checkpoint: str = None, initializers: List[Tuple[str, Callable]] = None, embedding_initializer: Callable = None) → None
Create a new instance of the sentence encoder.
Parameters: - name – ModelPart name.
- vocabulary – The input vocabulary.
- data_id – The input sequence data ID.
- embedding_size – The dimension of the embedding vectors in the input sequence.
- max_input_len – Maximum length of the input sequence (disregard tokens after this position).
- rnn_size – The dimension of the RNN hidden state vector.
- rnn_cell – One of “GRU”, “NematusGRU”, “LSTM”. Which kind of memory cell to use.
- rnn_direction – One of “forward”, “backward”, “bidirectional”. In what order to process the input sequence. Note that choosing “bidirectional” will double the resulting vector dimension as well as the number of encoder parameters.
- add_residual – Add residual connections to the RNN layer output.
- dropout_keep_prob – The probability that a unit is kept during dropout (1 - dropout rate).
- save_checkpoint – ModelPart save checkpoint file.
- load_checkpoint – ModelPart load checkpoint file.
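The dimension doubling noted for the “bidirectional” direction comes from running the input through forward and backward passes and concatenating their per-step outputs. A NumPy sketch with a toy tanh cell (an illustration of the principle, not the library's implementation):

```python
import numpy as np

def toy_rnn(inputs, size, rng):
    # toy tanh RNN returning per-step outputs, shape (time, size)
    w_in = rng.standard_normal((inputs.shape[1], size)) * 0.1
    w_rec = rng.standard_normal((size, size)) * 0.1
    state = np.zeros(size)
    out = []
    for x in inputs:
        state = np.tanh(x @ w_in + state @ w_rec)
        out.append(state)
    return np.stack(out)

rng = np.random.default_rng(0)
inputs = rng.standard_normal((7, 12))

fwd = toy_rnn(inputs, 256, rng)              # forward pass
bwd = toy_rnn(inputs[::-1], 256, rng)[::-1]  # backward pass, re-reversed
bidi = np.concatenate([fwd, bwd], axis=1)
print(bidi.shape)  # (7, 512), i.e. twice rnn_size
```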
neuralmonkey.encoders.recurrent.rnn_layer(rnn_input: tensorflow.python.framework.ops.Tensor, lengths: tensorflow.python.framework.ops.Tensor, rnn_spec: neuralmonkey.encoders.recurrent.RNNSpec, add_residual: bool) → Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor]
Construct an RNN layer given its inputs and specs.
Parameters: - rnn_inputs – The input sequence to the RNN.
- lengths – Lengths of input sequences.
- rnn_spec – A valid RNNSpec tuple specifying the network architecture.
- add_residual – Add residual connections to the layer output.
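A residual connection sums a layer's output with its input, which requires the two to have matching dimensions. A sketch of the wiring, under the assumption that the combination is an element-wise sum (the helper below is hypothetical, for illustration only):

```python
import numpy as np

def with_residual(layer_fn, inputs):
    # hypothetical helper: element-wise sum of layer output and input;
    # shapes must match for the sum to be well defined
    outputs = layer_fn(inputs)
    assert outputs.shape == inputs.shape, "residual needs matching dims"
    return outputs + inputs

inputs = np.ones((4, 8))
outputs = with_residual(np.tanh, inputs)
print(outputs.shape)  # (4, 8)
```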