Wrappers

EncoderWrapper

class deep_qa.layers.wrappers.encoder_wrapper.EncoderWrapper(layer, keep_dims=False, **kwargs)[source]

Bases: deep_qa.layers.wrappers.time_distributed.TimeDistributed

This class TimeDistributes a sentence encoder, applying the encoder to several word sequences. The only difference between this and the regular TimeDistributed is in how we handle the mask. Typically, an encoder will handle masked embedded input, and return None as its mask, as it just returns a vector and no more masking is necessary. However, if the encoder is TimeDistributed, we might run into a situation where _all_ of the words in a given sequence are masked (because we padded the number of sentences, for instance). In this case, we just want to mask the entire sequence. EncoderWrapper returns a mask with the same dimension as the input sequences, where sequences are masked if _all_ of their words were masked.

Notes

For seq2seq encoders, one should use either TimeDistributed or TimeDistributedWithMask since EncoderWrapper reduces the dimensionality of the input mask.
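
A minimal usage sketch (assuming deep_qa and Keras are installed; the shapes, sizes, and layer choices below are illustrative, not part of the library's API):

from keras.layers import Input, Embedding, LSTM
from keras.models import Model
from deep_qa.layers.wrappers.encoder_wrapper import EncoderWrapper

num_sentences, sentence_length = 4, 10
vocab_size, embedding_dim = 1000, 50

# Each instance is a group of word sequences: (num_sentences, sentence_length).
sentences = Input(shape=(num_sentences, sentence_length), dtype='int32')

# mask_zero=True masks padded word indices (0), giving a mask of shape
# (batch_size, num_sentences, sentence_length).
embedded = Embedding(vocab_size, embedding_dim, mask_zero=True)(sentences)

# The wrapped encoder is applied to each sentence separately.  The output has
# shape (batch_size, num_sentences, 32), and the output mask has shape
# (batch_size, num_sentences), masking sentences whose words were all padding.
encoded = EncoderWrapper(LSTM(32))(embedded)

model = Model(inputs=sentences, outputs=encoded)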

compute_mask(x, input_mask=None)[source]

OutputMask

class deep_qa.layers.wrappers.output_mask.OutputMask(**kwargs)[source]

Bases: deep_qa.layers.masked_layer.MaskedLayer

This Layer is purely for debugging. You can apply it to a layer’s output to get the mask computed by that layer as a model output, which makes it easier to visualize what the model is actually doing.

Don’t try to use this in an actual model.
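
A hedged sketch of the intended debugging workflow (the layer choices and variable names here are illustrative):

from keras.layers import Input, Embedding
from keras.models import Model
from deep_qa.layers.wrappers.output_mask import OutputMask

words = Input(shape=(10,), dtype='int32')
# mask_zero=True makes the Embedding produce a mask marking padded positions.
embedded = Embedding(100, 8, mask_zero=True)(words)

# OutputMask turns the mask attached to `embedded` into a regular model output.
mask = OutputMask()(embedded)

debug_model = Model(inputs=words, outputs=mask)
# debug_model.predict(word_indices) now returns the mask, so you can inspect
# which positions downstream layers will treat as padding.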

compute_mask(inputs, mask=None)[source]

Computes an output mask tensor.

# Arguments
    inputs: Tensor or list of tensors.
    mask: Tensor or list of tensors.

# Returns
    None or a tensor (or list of tensors, one per output tensor of the layer).

TimeDistributed

class deep_qa.layers.wrappers.time_distributed.TimeDistributed(layer, keep_dims=False, **kwargs)[source]

Bases: keras.layers.wrappers.TimeDistributed

This class fixes two bugs in Keras: (1) the input mask is not passed to the wrapped layer, and (2) Keras’ TimeDistributed currently only allows a single input, not a list. We currently don’t handle the case where the _output_ of the wrapped layer is a list, however. (Not that that’s particularly hard; we just haven’t needed it yet, so we haven’t implemented it.)

Notes

If the output shape for TimeDistributed has a final dimension of 1, we essentially squeeze it, reshaping the output to have one fewer dimension. That change takes place in the actual call method as well as in the compute_output_shape method.
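
A small sketch of this squeezing behaviour (assuming deep_qa and Keras are installed; the shapes are illustrative):

from keras.layers import Input, Dense
from keras.models import Model
from deep_qa.layers.wrappers.time_distributed import TimeDistributed

# Input of shape (batch_size, num_items, input_dim).
items = Input(shape=(4, 16))

# Dense(1) applied per item would normally give (batch_size, num_items, 1);
# because the final dimension is 1 (and keep_dims is left at its default of
# False, which presumably enables the squeeze), the output is reshaped to
# (batch_size, num_items).
scores = TimeDistributed(Dense(1))(items)

model = Model(inputs=items, outputs=scores)
print(model.output_shape)  # expected: (None, 4)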

build(input_shape)[source]
compute_mask(inputs, mask=None)[source]

Computes an output mask tensor.

# Arguments
    inputs: Tensor or list of tensors.
    mask: Tensor or list of tensors.

# Returns
    None or a tensor (or list of tensors, one per output tensor of the layer).
compute_output_shape(input_shape)[source]
get_config()[source]
get_output_mask_shape_for(input_shape)[source]
static reshape_inputs_and_masks(inputs, masks)[source]