class deep_qa.training.models.DeepQaModel(*args, **kwargs)[source]

Bases: keras.engine.training.Model

This is a Model that adds functionality to Keras’ Model class. In particular, we use tensorflow optimisers directly in order to make use of sparse gradient updates, which Keras does not handle. Additionally, we provide some nicer summary functions which include mask information. We are overriding key components of Keras here and you should probably have a pretty good grip on the internals of Keras before you change stuff below, as there could be unexpected consequences.

_fit_loop(f: Callable, ins: typing.List[numpy.array], out_labels: typing.List[str] = None, batch_size: int = 32, epochs: int = 100, verbose: int = 1, callbacks: typing.List[keras.callbacks.Callback] = None, val_f: Callable = None, val_ins: typing.List[numpy.array] = None, shuffle: bool = True, callback_metrics: typing.List[str] = None, initial_epoch: int = 0)[source]

Abstract fit function which preprocesses and batches data before training a model. We override this keras backend function to support multi-gpu training via splitting a large batch size across multiple gpus. This function is broadly the same as the Keras backend version aside from this - changed elements have corresponding comments attached.

Note that this should not be called directly - it is used by calling model.fit().

Assumes that f returns a list of tensors, labeled by out_labels.


f: A callable ``Step`` or a Keras ``Function``, required.

A DeepQA Step or Keras Function returning a list of tensors.

ins: List[numpy.array], required.

The list of numpy arrays to be fed to f.

out_labels: List[str], optional (default = None).

The display names of the outputs of step_function.

batch_size: int, optional (default = 32).

The integer batch size.

epochs: int, optional (default = 100).

Number of times to iterate over the data.

verbose: int, optional (default = 1).

Verbosity mode: 0 (silent), 1 (progress bar) or 2 (one line per epoch).

callbacks: List[Callback], optional (default = None).

A list of Keras callbacks to be called during training.

val_f: A callable ``Step`` or a Keras ``Function``, optional (default = None).

The Keras function to call for validation.

val_ins: List[numpy.array], optional (default = None).

A list of tensors to be fed to val_f.

shuffle: bool, optional (default = True).

Whether to shuffle the data at the beginning of each epoch.

callback_metrics: List[str], optional, (default = None).

A list of strings, the display names of the validation metrics passed to the callbacks. This should be the concatenation of the display names of the outputs of f and the display names of the outputs of val_f.

initial_epoch: int, optional (default = 0).

The epoch at which to start training (useful for resuming a previous training run).


Returns: A Keras History object.


We override this method so that we can use tensorflow optimisers directly. This is desirable as tensorflow handles gradients of sparse tensors efficiently.
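The multi-gpu splitting mentioned above can be illustrated in isolation. The sketch below is a hypothetical helper in pure Python, not DeepQA's actual implementation: a large batch is divided into one sub-batch per GPU, with any remainder assigned to the last tower.

```python
def split_batch(batch, num_gpus):
    """Split one large batch (here a list of examples) into num_gpus
    roughly equal sub-batches, one per GPU tower. Hypothetical sketch,
    not DeepQA's actual code."""
    size = len(batch) // num_gpus
    sub_batches = [batch[i * size:(i + 1) * size] for i in range(num_gpus)]
    # Any remainder goes to the last tower.
    sub_batches[-1].extend(batch[num_gpus * size:])
    return sub_batches

# A "large" batch of 10 examples split across 3 GPUs:
towers = split_batch(list(range(10)), 3)
```

Each tower would then run the step function on its sub-batch, with gradients averaged across towers before the update.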

_prepare_callbacks(callbacks: typing.List[keras.callbacks.Callback], val_ins: typing.List[numpy.array], epochs: int, batch_size: int, num_train_samples: int, callback_metrics: typing.List[str], do_validation: bool, verbose: int)[source]

Sets up Keras callbacks to perform various monitoring functions during training.

compile(params: deep_qa.common.params.Params)[source]

The only reason we are overriding this method is because keras automatically wraps our tensorflow optimiser in a keras wrapper, which we don’t want. We override the only method in Model which uses the optimizer attribute, _make_train_function, which raises an error if compile is not called first. As we move towards using a Tensorflow-first optimisation loop, more things will be added here which add functionality to the way Keras runs tensorflow Session calls.

summary(show_masks=False, **kwargs)[source]
train_on_batch(x: typing.List[numpy.array], y: typing.List[numpy.array], sample_weight: typing.List[numpy.array] = None, class_weight: typing.Dict[int, float] = None)[source]

Runs a single gradient update on a single batch of data. We override this method in order to provide multi-gpu training capability.


x: List[numpy.array], required

Numpy array of training data, or list of Numpy arrays if the model has multiple inputs. If all inputs in the model are named, you can also pass a dictionary mapping input names to Numpy arrays.

y: List[numpy.array], required

A Numpy array of labels, or list of Numpy arrays if the model has multiple outputs. If all outputs in the model are named, you can also pass a dictionary mapping output names to Numpy arrays.

sample_weight: List[numpy.array], optional (default = None)

Optional array of the same length as x, containing weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. In this case you should make sure to specify sample_weight_mode="temporal" in compile().

class_weight: Dict[int, float], optional (default = None)

A dictionary mapping class indices (integers) to a weight (float) to apply to the model’s loss for the samples from this class during training. This can be useful to tell the model to “pay more attention” to samples from an under-represented class.


Returns: A scalar training loss (if the model has a single output and no metrics) or a list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.
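The effect of class_weight on the loss can be sketched in isolation. This is a pure-Python illustration of the idea, not Keras' actual implementation: each sample's loss is scaled by the weight assigned to its class before averaging.

```python
def weighted_mean_loss(losses, labels, class_weight):
    """Scale each per-sample loss by the weight assigned to its class,
    then average. Hypothetical sketch of how class_weight influences
    the training loss, not Keras' actual code."""
    weighted = [loss * class_weight.get(label, 1.0)
                for loss, label in zip(losses, labels)]
    return sum(weighted) / len(weighted)

# Class 1 is under-represented, so give it four times the weight:
loss = weighted_mean_loss([0.5, 0.5, 0.5, 0.5], [0, 0, 0, 1], {0: 1.0, 1: 4.0})
```

Mistakes on the rare class now contribute proportionally more to the gradient, which is the “pay more attention” behaviour described above.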

deep_qa.training.models.count_total_params(layers, layer_set=None)[source]
deep_qa.training.models.print_layer_summary(layer, relevant_nodes, positions)[source]
deep_qa.training.models.print_row(fields, positions)[source]
deep_qa.training.models.print_summary_with_masking(layers, relevant_nodes=None)[source]


It turns out that Keras’ design is somewhat crazy*, and there is no list of optimizers that you can just import from Keras. So, this module specifies a list, and a helper function or two for dealing with optimizer parameters. Unfortunately, this means that we have a list that must be kept in sync with Keras. Oh well.

* Have you seen their get_from_module() method? See here: https://github.com/fchollet/keras/blob/6e42b0e4a77fb171295b541a6ae9a3a4a79f9c87/keras/utils/generic_utils.py#L10. That method means I could pass in ‘clip_norm’ as an optimizer, and it would try to use that function as an optimizer. It also means there is no simple list of implemented optimizers I can grab.

* I should also note that Keras is an incredibly useful library that does a lot of things really well. It just has a few quirks...

deep_qa.training.optimizers.optimizer_from_params(params: typing.Union[deep_qa.common.params.Params, str])[source]

This method converts from a parameter object like we use in our Trainer code into an optimizer object suitable for use with Keras. The simplest case is a string that appears in the optimizers dictionary above - if params is just one of those strings, we return it, and everyone is happy. If not, we assume params is a Dict[str, Any] with a “type” key, where the value for “type” must be one of those strings above. We take the rest of the parameters and pass them to the optimizer’s constructor.
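The dispatch described above follows a common "type key" pattern, which can be sketched with stand-in classes. The SGD/Adam classes and the optimizers dict below are placeholders for illustration, not the actual Keras imports or DeepQA's module-level list:

```python
class SGD:
    def __init__(self, lr=0.01):
        self.lr = lr

class Adam:
    def __init__(self, lr=0.001):
        self.lr = lr

# Stand-in for the module-level optimizers dict kept in sync with Keras.
optimizers = {"sgd": SGD, "adam": Adam}

def optimizer_from_params(params):
    """Sketch of the dispatch described above: a bare string is returned
    as-is; a dict must carry a "type" key naming the optimizer, and the
    remaining keys become constructor kwargs."""
    if isinstance(params, str):
        return params
    params = dict(params)  # don't mutate the caller's dict
    optimizer_type = params.pop("type")
    return optimizers[optimizer_type](**params)

opt = optimizer_from_params({"type": "adam", "lr": 0.01})
```

Keeping the known names in one dict is what makes the "must be kept in sync with Keras" caveat above necessary: adding an optimizer to Keras does not automatically add it here.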