# Similarity Functions

## bilinear

class `deep_qa.tensors.similarity_functions.bilinear.Bilinear`(**kwargs)

This similarity function performs a bilinear transformation of the two input vectors. This function has a matrix of weights W and a bias b, and the similarity between two vectors x and y is computed as x^T W y + b.
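As an illustrative sketch (plain numpy, not the library's Keras-backend implementation), the bilinear computation for a single pair of vectors looks like this:

```python
import numpy as np

def bilinear_similarity(x, y, W, b):
    # x^T W y + b for a single pair of vectors (illustrative only).
    return float(x @ W @ y + b)

# Hypothetical small example with 3-dimensional inputs.
x = np.array([1.0, 0.0, 2.0])
y = np.array([0.5, 1.0, -1.0])
W = np.eye(3)   # with W = I and b = 0, this reduces to a plain dot product
b = 0.0

sim = bilinear_similarity(x, y, W, b)
```

In the library, `W` and `b` are trainable weights created in `initialize_weights`, with `W` shaped `(tensor_1_dim, tensor_2_dim)`.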

`compute_similarity(tensor_1, tensor_2)`

Takes two tensors of the same shape, such as (batch_size, length_1, length_2, embedding_dim). Computes a (possibly parameterized) similarity on the final dimension and returns a tensor with one less dimension, such as (batch_size, length_1, length_2).

`initialize_weights(tensor_1_dim: int, tensor_2_dim: int) → List['K.variable']`

Called from the `Layer.build()` method of any Layer that uses this SimilarityFunction. Initializes whatever weights this similarity function needs and returns them so they can be included in `Layer.trainable_weights`.

Parameters:

- `tensor_1_dim` (`int`): The last dimension (typically `embedding_dim`) of the first input tensor, needed to initialize weights appropriately.
- `tensor_2_dim` (`int`): The last dimension (typically `embedding_dim`) of the second input tensor, needed to initialize weights appropriately.

## cosine_similarity

class `deep_qa.tensors.similarity_functions.cosine_similarity.CosineSimilarity`(**kwargs)

This similarity function simply computes the cosine similarity between each pair of vectors. It has no parameters.
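For a single pair of vectors, cosine similarity can be sketched in plain numpy as follows (illustrative only; the library computes this with Keras backend operations):

```python
import numpy as np

def cosine_similarity(x, y, eps=1e-8):
    # Dot product of the two vectors divided by the product of their norms;
    # eps guards against division by zero for all-zero vectors.
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + eps))

a = np.array([1.0, 0.0])
b = np.array([1.0, 0.0])
sim = cosine_similarity(a, b)   # identical directions give similarity near 1
```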

`compute_similarity(tensor_1, tensor_2)`

Takes two tensors of the same shape, such as (batch_size, length_1, length_2, embedding_dim). Computes a (possibly parameterized) similarity on the final dimension and returns a tensor with one less dimension, such as (batch_size, length_1, length_2).

`initialize_weights(tensor_1_dim: int, tensor_2_dim: int) → List['K.variable']`

Called from the `Layer.build()` method of any Layer that uses this SimilarityFunction. Initializes whatever weights this similarity function needs and returns them so they can be included in `Layer.trainable_weights`.

Parameters:

- `tensor_1_dim` (`int`): The last dimension (typically `embedding_dim`) of the first input tensor, needed to initialize weights appropriately.
- `tensor_2_dim` (`int`): The last dimension (typically `embedding_dim`) of the second input tensor, needed to initialize weights appropriately.

## dot_product

class `deep_qa.tensors.similarity_functions.dot_product.DotProduct`(**kwargs)

This similarity function simply computes the dot product between each pair of vectors. It has no parameters.
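A minimal numpy sketch of the batched dot product (not the library's implementation) also shows the shape contract shared by all of these functions: the output has one less dimension than the inputs.

```python
import numpy as np

def dot_product_similarity(tensor_1, tensor_2):
    # Elementwise product, then a sum over the final dimension, so the
    # result has one less dimension than the inputs.
    return np.sum(tensor_1 * tensor_2, axis=-1)

# e.g. (batch_size, sentence_length, embedding_dim) inputs
t1 = np.ones((2, 3, 4))
t2 = np.ones((2, 3, 4))
sims = dot_product_similarity(t1, t2)   # shape (2, 3), each entry 4.0
```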

`compute_similarity(tensor_1, tensor_2)`

Takes two tensors of the same shape, such as (batch_size, length_1, length_2, embedding_dim). Computes a (possibly parameterized) similarity on the final dimension and returns a tensor with one less dimension, such as (batch_size, length_1, length_2).

`initialize_weights(tensor_1_dim: int, tensor_2_dim: int) → List['K.variable']`

Called from the `Layer.build()` method of any Layer that uses this SimilarityFunction. Initializes whatever weights this similarity function needs and returns them so they can be included in `Layer.trainable_weights`.

Parameters:

- `tensor_1_dim` (`int`): The last dimension (typically `embedding_dim`) of the first input tensor, needed to initialize weights appropriately.
- `tensor_2_dim` (`int`): The last dimension (typically `embedding_dim`) of the second input tensor, needed to initialize weights appropriately.

## linear

class `deep_qa.tensors.similarity_functions.linear.Linear`(combination: str = 'x,y', **kwargs)

This similarity function performs a dot product between a vector of weights and some combination of the two input vectors. The combination done is configurable.

If the two vectors are x and y, we allow the following kinds of combinations: x, y, x*y, x+y, x-y, x/y, where each of those binary operations is performed elementwise. You can list as many combinations as you want, comma separated. For example, you might give “x,y,x*y” as the combination parameter to this class. The computed similarity function would then be w^T [x; y; x*y] + b, where w is a vector of weights, b is a bias parameter, and [;] is vector concatenation.

Note that if you want a bilinear similarity function with a diagonal weight matrix W, where the similarity function is computed as x * w * y + b (with w the diagonal of W), you can accomplish that with this class by using “x*y” for combination.
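For a single pair of vectors, the combination-then-dot-product computation can be sketched in plain numpy (illustrative only; the combination names mirror those above, but this is not the library's implementation):

```python
import numpy as np

def linear_similarity(x, y, w, b, combination=("x", "y", "x*y")):
    # Build the combined vector, e.g. [x; y; x*y], then take a dot product
    # with the weight vector w and add the bias b.
    pieces = {"x": x, "y": y, "x*y": x * y, "x+y": x + y,
              "x-y": x - y, "x/y": x / y}
    combined = np.concatenate([pieces[c] for c in combination])
    return float(np.dot(w, combined) + b)

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])
w = np.ones(6)   # length must equal 2 + 2 + 2 for the "x,y,x*y" combination
b = 0.5
sim = linear_similarity(x, y, w, b)
```

Note that the required length of `w` depends on the chosen combination, which is why `initialize_weights` needs the input dimensions.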

`_combine_tensors(tensor_1, tensor_2)`
`_get_combination(combination: str, tensor_1, tensor_2)`
`_get_combination_dim(combination: str, tensor_1_dim: int, tensor_2_dim: int) → int`
`_get_combined_dim(tensor_1_dim: int, tensor_2_dim: int) → int`
`compute_similarity(tensor_1, tensor_2)`

Takes two tensors of the same shape, such as (batch_size, length_1, length_2, embedding_dim). Computes a (possibly parameterized) similarity on the final dimension and returns a tensor with one less dimension, such as (batch_size, length_1, length_2).

`initialize_weights(tensor_1_dim: int, tensor_2_dim: int) → List['K.variable']`

Called from the `Layer.build()` method of any Layer that uses this SimilarityFunction. Initializes whatever weights this similarity function needs and returns them so they can be included in `Layer.trainable_weights`.

Parameters:

- `tensor_1_dim` (`int`): The last dimension (typically `embedding_dim`) of the first input tensor, needed to initialize weights appropriately.
- `tensor_2_dim` (`int`): The last dimension (typically `embedding_dim`) of the second input tensor, needed to initialize weights appropriately.

## similarity_function

Similarity functions take a pair of tensors with the same shape, and compute a similarity function on the vectors in the last dimension. For example, the tensors might both have shape (batch_size, sentence_length, embedding_dim), and we will compute some function of the two vectors of length embedding_dim for each position (batch_size, sentence_length), returning a tensor of shape (batch_size, sentence_length).

The similarity function could be as simple as a dot product, or it could be a more complex, parameterized function. The `SimilarityFunction` class exposes a common API so that a Layer can be configured with any of several similarity functions, including methods for initializing and returning weights.
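The API contract can be sketched with a plain-Python analogue (numpy arrays standing in for Keras backend tensors; this is illustrative, not the library's code):

```python
import numpy as np

class DotProductSimilarity:
    """Sketch of the SimilarityFunction contract for a no-parameter function."""

    def initialize_weights(self, tensor_1_dim, tensor_2_dim):
        # A parameterized subclass would create its weight variables here
        # and return them so the calling Layer can track them as trainable.
        # The dot product has no parameters, so the list is empty.
        return []

    def compute_similarity(self, tensor_1, tensor_2):
        # Reduce over the final dimension: output has one less dimension.
        return np.sum(tensor_1 * tensor_2, axis=-1)

sim_fn = DotProductSimilarity()
weights = sim_fn.initialize_weights(4, 4)           # [] for this function
out = sim_fn.compute_similarity(np.ones((2, 4)), np.ones((2, 4)))  # shape (2,)
```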

If you want to compute a similarity between tensors of different sizes, you need to first tile them in the appropriate dimensions to make them the same before you can use these functions. The Attention and MatrixAttention layers do this.
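The tiling step can be sketched in numpy: to compare every vector in one tensor with every vector in another, expand each with a new axis and tile to a common shape before applying the similarity function (illustrative only; the library's Attention layers do this with backend operations):

```python
import numpy as np

batch, len_1, len_2, dim = 2, 3, 4, 5
t1 = np.random.rand(batch, len_1, dim)
t2 = np.random.rand(batch, len_2, dim)

# Tile both inputs to a shared shape (batch, len_1, len_2, dim).
tiled_1 = np.tile(t1[:, :, np.newaxis, :], (1, 1, len_2, 1))
tiled_2 = np.tile(t2[:, np.newaxis, :, :], (1, len_1, 1, 1))

# A dot-product similarity over the last dimension now yields every
# pairwise similarity at once, with shape (batch, len_1, len_2).
sims = np.sum(tiled_1 * tiled_2, axis=-1)
```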

class `deep_qa.tensors.similarity_functions.similarity_function.SimilarityFunction`(name: str, initialization: str = 'glorot_uniform', activation: str = 'linear')

Bases: `object`

`compute_similarity(tensor_1, tensor_2)`

Takes two tensors of the same shape, such as (batch_size, length_1, length_2, embedding_dim). Computes a (possibly parameterized) similarity on the final dimension and returns a tensor with one less dimension, such as (batch_size, length_1, length_2).

`initialize_weights(tensor_1_dim: int, tensor_2_dim: int) → List['K.variable']`

Called from the `Layer.build()` method of any Layer that uses this SimilarityFunction. Initializes whatever weights this similarity function needs and returns them so they can be included in `Layer.trainable_weights`.

Parameters:

- `tensor_1_dim` (`int`): The last dimension (typically `embedding_dim`) of the first input tensor, needed to initialize weights appropriately.
- `tensor_2_dim` (`int`): The last dimension (typically `embedding_dim`) of the second input tensor, needed to initialize weights appropriately.