neuralmonkey.evaluators.mse module

class neuralmonkey.evaluators.mse.MeanSquaredErrorEvaluator(name: str = None) → None

Bases: neuralmonkey.evaluators.evaluator.SequenceEvaluator

Mean squared error evaluator.

Assumes equal vector length across the batch (see SequenceEvaluator.score_batch).

static compare_scores(score1: float, score2: float) → int

Compare scores using this evaluator.

The default implementation regards the bigger score as better. For mean squared error, however, a lower score is better, so this evaluator reverses the comparison.

Parameters:
  • score1 – The first score.
  • score2 – The second score.
Returns:
An int. When score1 is better, returns 1. When score2 is better, returns -1. When the scores are equal, returns 0.
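
A minimal sketch of a lower-is-better comparison in plain Python; the function name mse_compare is illustrative only and is not part of the neuralmonkey API:

    def mse_compare(score1: float, score2: float) -> int:
        # Lower mean squared error is better, so the smaller score wins.
        if score1 < score2:
            return 1
        if score2 < score1:
            return -1
        return 0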
score_token(hyp_token: float, ref_token: float) → float

Score a single hyp/ref pair of tokens.

The default implementation returns 1.0 if the tokens are equal, 0.0 otherwise.

Parameters:
  • hyp_token – A prediction token.
  • ref_token – A golden token.
Returns:
A score for the token hyp/ref pair.
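
For a squared-error evaluator, the natural per-token score is the squared difference between a predicted value and its reference value. The snippet below sketches that computation in plain Python; it illustrates the idea and is not the library's own implementation:

    def squared_error(hyp_token: float, ref_token: float) -> float:
        # Per-token contribution to the mean squared error.
        return (hyp_token - ref_token) ** 2

    squared_error(2.0, 3.5)  # 2.25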

class neuralmonkey.evaluators.mse.PairwiseMeanSquaredErrorEvaluator(name: str = None) → None

Bases: neuralmonkey.evaluators.evaluator.Evaluator

Pairwise mean squared error evaluator.

For vectors of different dimension across the batch.

static compare_scores(score1: float, score2: float) → int

Compare scores using this evaluator.

The default implementation regards the bigger score as better. For mean squared error, however, a lower score is better, so this evaluator reverses the comparison.

Parameters:
  • score1 – The first score.
  • score2 – The second score.
Returns:
An int. When score1 is better, returns 1. When score2 is better, returns -1. When the scores are equal, returns 0.
score_instance(hypothesis: List[float], reference: List[float]) → float

Compute the mean squared error between two vectors.
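
As a standalone sketch (not the library code), the per-instance score can be computed as the mean of squared element-wise differences. Each hypothesis/reference pair is assumed to have matching length, even though lengths may differ across the batch:

    from typing import List

    def mean_squared_error(hypothesis: List[float],
                           reference: List[float]) -> float:
        # Each hyp/ref pair must have the same length, although different
        # pairs within a batch may have different lengths.
        assert len(hypothesis) == len(reference)
        return sum((h - r) ** 2
                   for h, r in zip(hypothesis, reference)) / len(hypothesis)

    mean_squared_error([1.0, 2.0, 3.0], [1.0, 2.5, 2.0])  # ~0.4167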