BERT Answer Relevance

Definitions

BERT Answer Relevance measures the semantic similarity between the Generated Answer and the Question.

This metric leverages a BERT model to calculate the semantic similarity score.
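
Under the hood, a score of this kind is typically produced by embedding both texts with a BERT-style encoder and comparing the embeddings. The sketch below illustrates the idea using the sentence-transformers package with the all-MiniLM-L6-v2 model; both the package and the model choice are assumptions for illustration, not necessarily what continuous-eval uses internally.

# Illustrative sketch only: embeds the question and the answer with a
# BERT-style encoder and compares them via cosine similarity. The model
# ("all-MiniLM-L6-v2") is an assumption, not continuous-eval's internal choice.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

question = "Who wrote 'Romeo and Juliet'?"
answer = "Shakespeare wrote 'Romeo and Juliet'"

# Encode both texts into dense vectors, then take their cosine similarity.
q_emb, a_emb = model.encode([question, answer], convert_to_tensor=True)
score = util.cos_sim(q_emb, a_emb).item()
print(score)  # higher values indicate closer semantic similarity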


Example Usage

Required data items: question, answer

from continuous_eval.metrics.generation.text import BertAnswerRelevance

datum = {
    "question": "Who wrote 'Romeo and Juliet'?",
    "retrieved_context": ["William Shakespeare is the author of 'Romeo and Juliet'."],
    "ground_truth_context": ["William Shakespeare is the author of 'Romeo and Juliet'."],
    "answer": "Shakespeare wrote 'Romeo and Juliet'",
    "ground_truths": [
        "William Shakespeare wrote 'Romeo and Juliet'",
        "William Shakespeare",
        "Shakespeare",
        "Shakespeare is the author of 'Romeo and Juliet'",
    ],
}

# Only "question" and "answer" are required; the other fields are ignored
# by this metric.
metric = BertAnswerRelevance()
print(metric(**datum))

Example Output

{
    'bert_answer_relevance': 0.8146507143974304
}
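
Since only question and answer are required, the metric can also be invoked with just those two fields, assuming it accepts them directly as keyword arguments as in the example above. The snippet below uses a deliberately off-topic answer (made up for this example) to show that an unrelated answer should produce a noticeably lower score:

from continuous_eval.metrics.generation.text import BertAnswerRelevance

metric = BertAnswerRelevance()
result = metric(
    question="Who wrote 'Romeo and Juliet'?",
    answer="The Eiffel Tower is in Paris.",  # off-topic answer
)
print(result["bert_answer_relevance"])  # expected to be well below the score above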