# PANDA (Pedantic ANswer-correctness Determination and Adjudication): Improving Automatic Evaluation for Question Answering and Text Generation
Paper: arXiv 2402.11161
A fast and lightweight Python package for evaluating question-answering models and for prompting black-box and open-source large language models.

`pip install qa-metrics` is all you need!
```bash
pip install qa-metrics
```
Our package offers six QA evaluation methods with varying strengths:
| Method | Best For | Cost | Correlation with Human Judgment |
|---|---|---|---|
| Normalized Exact Match | Short-form QA (NQ-OPEN, HotpotQA, etc.) | Free | Good |
| PEDANTS | Both short & medium-form QA | Free | Very High |
| Neural Evaluation | Both short & long-form QA | Free | High |
| Open Source LLM Evaluation | All QA types | Free | High |
| Black-box LLM Evaluation | All QA types | Paid | Highest |
### Normalized Exact Match

#### em_match

Parameters
- `reference_answer` (list of str): A list of gold (correct) answers to the question
- `candidate_answer` (str): The answer provided by a candidate that needs to be evaluated

Returns
- `boolean`: True if there are any exact normalized matches between gold and candidate answers

```python
from qa_metrics.em import em_match

reference_answer = ["The Frog Prince", "The Princess and the Frog"]
candidate_answer = "The movie \"The Princess and the Frog\" is loosely based off the Brother Grimm's \"Iron Henry\""
match_result = em_match(reference_answer, candidate_answer)
```
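Because the return value is a plain boolean, it can be aggregated directly. Below is a minimal sketch over a hypothetical mini-dataset; the `examples` list is illustrative and not part of the package.

```python
# Hypothetical (gold answers, model output) pairs; not shipped with the package.
examples = [
    (["The Frog Prince", "The Princess and the Frog"], "The Princess and the Frog"),
    (["Paris"], "The capital of France is Paris."),
]

# em_match returns True/False, so summing gives the number of exact matches.
correct = sum(em_match(golds, pred) for golds, pred in examples)
print(f"Exact-match accuracy: {correct / len(examples):.2f}")
```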
### F1 Score

#### f1_score_with_precision_recall

Parameters
- `reference_answer` (str): A gold (correct) answer to the question
- `candidate_answer` (str): The answer provided by a candidate that needs to be evaluated

Returns
- `dictionary`: Contains the F1 score, precision, and recall between a gold and candidate answer

#### f1_match

Parameters
- `reference_answer` (list of str): List of gold answers
- `candidate_answer` (str): Candidate answer to evaluate
- `threshold` (float): F1 score threshold for considering a match (default: 0.5)

Returns
- `boolean`: True if F1 score exceeds threshold for any gold answer

```python
from qa_metrics.f1 import f1_match, f1_score_with_precision_recall

f1_stats = f1_score_with_precision_recall(reference_answer[0], candidate_answer)
match_result = f1_match(reference_answer, candidate_answer, threshold=0.5)
```
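The returned objects can be inspected directly; a short sketch follows (the exact dictionary key names may differ by package version, so none are assumed here).

```python
# f1_stats is a dictionary holding the F1 score, precision, and recall
# for this gold/candidate pair.
print(f1_stats)

# match_result is a boolean, so it can be aggregated exactly like em_match above.
print("Token-level F1 match:", match_result)
```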
### PEDANTS

#### get_score

Parameters
- `reference_answer` (str): A gold answer
- `candidate_answer` (str): Candidate answer to evaluate
- `question` (str): The question being evaluated

Returns
- `float`: The similarity score between two strings (0 to 1)

#### get_highest_score

Parameters
- `reference_answer` (list of str): List of gold answers
- `candidate_answer` (str): Candidate answer to evaluate
- `question` (str): The question being evaluated

Returns
- `dictionary`: Contains the gold answer and candidate answer pair with the highest matching score

#### get_scores

Parameters
- `reference_answer` (list of str): List of gold answers
- `candidate_answer` (str): Candidate answer to evaluate
- `question` (str): The question being evaluated

Returns
- `dictionary`: Contains matching scores for all gold answer and candidate answer pairs

#### evaluate

Parameters
- `reference_answer` (list of str): List of gold answers
- `candidate_answer` (str): Candidate answer to evaluate
- `question` (str): The question being evaluated

Returns
- `boolean`: True if the candidate answer matches any gold answer

#### get_question_type

Parameters
- `reference_answer` (list of str): List of gold answers
- `question` (str): The question being evaluated

Returns
- `list`: The type of the question (what, who, when, how, why, which, where)

#### get_judgement_type

Parameters
- `reference_answer` (list of str): List of gold answers
- `candidate_answer` (str): Candidate answer to evaluate
- `question` (str): The question being evaluated

Returns
- `list`: A list of revised rules applicable to judge answer correctness

```python
from qa_metrics.pedant import PEDANT

# Illustrative question for the running Frog Prince example (not taken from the package docs).
question = "What movie is loosely based off the Brothers Grimm's \"Iron Henry\"?"

pedant = PEDANT()
scores = pedant.get_scores(reference_answer, candidate_answer, question)
match_result = pedant.evaluate(reference_answer, candidate_answer, question)
```
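The remaining PEDANT helpers follow the signatures documented above. Here is a minimal sketch reusing `reference_answer`, `candidate_answer`, and the illustrative `question` from the example above; the output formats are not assumed beyond what the documentation states.

```python
# Pairwise similarity (0 to 1) between a single gold answer and the candidate.
score = pedant.get_score(reference_answer[0], candidate_answer, question)

# Gold/candidate pair with the highest matching score.
best_pair = pedant.get_highest_score(reference_answer, candidate_answer, question)

# Question type (what/who/when/how/why/which/where) and the rules PEDANTS
# would apply when judging correctness.
question_type = pedant.get_question_type(reference_answer, question)
rules = pedant.get_judgement_type(reference_answer, candidate_answer, question)

print(score, best_pair, question_type, rules)
```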
### Neural Evaluation

#### get_score

Parameters
- `reference_answer` (str): A gold answer
- `candidate_answer` (str): Candidate answer to evaluate
- `question` (str): The question being evaluated

Returns
- `float`: The similarity score between two strings (0 to 1)

#### get_highest_score

Parameters
- `reference_answer` (list of str): List of gold answers
- `candidate_answer` (str): Candidate answer to evaluate
- `question` (str): The question being evaluated

Returns
- `dictionary`: Contains the gold answer and candidate answer pair with the highest matching score

#### get_scores

Parameters
- `reference_answer` (list of str): List of gold answers
- `candidate_answer` (str): Candidate answer to evaluate
- `question` (str): The question being evaluated

Returns
- `dictionary`: Contains matching scores for all gold answer and candidate answer pairs

#### transformer_match

Parameters
- `reference_answer` (list of str): List of gold answers
- `candidate_answer` (str): Candidate answer to evaluate
- `question` (str): The question being evaluated

Returns
- `boolean`: True if the transformer model considers the candidate answer equivalent to any gold answer

```python
from qa_metrics.transformerMatcher import TransformerMatcher

### supports `zli12321/roberta-large-qa-evaluator`, `zli12321/answer_equivalence_bert`, `zli12321/answer_equivalence_distilbert`, `zli12321/answer_equivalence_roberta`, `zli12321/answer_equivalence_distilroberta`
tm = TransformerMatcher("zli12321/answer_equivalence_tiny_bert")
match_result = tm.transformer_match(reference_answer, candidate_answer, question)
```
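The matcher exposes the same scoring helpers documented above; a brief sketch, again reusing the variables from the earlier examples.

```python
# Similarity scores for every gold/candidate pair, as judged by the fine-tuned model.
scores = tm.get_scores(reference_answer, candidate_answer, question)

# The single gold/candidate pair the model scores highest.
best_pair = tm.get_highest_score(reference_answer, candidate_answer, question)

print(scores, best_pair)
```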
### Black-box LLM Evaluation

#### prompt_gpt

Parameters
- `prompt` (str): The input prompt text
- `model_engine` (str): OpenAI model to use (e.g., 'gpt-3.5-turbo')
- `temperature` (float): Controls randomness (0-1)
- `max_tokens` (int): Maximum tokens in response

```python
from qa_metrics.prompt_llm import CloseLLM

model = CloseLLM()
model.set_openai_api_key(YOUR_OPENAI_KEY)
result = model.prompt_gpt(prompt=prompt, model_engine='gpt-3.5-turbo')
```
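One way to use this interface for answer judgment is to build the evaluation prompt yourself. The sketch below reuses the variables from the earlier examples; the prompt wording is an illustration, not a template provided by the package.

```python
# Illustrative evaluation prompt; the wording is not part of qa-metrics.
prompt = (
    "Determine whether the candidate answer is correct given the question "
    "and the gold answers. Reply with 'correct' or 'incorrect'.\n"
    f"Question: {question}\n"
    f"Gold answers: {reference_answer}\n"
    f"Candidate answer: {candidate_answer}"
)

# temperature and max_tokens are the documented optional parameters.
result = model.prompt_gpt(
    prompt=prompt,
    model_engine='gpt-3.5-turbo',
    temperature=0.1,
    max_tokens=10,
)
print(result)
```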
#### prompt_claude

Parameters
- `prompt` (str): The input prompt text
- `model_engine` (str): Claude model to use
- `anthropic_version` (str): API version
- `max_tokens_to_sample` (int): Maximum tokens in response
- `temperature` (float): Controls randomness (0-1)

```python
model = CloseLLM()
model.set_anthropic_api_key(YOUR_ANTHROPIC_KEY)
result = model.prompt_claude(prompt=prompt, model_engine='claude-v1')
```
### Open Source LLM Evaluation

#### prompt

Parameters
- `message` (str): The input message text
- `model_engine` (str): Model to use
- `temperature` (float): Controls randomness (0-1)
- `max_tokens` (int): Maximum tokens in response

```python
from qa_metrics.prompt_open_llm import OpenLLM

model = OpenLLM()
model.set_deepinfra_key(YOUR_DEEPINFRA_KEY)
result = model.prompt(message=prompt, model_engine='mistralai/Mixtral-8x7B-Instruct-v0.1')
```
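The same illustrative evaluation prompt from the black-box sketch above can be sent to an open-source model; a short sketch passing the documented optional parameters.

```python
# temperature and max_tokens mirror the documented parameters above.
result = model.prompt(
    message=prompt,
    model_engine='mistralai/Mixtral-8x7B-Instruct-v0.1',
    temperature=0.1,
    max_tokens=10,
)
print(result)
```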
Our fine-tuned models are available on Hugging Face:

- `zli12321/roberta-large-qa-evaluator`
- `zli12321/answer_equivalence_bert`
- `zli12321/answer_equivalence_distilbert`
- `zli12321/answer_equivalence_roberta`
- `zli12321/answer_equivalence_distilroberta`
- `zli12321/answer_equivalence_tiny_bert`
```bibtex
@inproceedings{li-etal-2024-pedants,
title = "{PEDANTS}: Cheap but Effective and Interpretable Answer Equivalence",
author = "Li, Zongxia and
Mondal, Ishani and
Nghiem, Huy and
Liang, Yijun and
Boyd-Graber, Jordan Lee",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-emnlp.548/",
doi = "10.18653/v1/2024.findings-emnlp.548",
pages = "9373--9398",
abstract = "Question answering (QA) can only make progress if we know if an answer is correct, but current answer correctness (AC) metrics struggle with verbose, free-form answers from large language models (LLMs). There are two challenges with current short-form QA evaluations: a lack of diverse styles of evaluation data and an over-reliance on expensive and slow LLMs. LLM-based scorers correlate better with humans, but this expensive task has only been tested on limited QA datasets. We rectify these issues by providing rubrics and datasets for evaluating machine QA adopted from the Trivia community. We also propose an efficient, and interpretable QA evaluation that is more stable than an exact match and neural methods (BERTScore)."
}
```
This project is licensed under the MIT License.
For questions or comments, please contact: zli12321@umd.edu