Query Clarity Evals
Sub-Query Completeness
Evaluates whether the list of generated sub-questions comprehensively covers all aspects of the main question.
Sub-Query Completeness checks whether the sub-queries generated from a question are complete: it considers all the sub-queries together and evaluates whether, taken as a whole, they answer every aspect of the question.
Columns required:
question: The question asked by the user
sub_questions: Sub-questions generated from the question
How to use it?
By default, we use GPT-3.5 Turbo for evaluations. If you want to use a different model, check out this tutorial.
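The check expects each row of data to carry the two required columns listed above. A minimal sketch of preparing that payload is shown below; the EvalLLM and Evals.SUB_QUERY_COMPLETENESS names are assumptions based on the library's public API, and the network call is left commented out so the sketch stays self-contained:

```python
# Illustrative payload for the Sub-Query Completeness check.
# Each row must provide both required columns: `question` and `sub_questions`.
data = [
    {
        "question": "When was the Taj Mahal built, who built it, and where is it located?",
        "sub_questions": (
            "1. When was the Taj Mahal built?\n"
            "2. Who built the Taj Mahal?\n"
            "3. Where is the Taj Mahal located?"
        ),
    }
]

# Assumed usage (requires an API key and network access):
# from uptrain import EvalLLM, Evals
# eval_llm = EvalLLM(openai_api_key="sk-...")
# results = eval_llm.evaluate(data=data, checks=[Evals.SUB_QUERY_COMPLETENESS])

# Sanity-check the payload shape before running the evaluation.
for row in data:
    assert "question" in row and "sub_questions" in row
```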
Sample Response:
A higher Sub-Query Completeness score reflects that the generated sub-questions cover all aspects of the question asked.
The sub_questions, such as "When was the Taj Mahal built?", "Who built the Taj Mahal?", and "Where is the Taj Mahal?", do not cover some aspects of the question, resulting in a low Sub-Query Completeness score.
How it works?
We evaluate Sub-Query Completeness by determining which of the following three cases applies to the given task data:
- Sub-questions collectively cover all aspects of the main question.
- Sub-questions collectively cover only a few aspects of the main question.
- Sub-questions collectively do not cover any aspect of the main question.
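The three cases above can be pictured as a three-way mapping from aspect coverage to a score. The following is only a toy illustration of that mapping, not the library's actual LLM-based grader, and the function name and score values are assumptions:

```python
def completeness_score(aspects_covered: int, total_aspects: int) -> float:
    """Toy mapping of the three Sub-Query Completeness cases to a score.

    1.0 -> sub-questions cover all aspects of the main question
    0.5 -> sub-questions cover only a few aspects
    0.0 -> sub-questions cover no aspect at all
    """
    if aspects_covered == total_aspects:
        return 1.0
    if aspects_covered > 0:
        return 0.5
    return 0.0
```

In the real check, an LLM judges which case holds for the given question and sub-questions; this sketch only shows how the verdict translates into a higher or lower score.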