Sub-Question Query Generation Evaluation
The SubQuestionQueryGeneration operator decomposes a question into sub-questions, generating a response for each using a RAG query engine. Given this added complexity, we run all of the previous evaluations and add one more:
- Sub Query Completeness: Ensures that the sub-questions accurately and comprehensively cover the original query.
How to do it?
Install UpTrain and LlamaIndex
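The packages can be installed with pip; the integration package name `llama-index-callbacks-uptrain` is the one published for this callback handler:

```shell
pip install -q uptrain llama-index llama-index-callbacks-uptrain llama-index-readers-web
```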
Import required libraries
Setup UpTrain Open-Source Software (OSS)
You can use the open-source evaluation service to evaluate your model. In this case, you will need to provide an OpenAI API key. You can get yours here.
Parameters:
- `key_type`: "openai"
- `api_key`: "OPENAI_API_KEY"
- `project_name_prefix`: "PROJECT_NAME_PREFIX"
Load and Parse Documents
Load documents from Paul Graham’s essay “What I Worked On”.
Parse the document into nodes.
Sub-Question Query Generation Evaluation
The sub-question query engine is used to tackle the problem of answering a complex query using multiple data sources. It first breaks the complex query down into sub-questions for each relevant data source, then gathers the intermediate responses and synthesizes a final response.
The UpTrain callback handler automatically captures each sub-question and its response once generated, and runs the following three evaluations (graded from 0 to 1) on each response:
- Context Relevance: Determines whether the context retrieved for the query is relevant to generating the response.
- Factual Accuracy: Assesses if the LLM is hallucinating or providing incorrect information.
- Response Completeness: Checks if the response contains all the information requested by the query.
In addition to the above evaluations, the callback handler will also run the following evaluation:
- Sub Query Completeness: Checks if the sub-questions accurately and completely cover the original query.