How to do it?

1. Install UpTrain and LlamaIndex

pip install uptrain llama-index
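
If you are running this inside the notebook itself rather than a terminal, prefix the command with ! so it executes in a cell:

!pip install uptrain llama-index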

2. Import required libraries

import os
import openai
import pandas as pd

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
from uptrain import Evals, EvalLlamaIndex, Settings

3. Create the dataset folder for the query engine

You can use any documents that you have for this step. For this tutorial, we will use data on New York City extracted from Wikipedia. We will only add one document to the folder, but you can add as many as you want.

url = "https://uptrain-assets.s3.ap-south-1.amazonaws.com/data/nyc_text.txt"
if not os.path.exists('nyc_wikipedia'):
    os.makedirs('nyc_wikipedia')
dataset_path = os.path.join('./nyc_wikipedia', "nyc_text.txt")

if not os.path.exists(dataset_path):
    import httpx
    r = httpx.get(url)
    with open(dataset_path, "wb") as f:
        f.write(r.content)
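
As a quick, optional sanity check, you can confirm that the file was downloaded and is not empty before building the index:

# Optional: verify the download succeeded
assert os.path.exists(dataset_path), "nyc_text.txt was not downloaded"
print(f"Downloaded {os.path.getsize(dataset_path)} bytes to {dataset_path}")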

4. Make a list of queries

Before we can generate responses, we need a list of queries. Since the query engine is built over a document about New York City, we will create a list of queries related to New York City.

data = [
    {"question": "What is the population of New York City?"},
    {"question": "What is the area of New York City?"},
    {"question": "What is the largest borough in New York City?"},
    {"question": "What is the average temperature in New York City?"},
    {"question": "What is the main airport in New York City?"},
    {"question": "What is the famous landmark in New York City?"},
    {"question": "What is the official language of New York City?"},
    {"question": "What is the currency used in New York City?"},
    {"question": "What is the time zone of New York City?"},
    {"question": "What is the famous sports team in New York City?"}
]

This notebook uses the OpenAI API both to build the vector store index and to generate responses to the queries, so set openai.api_key to your OpenAI API key.

openai.api_key = "sk-************************"  # your OpenAI API key

5. Create a query engine using LlamaIndex

Let’s create a vector store index using LlamaIndex and then use it as a query engine to retrieve relevant sections from the document.

documents = SimpleDirectoryReader("./nyc_wikipedia/").load_data()

vector_index = VectorStoreIndex.from_documents(
    documents, service_context=ServiceContext.from_defaults(chunk_size=512)
)

query_engine = vector_index.as_query_engine()
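
Before running the full evaluation, it is worth sanity-checking the query engine with a single question (optional; the question below is taken from the query list above):

# Optional: run one query to confirm the engine retrieves and answers correctly
response = query_engine.query("What is the population of New York City?")
print(response)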

6. Create the Settings object

UpTrain uses this object to configure the evaluations, including the OpenAI API key used to run them.

settings = Settings(
    openai_api_key=openai.api_key,
)
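
Depending on your UpTrain version, the Settings object can also choose which LLM runs the evaluations. A hedged sketch is shown below; the model parameter and the model identifier are assumptions to verify against the UpTrain documentation:

# Sketch: explicitly choose the evaluator LLM (parameter name and model id
# assumed from UpTrain's documentation; verify against your installed version)
settings = Settings(
    model="gpt-4",
    openai_api_key=openai.api_key,
)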

7. Create the EvalLlamaIndex object

Now that we have created the query engine, we can use it to create an EvalLlamaIndex object. This object will be used to generate responses for the queries.

llamaindex_object = EvalLlamaIndex(settings=settings, query_engine=query_engine)

8. Run the evaluations

Now that we have the list of queries, we can use the EvalLlamaIndex object to generate responses for them and then run evaluations on those responses. You can find an extensive list of the evaluations offered by UpTrain in its documentation. We have chosen the two that we found most relevant for this tutorial:

  1. Context Relevance: This evaluation checks whether the retrieved context is relevant to the query. This matters because the response is generated from the retrieved context; if that context is irrelevant to the query, the response will be irrelevant as well.
  2. Response Conciseness: This evaluation checks whether the response is concise, i.e. that it answers the question without including unnecessary information.

results = llamaindex_object.evaluate(
    data=data,
    checks=[
        Evals.CONTEXT_RELEVANCE,
        Evals.RESPONSE_CONCISENESS
    ]
)

9. Display the results

pd.DataFrame(results)
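
Each row of the resulting DataFrame corresponds to one question together with its evaluation scores. A rough sketch for surfacing the weakest answers is shown below; the score column names are assumptions (UpTrain typically prefixes them with score_, but the exact names may vary by version):

# Sketch: list the lowest-scoring rows first. Column names are assumed to
# follow UpTrain's "score_<check_name>" convention and may differ by version.
df = pd.DataFrame(results)
score_cols = [c for c in df.columns if c.startswith("score_")]
if score_cols:
    print(df.sort_values(score_cols).head())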