You can experiment with UpTrain on 20+ pre-configured evaluation metrics like:
Context Relevance: Evaluates how relevant the retrieved context is to the specified question.
Factual Accuracy: Evaluates whether the generated response is factually correct and grounded in the provided context.
Response Completeness: Evaluates whether the response answers all aspects of the specified question.
You can look at the complete list of UpTrain’s supported metrics here.
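If you prefer to run these checks from code, here is a minimal sketch using UpTrain's open-source Python client; the API key and the sample data row are placeholders you would replace with your own:

```python
from uptrain import EvalLLM, Evals

# Placeholder API key and data row; replace with your own.
OPENAI_API_KEY = "sk-..."

data = [{
    "question": "What is UpTrain used for?",
    "context": "UpTrain is an open-source tool for evaluating LLM applications.",
    "response": "UpTrain is used to evaluate LLM applications.",
}]

eval_llm = EvalLLM(openai_api_key=OPENAI_API_KEY)

# Run the three checks described above on the sample row.
results = eval_llm.evaluate(
    data=data,
    checks=[
        Evals.CONTEXT_RELEVANCE,
        Evals.FACTUAL_ACCURACY,
        Evals.RESPONSE_COMPLETENESS,
    ],
)
print(results)
```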
Create a new Project
Click on Create New Project from Home.
Enter Project Information
Project name: Create a name for your project.
Dataset name: Create a name for your dataset.
Choose File: Upload your dataset (a sample file format is sketched below).
Sample Dataset: Alternatively, select one of the provided sample datasets.
Select Evaluation LLM: Select an LLM to run evaluations.
Use same info to run Evaluations: Yes. If you do not wish to run experiments, you can select No and follow the steps here.
Experiment column: Enter the column to run experiments on.
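Whatever upload format the dashboard accepts, the dataset needs the columns your chosen evaluations rely on, typically question, context, and response. As an illustration only (the file name and rows are hypothetical), a JSONL dataset could be produced like this:

```python
import json

# Hypothetical rows illustrating the expected columns; adjust the
# column names to match the evaluations you plan to run.
rows = [
    {
        "question": "Which city is the capital of France?",
        "context": "Paris is the capital and largest city of France.",
        "response": "The capital of France is Paris.",
    },
]

# Write the rows as a JSONL file to upload via Choose File.
with open("sample_dataset.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```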
Select Evaluations to Run
View Experiments
You can see all the evaluations run using UpTrain.
You can also see individual logs.
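If you run the same checks through the open-source Python client rather than the dashboard, the returned results can be inspected directly. This sketch reuses the placeholder setup from above, and it assumes the client's usual convention of returning each input row with added score_<metric> and explanation_<metric> fields:

```python
import json

from uptrain import EvalLLM, Evals

# Placeholder setup, as in the earlier sketch.
eval_llm = EvalLLM(openai_api_key="sk-...")
results = eval_llm.evaluate(
    data=[{
        "question": "What is UpTrain used for?",
        "context": "UpTrain is an open-source tool for evaluating LLM applications.",
        "response": "UpTrain is used to evaluate LLM applications.",
    }],
    checks=[Evals.FACTUAL_ACCURACY],
)

# Each result echoes the input row plus fields such as
# score_factual_accuracy and explanation_factual_accuracy.
for row in results:
    print(json.dumps(row, indent=2))
```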