Evaluations
What are Evaluations?
Using UpTrain, you can run evaluations on 20+ pre-configured metrics, such as:
- Context Relevance: Evaluates how relevant the retrieved context is to the question specified.
- Factual Accuracy: Evaluates whether the response generated is factually correct and grounded in the provided context.
- Response Completeness: Evaluates whether the response answers all aspects of the question specified.
You can find the complete list of UpTrain's supported metrics here.
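The same pre-configured checks can be run from Python. A minimal sketch, assuming the dashboard wraps the open-source `uptrain` package (the `EvalLLM` client and `Evals` names below come from that package and may differ by version; the API call itself is not executed here since it needs an API key):

```python
def build_rows(questions, contexts, responses):
    """Assemble rows in the {question, context, response} shape the checks expect."""
    return [
        {"question": q, "context": c, "response": r}
        for q, c, r in zip(questions, contexts, responses)
    ]


def run_checks(rows, openai_api_key):
    # Requires `pip install uptrain` and a valid key; shown as a sketch only.
    from uptrain import EvalLLM, Evals

    client = EvalLLM(openai_api_key=openai_api_key)
    return client.evaluate(
        data=rows,
        checks=[
            Evals.CONTEXT_RELEVANCE,
            Evals.FACTUAL_ACCURACY,
            Evals.RESPONSE_COMPLETENESS,
        ],
    )


# Example payload for the three metrics listed above.
rows = build_rows(
    ["What is UpTrain?"],
    ["UpTrain is an open-source tool to evaluate LLM applications."],
    ["UpTrain is an open-source LLM evaluation tool."],
)
```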
How does it work?
1. Create a new Project
Click on Create New Project from Home.
2. Enter Project Information
- Project name: Create a name for your project
- Dataset name: Create a name for your dataset
- Choose File: Upload your dataset
- Sample Dataset:
- Select Evaluation LLM: Select an LLM to run evaluations
- Use same info to run Evaluations: No (since we are running evaluations here). If you wish to run experiments, you can select Yes and follow the steps here.
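For the Choose File step, here is a minimal sketch of building an upload file, assuming the dashboard accepts JSONL rows with `question`, `context`, and `response` fields (the file format and field names are an assumption based on the metrics described above, not a documented spec):

```python
import json
import os
import tempfile

# One row per example to evaluate; field names assumed from the metric
# descriptions (question asked, context retrieved, response generated).
rows = [
    {
        "question": "Which city is the capital of France?",
        "context": "Paris is the capital and largest city of France.",
        "response": "The capital of France is Paris.",
    },
]

# Write the dataset as JSON Lines: one JSON object per line.
path = os.path.join(tempfile.gettempdir(), "sample_dataset.jsonl")
with open(path, "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```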
3. Select Evaluations to Run
4. View Evaluations
You can see all the evaluations run using UpTrain. You can also see individual logs.
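If you export the individual logs, a small script can flag low-scoring rows for review. A hypothetical sketch, assuming each log line is a JSON row with per-metric scores in [0, 1] under keys like `score_context_relevance` (these key names are an assumption, not the dashboard's documented export format):

```python
import json

# Two made-up log lines standing in for an exported evaluation log.
log_lines = [
    '{"question": "Q1", "score_context_relevance": 1.0, "score_factual_accuracy": 0.5}',
    '{"question": "Q2", "score_context_relevance": 0.2, "score_factual_accuracy": 1.0}',
]


def low_scoring(lines, metric, threshold=0.5):
    """Return the questions whose score for `metric` falls below `threshold`."""
    flagged = []
    for line in lines:
        row = json.loads(line)
        if row.get(metric, 1.0) < threshold:
            flagged.append(row["question"])
    return flagged


# Usage: find rows where the retrieved context was judged irrelevant.
flagged = low_scoring(log_lines, "score_context_relevance")
```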
The UpTrain Dashboard is currently in beta. We would love your feedback to improve it.