Evaluations
What are Evaluations?
Using UpTrain you can run evaluations on 20+ pre-configured metrics like:
- Context Relevance: Evaluates how relevant the retrieved context is to the question specified.
- Factual Accuracy: Evaluates whether the response generated is factually correct and grounded by the provided context.
- Response Completeness: Evaluates whether the response has answered all the aspects of the question specified.
You can look at the complete list of UpTrain's supported metrics here.
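The same checks can also be run programmatically with UpTrain's open-source Python package. Below is a minimal sketch, assuming the `uptrain` package is installed; the sample data and API key are placeholders, and the exact API may differ across versions:

```python
# Sketch of running UpTrain's pre-configured checks from Python.
# The records below are illustrative; each one carries the fields
# the evaluations above operate on.
data = [
    {
        "question": "What is UpTrain?",
        "context": "UpTrain is an open-source tool to evaluate LLM applications.",
        "response": "UpTrain is an open-source tool for evaluating LLM apps.",
    }
]

def run_evals(data, openai_api_key):
    # Import inside the function so the data schema above can be
    # inspected even where `uptrain` is not installed.
    from uptrain import EvalLLM, Evals

    eval_llm = EvalLLM(openai_api_key=openai_api_key)
    # Run the three metrics described above on every record.
    return eval_llm.evaluate(
        data=data,
        checks=[
            Evals.CONTEXT_RELEVANCE,
            Evals.FACTUAL_ACCURACY,
            Evals.RESPONSE_COMPLETENESS,
        ],
    )
```

Each result in the returned list carries a score per check (e.g. `score_context_relevance`) alongside the original record.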
How does it work?
1. Create a new Project
Click on Create New Project from Home
![](https://mintlify.s3-us-west-1.amazonaws.com/uptrain/assets/dashboard/dashboard_home.png)
2. Enter Project Information
![](https://mintlify.s3-us-west-1.amazonaws.com/uptrain/assets/dashboard/dashboard_project1.png)
- Project name: Create a name for your project
- Dataset name: Create a name for your dataset
- Project Type: Select project type: Evaluations
- Choose File: Upload your dataset. Sample dataset:
  {"question":"","response":"","context":""}
  {"question":"","response":"","context":""}
- Evaluation LLM: Select an LLM to run evaluations
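The uploaded file holds one JSON object per line (JSONL), as in the sample above. A small stdlib-only sketch that writes such a file (the file name and records are illustrative):

```python
import json

# Illustrative records matching the sample dataset schema above.
records = [
    {"question": "What is UpTrain?", "response": "An LLM evaluation tool.", "context": "UpTrain docs."},
    {"question": "Is the dashboard in beta?", "response": "Yes.", "context": "UpTrain dashboard page."},
]

# Write one JSON object per line -- the JSONL layout shown in the sample.
with open("dataset.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

# Read it back to confirm every line is valid JSON with the expected keys.
with open("dataset.jsonl") as f:
    parsed = [json.loads(line) for line in f]
```

Empty strings are fine as placeholders, but evaluations like Factual Accuracy need a non-empty context to ground the response against.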
3. Select Evaluations to Run
![](https://mintlify.s3-us-west-1.amazonaws.com/uptrain/assets/dashboard/eval_select_metrics.png)
4. View Evaluations
You can see all the evaluations run using UpTrain.
![](https://mintlify.s3-us-west-1.amazonaws.com/uptrain/assets/dashboard/eval.png)
You can also drill down into individual logs.
![](https://mintlify.s3-us-west-1.amazonaws.com/uptrain/assets/dashboard/eval_logs.png)
UpTrain Dashboard is currently in beta. We would love your feedback to help improve it.