What are Prompts?

UpTrain lets you manage and experiment with your prompt iterations, scoring them against 20+ pre-configured evaluation metrics such as:

  1. Context Relevance: Evaluates how relevant the retrieved context is to the specified question.

  2. Factual Accuracy: Evaluates whether the generated response is factually correct and grounded in the provided context.

  3. Response Completeness: Evaluates whether the response answers all aspects of the specified question.

You can find the complete list of UpTrain's supported metrics here.
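As a purely illustrative sketch (not UpTrain's actual implementation, which is LLM-graded), a context-relevance-style check conceptually measures how much of the question is covered by the retrieved context. A naive token-overlap version might look like:

```python
import string

def context_relevance_score(question: str, context: str) -> float:
    """Toy context-relevance score: the fraction of question tokens
    that also appear in the retrieved context. Purely illustrative --
    UpTrain's real metrics use an evaluation LLM, not lexical overlap."""
    strip = str.maketrans("", "", string.punctuation)
    q_tokens = set(question.lower().translate(strip).split())
    c_tokens = set(context.lower().translate(strip).split())
    if not q_tokens:
        return 0.0
    return len(q_tokens & c_tokens) / len(q_tokens)

score = context_relevance_score(
    "What is the capital of France?",
    "Paris is the capital and largest city of France.",
)
print(f"{score:.2f}")  # a value between 0.0 and 1.0
```

A real LLM-graded check would also catch paraphrases and reworded context that simple token overlap misses, which is why an evaluation LLM is selected in the setup steps below.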

How does it work?

Step 1: Create a new Project

Click on Create New Project from the Home page.

Step 2: Enter Project Information

  • Project name: A name for your project
  • Dataset name: A name for your dataset
  • Project Type: Select Prompts
  • Choose File: Upload your dataset. Sample dataset (one JSON object per line):
    {"question":"","response":"","context":""}
    {"question":"","response":"","context":""}
  • Evaluation LLM: Select an LLM to run evaluations
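The sample dataset above is in JSON Lines format: one JSON object per line with `question`, `response`, and `context` fields. As a small sketch (the file name `sample_dataset.jsonl` and the helper are hypothetical; only the field names come from the sample above), you could write and sanity-check such a file before uploading:

```python
import json

# Field names taken from the sample dataset shown above.
REQUIRED_KEYS = {"question", "response", "context"}

def validate_dataset(path: str) -> int:
    """Check that every non-blank line is a JSON object containing
    the question/response/context keys; return the row count."""
    count = 0
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            if not line.strip():
                continue  # tolerate blank lines
            row = json.loads(line)
            missing = REQUIRED_KEYS - row.keys()
            if missing:
                raise ValueError(f"line {lineno}: missing keys {missing}")
            count += 1
    return count

# Write a tiny file in the same shape as the sample above
# (contents are made up for illustration).
rows = [
    {"question": "What does UpTrain evaluate?",
     "response": "Prompt quality via pre-configured metrics.",
     "context": "UpTrain offers 20+ evaluation metrics."},
]
with open("sample_dataset.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

print(validate_dataset("sample_dataset.jsonl"))
```

Catching a malformed row locally is cheaper than discovering it after the upload step.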
Step 3: Enter your Prompt

Step 4: Select Evaluations to Run

Step 5: View Prompts

You can see all the evaluations run on your prompts in UpTrain.

The UpTrain Dashboard is currently in Beta. We would love your feedback to help improve it.