Mistral
You can use your Mistral API key to run LLM evaluations using UpTrain.
How to do it?
Install UpTrain
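You can install UpTrain via pip:

```bash
pip install uptrain
```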
Create your data
You can define your data as a list of dictionaries to run evaluations with UpTrain. Each dictionary should contain the following fields:
- `question`: The question you want to ask
- `context`: The context relevant to the question
- `response`: The response to the question
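For instance, a minimal sketch (the question, context, and response values below are purely illustrative):

```python
data = [{
    "question": "Which is the most popular global sport?",
    "context": "Football is the world's most popular sport, with an estimated 4 billion fans across the globe.",
    "response": "Football is the most popular sport, with around 4 billion fans worldwide.",
}]
```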
Enter your Mistral API Key
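A minimal sketch, storing the key in a variable (replace the placeholder with the key from your Mistral account):

```python
MISTRAL_API_KEY = "**********"  # placeholder: your Mistral API key
```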
Create an EvalLLM Evaluator
The model name should start with `mistral/` for UpTrain to recognize that you are using Mistral. For example, if you are using mistral-tiny, the model name should be `mistral/mistral-tiny`.
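A minimal sketch of creating the evaluator; this assumes the key is passed through the mistral_api_key field of UpTrain's Settings:

```python
from uptrain import EvalLLM, Settings

settings = Settings(
    model="mistral/mistral-tiny",     # prefix with "mistral/" so UpTrain routes to Mistral
    mistral_api_key=MISTRAL_API_KEY,
)
eval_llm = EvalLLM(settings)
```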
Evaluate data using UpTrain
Now that we have our data, we can evaluate it using UpTrain. We use the `evaluate` method to do this. This method takes the following arguments:
- `data`: The data you want to log and evaluate
- `checks`: The evaluations you want to perform on your data
We have used the following metrics from UpTrain’s library:
- Context Relevance: Evaluates how relevant the retrieved context is to the question specified.
- Response Relevance: Evaluates how relevant the generated response was to the question specified.
You can look at the complete list of UpTrain’s supported metrics here.
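Putting it together, a sketch of the call (assuming the Evals enum members CONTEXT_RELEVANCE and RESPONSE_RELEVANCE, which correspond to the two metrics above):

```python
from uptrain import Evals

results = eval_llm.evaluate(
    data=data,
    checks=[Evals.CONTEXT_RELEVANCE, Evals.RESPONSE_RELEVANCE],
)
```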
Print the results
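For example, pretty-printing the results as JSON:

```python
import json

print(json.dumps(results, indent=3))
```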
Sample response:
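The exact output depends on your data and model; the structure below is illustrative (scores and explanations are placeholders):

```json
[
   {
      "question": "Which is the most popular global sport?",
      "context": "Football is the world's most popular sport, with an estimated 4 billion fans across the globe.",
      "response": "Football is the most popular sport, with around 4 billion fans worldwide.",
      "score_context_relevance": 1.0,
      "explanation_context_relevance": "...",
      "score_response_relevance": 1.0,
      "explanation_response_relevance": "..."
   }
]
```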