# Ollama
Ollama is a great solution for running large language models (LLMs) on your local system.
## How will this help?
Using Ollama, you can run models such as Llama and Gemma locally on your system.
In this tutorial, we will walk you through running evaluations with UpTrain using your local models hosted on Ollama.
## Prerequisites
- Install Ollama on your system; you can download it from here
- Pull the model you want to use with the `ollama pull` command, as shown after this list. For the list of models supported by Ollama, you can refer here
- Enter http://localhost:11434/ in your web browser to confirm Ollama is running
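For example, pulling the Stable LM 2 model used later in this tutorial looks like this (any other model from Ollama's library works the same way):

```
ollama pull stablelm2
```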
## How to integrate?
First, let’s import the necessary packages:
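The snippet below is a minimal sketch of the imports used in the rest of this tutorial; it assumes the `uptrain` package is installed (e.g. via `pip install uptrain`):

```python
# EvalLLM runs the evaluations, Evals lists the built-in checks,
# and Settings configures which model UpTrain should call.
from uptrain import EvalLLM, Evals, Settings

import json  # used later to pretty-print the results
```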
### Create your data
You can define your data as a simple dictionary with the following keys:
- `question`: The question you want to ask
- `context`: The context relevant to the question
- `response`: The response to the question
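A single-row dataset might look like the sketch below; the question, context, and response values are illustrative placeholders, not outputs from a real system:

```python
data = [
    {
        "question": "What is Ollama used for?",
        "context": "Ollama is a tool that lets you download and run large "
                   "language models locally on your own machine.",
        "response": "Ollama is used to run large language models locally "
                    "on your system.",
    }
]
```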
### Define the model
We will be using Stable LM 2 1.6B for this example. You can refer to its documentation on Ollama.
Remember to add `ollama/` at the beginning of the model name to let UpTrain know that you are using an Ollama model.
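Assuming UpTrain's `Settings` object is used to point evaluations at a model, configuring the Stable LM 2 model pulled above might look like this sketch:

```python
# The "ollama/" prefix tells UpTrain to route requests to the
# locally running Ollama server instead of a hosted API.
settings = Settings(model="ollama/stablelm2")
```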
You can check which models are already downloaded:

```
!ollama list
```

If stablelm2 is not listed, you can download it with:

```
!ollama pull stablelm2
```
### Create an EvalLLM Evaluator
Before we can start using UpTrain, we need to create an EvalLLM Evaluator.
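A minimal sketch, assuming `EvalLLM` accepts the `Settings` object defined above:

```python
# The evaluator will use the local Ollama model for all checks.
eval_llm = EvalLLM(settings=settings)
```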
We have used the following 3 metrics from UpTrain’s library:
- Context Relevance: Evaluates how relevant the retrieved context is to the question specified.
- Response Conciseness: Evaluates how concise the generated response is, and whether it includes additional irrelevant information for the question asked.
- Response Relevance: Evaluates how relevant the generated response is to the question specified.
You can look at the complete list of UpTrain’s supported metrics here
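Putting it together, running the three checks over the data might look like the sketch below; it assumes these checks are exposed under these names on the `Evals` enum:

```python
results = eval_llm.evaluate(
    data=data,
    checks=[
        Evals.CONTEXT_RELEVANCE,     # is the retrieved context relevant?
        Evals.RESPONSE_CONCISENESS,  # is the response free of irrelevant detail?
        Evals.RESPONSE_RELEVANCE,    # does the response answer the question?
    ],
)
```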
### View your results
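Assuming `evaluate` returns a list of dictionaries, you can pretty-print the scores with the `json` module imported earlier:

```python
print(json.dumps(results, indent=3))
```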
Sample Response: