How will this help?
Using Ollama, you can run models like Llama and Gemma locally on your system. In this tutorial we will walk you through running evaluations on UpTrain using your local models hosted on Ollama.

Prerequisites
- Install Ollama on your system; you can download it from here
- Pull the model using the command `ollama pull <model_name>`; this tutorial uses `ollama pull stablelm2` for Stable LM 2 1.6B. For the list of models supported by Ollama, you can refer here
- You can enter http://localhost:11434/ in your web browser to confirm Ollama is running
How to integrate?
First, let’s import the necessary packages.
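As a minimal sketch, assuming the open-source `uptrain` package is installed (e.g. via `pip install uptrain`), the imports used in the rest of this tutorial look like this:

```python
# EvalLLM runs the checks, Evals exposes the built-in metrics,
# and Settings tells UpTrain which model to evaluate with.
from uptrain import EvalLLM, Evals, Settings
```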
Next, define the data you want to evaluate. Each row has three fields:
- `question`: The question you want to ask
- `context`: The context relevant to the question
- `response`: The response to the question
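For illustration, the data can be expressed as a list of dictionaries with these three keys; the question, context, and response values below are made-up placeholders:

```python
# A single evaluation row; in practice you would append one dict per sample.
data = [{
    "question": "What causes seasons on Earth?",
    "context": "Earth's axis is tilted about 23.5 degrees relative to its "
               "orbital plane, so the hemispheres receive different amounts "
               "of direct sunlight over the year.",
    "response": "Seasons are caused by the tilt of Earth's axis, which "
                "changes how directly sunlight strikes each hemisphere as "
                "the planet orbits the Sun.",
}]
```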
You can check whether you have already downloaded Stable LM 2 1.6B by running `!ollama list`. If it is not listed, you can download it by running `!ollama pull stablelm2`.
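To route evaluations through your locally hosted model, a sketch along these lines should work; note that the exact `ollama/stablelm2` model string is an assumption based on UpTrain's convention of prefixing Ollama-hosted models with `ollama/`:

```python
# Point UpTrain at the local Ollama server instead of a hosted LLM API.
settings = Settings(model="ollama/stablelm2")  # assumed model-string convention
eval_llm = EvalLLM(settings)
```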
We will run the following evaluations on this data:
- Context Relevance: Evaluates how relevant the retrieved context is to the question asked.
- Response Conciseness: Evaluates how concise the generated response is, i.e., whether it contains any additional irrelevant information for the question asked.
- Response Relevance: Evaluates how relevant the generated response is to the question asked.
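Putting it together, here is a sketch of running these three checks; the `Evals.CONTEXT_RELEVANCE`, `Evals.RESPONSE_CONCISENESS`, and `Evals.RESPONSE_RELEVANCE` enum members correspond to the metrics listed above:

```python
# Run all three evaluations on the data defined earlier and inspect the scores.
results = eval_llm.evaluate(
    data=data,
    checks=[
        Evals.CONTEXT_RELEVANCE,
        Evals.RESPONSE_CONCISENESS,
        Evals.RESPONSE_RELEVANCE,
    ],
)
print(results)  # each row carries a score and an explanation per check
```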