# UpTrain

## Docs

- [Frequently Asked Questions](https://docs.uptrain.ai/faq/faq.md): Frequently asked questions about UpTrain
- [Evaluations](https://docs.uptrain.ai/getting-started/dashboard/evaluations.md)
- [Experiments](https://docs.uptrain.ai/getting-started/dashboard/experiments.md)
- [Getting Started](https://docs.uptrain.ai/getting-started/dashboard/getting_started.md)
- [Create a Project](https://docs.uptrain.ai/getting-started/dashboard/project.md)
- [Introduction](https://docs.uptrain.ai/getting-started/introduction.md): What is UpTrain?
- [Quickstart](https://docs.uptrain.ai/getting-started/quickstart.md): Get started with UpTrain in a few simple steps
- [Why Open Source?](https://docs.uptrain.ai/getting-started/why-open-source.md): Why we decided to open-source our LLM evaluations
- [Why we are building UpTrain](https://docs.uptrain.ai/getting-started/why-we-are-building-uptrain.md): UpTrain's origin story
- [Overview](https://docs.uptrain.ai/integrations/framework/llamaindex-methods/callback_handler/overview.md)
- [RAG Query Engine Evaluations](https://docs.uptrain.ai/integrations/framework/llamaindex-methods/callback_handler/rag_query.md)
- [Re-Ranking Evaluations](https://docs.uptrain.ai/integrations/framework/llamaindex-methods/callback_handler/reranking.md)
- [Sub-Question Query Generation Evaluation](https://docs.uptrain.ai/integrations/framework/llamaindex-methods/callback_handler/subques_query.md)
- [EvalLlamaIndex](https://docs.uptrain.ai/integrations/framework/llamaindex-methods/evalllamaindex.md)
- [LlamaIndex Overview](https://docs.uptrain.ai/integrations/framework/llamaindex-methods/overview.md)
- [Helicone](https://docs.uptrain.ai/integrations/observation-tools/helicone.md)
- [Langfuse](https://docs.uptrain.ai/integrations/observation-tools/langfuse.md)
- [Zeno](https://docs.uptrain.ai/integrations/observation-tools/zeno.md)
- [ChromaDB](https://docs.uptrain.ai/integrations/vector_db/chroma.md)
- [FAISS](https://docs.uptrain.ai/integrations/vector_db/faiss.md)
- [Qdrant](https://docs.uptrain.ai/integrations/vector_db/qdrant.md)
- [Anyscale](https://docs.uptrain.ai/llms/anyscale.md)
- [Azure](https://docs.uptrain.ai/llms/azure.md)
- [Claude](https://docs.uptrain.ai/llms/claude.md)
- [Mistral](https://docs.uptrain.ai/llms/mistral.md)
- [Ollama](https://docs.uptrain.ai/llms/ollama.md)
- [OpenAI](https://docs.uptrain.ai/llms/openai.md)
- [Together AI](https://docs.uptrain.ai/llms/together_ai.md)
- [Code hallucination](https://docs.uptrain.ai/predefined-evaluations/code-evals/code-hallucination.md): Checks whether the code present in the generated response is grounded in the context.
- [Context Conciseness](https://docs.uptrain.ai/predefined-evaluations/context-awareness/context-conciseness.md): Evaluates a concise context, condensed from the original context, for irrelevant information.
- [Context Relevance](https://docs.uptrain.ai/predefined-evaluations/context-awareness/context-relevance.md): Evaluates how relevant the retrieved context is to the question specified.
- [Context Reranking](https://docs.uptrain.ai/predefined-evaluations/context-awareness/context-reranking.md): Evaluates how effective the reranked context is compared to the original context.
- [Context Utilization](https://docs.uptrain.ai/predefined-evaluations/context-awareness/context-utilization.md): Measures how complete the generated response is for the question specified, given the information provided in the context.
- [Factual Accuracy](https://docs.uptrain.ai/predefined-evaluations/context-awareness/factual-accuracy.md): Checks whether the generated response is factually correct and grounded in the provided context.
- [Guideline Adherence](https://docs.uptrain.ai/predefined-evaluations/conversation-evals/guideline-adherence.md): Grades how well the LLM adheres to a provided guideline when giving a response.
- [Number of Turns](https://docs.uptrain.ai/predefined-evaluations/conversation-evals/number-of-turns.md): Counts the number of turns in a conversation.
- [Query Resolution](https://docs.uptrain.ai/predefined-evaluations/conversation-evals/query-resolution.md): Evaluates the LLM's ability to resolve the user's query.
- [User Satisfaction](https://docs.uptrain.ai/predefined-evaluations/conversation-evals/user-satisfaction.md): Assesses the user's satisfaction with the conversation.
- [Custom Python Evals](https://docs.uptrain.ai/predefined-evaluations/custom-evals/custom-eval.md): Write your own custom Python evaluations using UpTrain.
- [Custom Guideline](https://docs.uptrain.ai/predefined-evaluations/custom-evals/custom-guideline.md): Grades how well the LLM adheres to a provided guideline when giving a response.
- [Custom Prompts](https://docs.uptrain.ai/predefined-evaluations/custom-evals/custom-prompt-eval.md): Allows you to create your own set of evaluations.
- [Response Matching](https://docs.uptrain.ai/predefined-evaluations/ground-truth-comparison/response-matching.md): Grades how well the response generated by the LLM aligns with the provided ground truth.
- [Language Features](https://docs.uptrain.ai/predefined-evaluations/language-quality/fluency-and-coherence.md): Grades the quality and effectiveness of language in a response, focusing on clarity, coherence, conciseness, and overall communication.
- [Tonality](https://docs.uptrain.ai/predefined-evaluations/language-quality/tonality.md): Evaluates whether the generated response matches the required persona's tone.
- [Overview](https://docs.uptrain.ai/predefined-evaluations/overview.md): The quickest way to perform evaluations on your data.
- [Multi-Query Accuracy](https://docs.uptrain.ai/predefined-evaluations/query-quality/multi-query-accuracy.md): Evaluates how accurately the variations of the query represent the same question.
- [Sub-Query Completeness](https://docs.uptrain.ai/predefined-evaluations/query-quality/sub-query-completeness.md): Evaluates whether the list of generated sub-questions comprehensively covers all aspects of the main question.
- [Response Completeness](https://docs.uptrain.ai/predefined-evaluations/response-quality/response-completeness.md): Checks whether the response answers all aspects of the question specified.
- [Response Conciseness](https://docs.uptrain.ai/predefined-evaluations/response-quality/response-conciseness.md): Grades how concise the generated response is and whether it contains information irrelevant to the question asked.
- [Response Consistency](https://docs.uptrain.ai/predefined-evaluations/response-quality/response-consistency.md): Assesses how consistent the response is with the question asked as well as with the context provided.
- [Response Relevance](https://docs.uptrain.ai/predefined-evaluations/response-quality/response-relevance.md): Measures how relevant the generated response is to the question specified.
- [Response Validity](https://docs.uptrain.ai/predefined-evaluations/response-quality/response-validity.md): Checks whether the generated response is valid; a response is considered valid if it contains any information.
- [Jailbreak Detection](https://docs.uptrain.ai/predefined-evaluations/safeguarding/jailbreak.md): Grades whether the user's prompt is a jailbreak attempt (i.e. an attempt to elicit illegal or harmful responses).
- [Prompt Injection](https://docs.uptrain.ai/predefined-evaluations/safeguarding/prompt-injection.md): Detects whether the user is trying to make the model reveal its system prompt.
- [Analyzing RAG Failure Cases](https://docs.uptrain.ai/tutorials/analyzing-failure-cases.md): Helps analyze failure cases in a RAG pipeline
- [Experiments Evaluation Demo](https://docs.uptrain.ai/tutorials/experiments-evaluation-tutorial.md): Perform A/B testing on your data with UpTrain
- [Open Source Evaluator](https://docs.uptrain.ai/tutorials/open-source-evaluator.md): Get started with UpTrain in a few simple steps
- [OpenAI Evals](https://docs.uptrain.ai/tutorials/openai-evals.md): Perform OpenAI Evals using UpTrain
- [Uptrain API Client](https://docs.uptrain.ai/tutorials/uptrain-api-client.md): How to run your evaluations remotely using the Uptrain API Client

## Optional

- [Changelog](https://github.com/uptrain-ai/uptrain/releases)
- [Blog](https://blog.uptrain.ai/)
- [Contributing](https://github.com/uptrain-ai/uptrain/blob/main/CONTRIBUTING.md)
- [Community](https://join.slack.com/t/uptraincommunity/shared_invite/zt-1yih3aojn-CEoR_gAh6PDSknhFmuaJeg)
- [Contact Us](https://calendly.com/uptrain-sourabh/30min)