
vibrantlabsai / ragas

Supercharge Your LLM Application Evaluations 🚀

12,671 stars
1,255 forks
295 issues
Python · Jupyter Notebook · Makefile

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing vibrantlabsai/ragas in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind-ai.vercel.app/repo/vibrantlabsai/ragas)

Repository Summary (README)

Ragas: Supercharge Your LLM Application Evaluations 🚀

Documentation: https://docs.ragas.io/ | Quick start (below) | Discord: https://discord.gg/5djav8GGNZ | Blog: https://blog.ragas.io/ | Newsletter: https://newsletter.ragas.io/ | Careers: https://www.ragas.io/careers

Objective metrics, intelligent test generation, and data-driven insights for LLM apps

Ragas is your ultimate toolkit for evaluating and optimizing Large Language Model (LLM) applications. Say goodbye to time-consuming, subjective assessments and hello to data-driven, efficient evaluation workflows. Don't have a test dataset ready? We also do production-aligned test set generation.

Key Features

  • 🎯 Objective Metrics: Evaluate your LLM applications with precision using both LLM-based and traditional metrics.
  • 🧪 Test Data Generation: Automatically create comprehensive test datasets covering a wide range of scenarios (see the sketch after this list).
  • 🔗 Seamless Integrations: Works flawlessly with popular LLM frameworks like LangChain and major observability tools.
  • 📊 Build Feedback Loops: Leverage production data to continually improve your LLM applications.
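
Test data generation (the second feature above) can synthesize an evaluation set directly from your own documents. A minimal sketch, assuming the TestsetGenerator interface from recent Ragas releases together with LangChain's loaders; the "docs/" path and model names are placeholders, and exact parameter names may differ between versions:

from langchain_community.document_loaders import DirectoryLoader
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

from ragas.embeddings import LangchainEmbeddingsWrapper
from ragas.llms import LangchainLLMWrapper
from ragas.testset import TestsetGenerator

# Load your own corpus; "docs/" is just a placeholder path.
docs = DirectoryLoader("docs/", glob="**/*.md").load()

# Wrap a generator LLM and embedding model for ragas.
generator_llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-4o"))
generator_embeddings = LangchainEmbeddingsWrapper(OpenAIEmbeddings())

# Synthesize a small, production-style test set from the documents.
generator = TestsetGenerator(llm=generator_llm, embedding_model=generator_embeddings)
testset = generator.generate_with_langchain_docs(docs, testset_size=10)
print(testset.to_pandas())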

🛡️ Installation

PyPI:

pip install ragas

Alternatively, from source:

pip install git+https://github.com/vibrantlabsai/ragas
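
To sanity-check the install, you can confirm the package resolves and print its version (a generic Python check, nothing Ragas-specific):

from importlib.metadata import version

import ragas  # confirms the package imports cleanly

# Print the installed distribution version.
print(version("ragas"))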

🔥 Quickstart

Clone a Complete Example Project

The fastest way to get started is to use the ragas quickstart command:

# List available templates
ragas quickstart

# Create a RAG evaluation project
ragas quickstart rag_eval

# Specify where you want to create it.
ragas quickstart rag_eval -o ./my-project

Available templates:

  • rag_eval - Evaluate RAG systems

Coming Soon:

  • agent_evals - Evaluate AI agents
  • benchmark_llm - Benchmark and compare LLMs
  • prompt_evals - Evaluate prompt variations
  • workflow_eval - Evaluate complex workflows

Evaluate your LLM App

ragas comes with pre-built metrics for common evaluation tasks. For example, Aspect Critique evaluates any aspect of your output using DiscreteMetric:

import asyncio
from openai import AsyncOpenAI
from ragas.metrics import DiscreteMetric
from ragas.llms import llm_factory

# Setup your LLM
client = AsyncOpenAI()
llm = llm_factory("gpt-4o", client=client)

# Create a custom aspect evaluator
metric = DiscreteMetric(
    name="summary_accuracy",
    allowed_values=["accurate", "inaccurate"],
    prompt="""Evaluate if the summary is accurate and captures key information.

Response: {response}

Answer with only 'accurate' or 'inaccurate'."""
)

# Score your application's output
async def main():
    score = await metric.ascore(
        llm=llm,
        response="The summary of the text is..."
    )
    print(f"Score: {score.value}")  # 'accurate' or 'inaccurate'
    print(f"Reason: {score.reason}")


if __name__ == "__main__":
    asyncio.run(main())

Note: Make sure your OPENAI_API_KEY environment variable is set.
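
Because ascore is async, you can reuse the same metric to score several outputs concurrently. A small sketch that builds on the example above (metric and llm are the objects defined there; the sample responses are placeholders):

async def score_batch(responses):
    # Fan the metric out over several responses with asyncio.gather.
    scores = await asyncio.gather(
        *(metric.ascore(llm=llm, response=r) for r in responses)
    )
    return [s.value for s in scores]

# e.g. asyncio.run(score_batch(["Summary A ...", "Summary B ..."]))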

Find the complete Quickstart Guide

Want help improving your AI application using evals?

Over the past two years, we have seen and helped improve many AI applications using evals. If you want help improving and scaling up your AI application with evals, we'd love to hear from you.

🔗 Book a slot or drop us a line: founders@vibrantlabs.com.

🫂 Community

If you want to get more involved with Ragas, check out our Discord server. It's a fun community where we geek out about LLMs, retrieval, production issues, and more.

Contributors

+----------------------------------------------------------------------------+
|     +----------------------------------------------------------------+     |
|     | Developers: Those who built with `ragas`.                      |     |
|     | (You have `import ragas` somewhere in your project)            |     |
|     |     +----------------------------------------------------+     |     |
|     |     | Contributors: Those who make `ragas` better.       |     |     |
|     |     | (You make PR to this repo)                         |     |     |
|     |     +----------------------------------------------------+     |     |
|     +----------------------------------------------------------------+     |
+----------------------------------------------------------------------------+

We welcome contributions from the community! Whether it's bug fixes, feature additions, or documentation improvements, your input is valuable.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

๐Ÿ” Open Analytics

At Ragas, we believe in transparency. We collect minimal, anonymized usage data to improve our product and guide our development efforts.

✅ No personal or company-identifying information

✅ Open-source data collection code

✅ Publicly available aggregated data

To opt-out, set the RAGAS_DO_NOT_TRACK environment variable to true.
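
For example, the flag can be set from Python before Ragas is imported (a minimal sketch; exporting the variable in your shell profile works just as well):

import os

# Disable anonymous usage reporting for this process.
os.environ["RAGAS_DO_NOT_TRACK"] = "true"

import ragas  # imported after the flag is set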

Cite Us

@misc{ragas2024,
  author       = {VibrantLabs},
  title        = {Ragas: Supercharge Your LLM Application Evaluations},
  year         = {2024},
  howpublished = {\url{https://github.com/vibrantlabsai/ragas}},
}