Rig: Build Modular and Scalable LLM Applications in Rust

Summary

Rig is a powerful Rust library designed for building modular, scalable, and ergonomic LLM-powered applications. It offers extensive features, including agentic workflows, compatibility with over 20 model providers, and seamless integration with more than 10 vector stores. Developers can leverage Rig to create robust generative AI solutions with minimal boilerplate.

Introduction

Rig is a Rust library developed by 0xPlaygrounds for building modular, scalable, and ergonomic Large Language Model (LLM) powered applications. It provides a robust framework for integrating generative AI capabilities into your projects, with the performance and developer experience inherent to the Rust ecosystem. Rig simplifies complex LLM workflows by offering a unified interface over many model providers and vector store integrations.

For more detailed information, explore the official documentation.

Installation

Getting started with Rig is straightforward. You can add the rig-core crate to your Rust project using cargo:

cargo add rig-core

Note that Rig's API is asynchronous, so you will also need tokio with features such as macros and rt-multi-thread (or full) enabled.
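
For reference, a minimal Cargo.toml for such a project might look like the following sketch (the version numbers are illustrative rather than recommendations; check crates.io for the current releases):

[dependencies]
rig-core = "0.5"   # illustrative version; check crates.io for the latest
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }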

Examples

Here's a simple example demonstrating how to use Rig with OpenAI to prompt a GPT-4 model:

use rig::{completion::Prompt, providers::openai};

#[tokio::main]
async fn main() {
    // Create OpenAI client and model
    // This requires the `OPENAI_API_KEY` environment variable to be set.
    let openai_client = openai::Client::from_env();

    let gpt4 = openai_client.agent("gpt-4").build();

    // Prompt the model and print its response
    let response = gpt4
        .prompt("Who are you?")
        .await
        .expect("Failed to prompt GPT-4");

    println!("GPT-4: {response}");
}
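
To run the example, provide your API key through the environment first, for instance (assuming a Unix-like shell; replace the placeholder with a real key):

export OPENAI_API_KEY=your-key-here
cargo run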

More examples and detailed use cases can be found in the examples directory of each crate and in the official Rig documentation.

Why Use Rig?

Rig stands out as a comprehensive solution for developing advanced LLM applications in Rust due to its rich feature set and focus on developer efficiency:

  • Agentic Workflows: Supports complex multi-turn streaming and prompting for sophisticated AI agents.
  • Unified Interfaces: Offers a single, unified interface for over 20 model providers and 10+ vector store integrations, simplifying development (see the sketch after this list).
  • Comprehensive Capabilities: Beyond LLM completion and embedding, Rig supports transcription, audio generation, and image generation model capabilities.
  • Scalability & Modularity: Built for scalable and modular architectures, allowing for flexible and maintainable AI applications.
  • Industry Standards: Full compatibility with the OpenTelemetry GenAI Semantic Conventions ensures robust observability and integration.
  • WASM Compatibility: The core library supports WebAssembly, opening doors for client-side AI applications.
  • Production Ready: Already adopted by various projects and companies like Dria Compute Node, Linera Protocol, and Nethermind's NINE, demonstrating its reliability in production environments.
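
To make the unified-interface point concrete, here is a minimal sketch of a customized agent built on the same pattern as the earlier example. The preamble and temperature builder methods are assumptions based on recent Rig releases rather than a guaranteed API, so check the current documentation before relying on them:

use rig::{completion::Prompt, providers::openai};

#[tokio::main]
async fn main() {
    // Reads the `OPENAI_API_KEY` environment variable, as in the earlier example.
    let openai_client = openai::Client::from_env();

    // Configure the agent with a system preamble and sampling temperature.
    // `preamble` and `temperature` are assumed from recent Rig releases.
    let translator = openai_client
        .agent("gpt-4")
        .preamble("You are a translator. Translate the user's input into French.")
        .temperature(0.2)
        .build();

    let response = translator
        .prompt("Hello, world!")
        .await
        .expect("Failed to prompt the agent");

    println!("Translator: {response}");
}

Because the call site only depends on the Prompt trait, switching providers is in principle a matter of constructing a different client and model while the rest of the code stays the same.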

Rig enables developers to integrate powerful LLMs into their applications with minimal boilerplate, accelerating the development of next-generation AI solutions.
