OpenLLMetry: Open-Source Observability for LLM Applications with OpenTelemetry

Summary
OpenLLMetry provides open-source observability for Generative AI (GenAI) and Large Language Model (LLM) applications, built upon the OpenTelemetry standard. It offers comprehensive tracing and monitoring capabilities, allowing seamless integration with existing observability solutions like Datadog, Honeycomb, and Grafana. This project simplifies the process of gaining insights into your LLM-powered systems.
Introduction
OpenLLMetry, an open-source project by Traceloop, provides comprehensive observability for your Generative AI (GenAI) and Large Language Model (LLM) applications. Built on top of the OpenTelemetry standard, it lets developers monitor performance, improve reliability, and debug their LLM-powered systems efficiently. Written in Python, OpenLLMetry integrates with your existing observability stack, offering a unified view of your application's behavior.
Installation
Getting started with OpenLLMetry is straightforward, especially using the Traceloop SDK.
First, install the SDK via pip:
pip install traceloop-sdk
Examples
Once installed, you can begin instrumenting your code with just a few lines. Add the following to your Python application:
from traceloop.sdk import Traceloop
Traceloop.init()
For local development, you might want to disable batch sending to see traces immediately:
from traceloop.sdk import Traceloop
Traceloop.init(disable_batch=True)
OpenLLMetry automatically instruments calls to popular LLM providers, vector databases, and AI frameworks, providing detailed traces and metrics without extensive manual configuration.
Why Use OpenLLMetry?
OpenLLMetry stands out by offering a robust, open-source solution for LLM observability. Its foundation in OpenTelemetry ensures vendor-neutral data collection, allowing you to connect to a wide array of observability backends. This includes popular platforms like Datadog, Honeycomb, Grafana, New Relic, and many others, enabling you to leverage your existing tools.
The project provides extensive instrumentation for:
- LLM Providers: OpenAI, Anthropic, Cohere, Mistral AI, HuggingFace, AWS Bedrock, Google Generative AI, and more.
- Vector Databases: Chroma, Pinecone, Qdrant, Weaviate, Milvus, etc.
- AI Frameworks: LangChain, LlamaIndex, Haystack, LiteLLM, CrewAI, and others.
By using OpenLLMetry, you gain critical visibility into the latency, error rates, and token usage of your LLM interactions, helping you optimize the cost and performance of your GenAI applications. An active community and clear documentation make it straightforward to adopt and extend.
Links
- GitHub Repository: https://github.com/traceloop/openllmetry
- Official Website: https://www.traceloop.com/openllmetry
- Documentation: https://traceloop.com/docs/openllmetry/introduction
- Slack Community: https://traceloop.com/slack