claude-code-proxy: Use Anthropic Clients with OpenAI and Gemini Models

Summary
`claude-code-proxy` is a powerful proxy server that allows developers to use Anthropic clients, such as Claude Code, with various backend models including OpenAI, Gemini, or even Anthropic's own models. It provides seamless translation of API requests and responses, offering flexibility and control over your AI model choices. This tool is ideal for integrating different LLM providers without modifying existing Anthropic client code.
Introduction
`claude-code-proxy` is an innovative proxy server designed to bridge the gap between Anthropic clients and various large language models (LLMs). This Python-based tool allows you to seamlessly use Anthropic clients, such as Claude Code, with powerful backends like OpenAI, Google Gemini, or even directly with Anthropic's own models, all powered by LiteLLM. It acts as a transparent intermediary, translating API requests and responses to ensure compatibility across different LLM providers.
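To make the translation concrete, consider one difference between the two APIs: an Anthropic Messages request carries the system prompt as a top-level `system` field, while an OpenAI Chat Completions request expects it as the first chat message. The sketch below illustrates that kind of mapping; it is a simplified, hypothetical example, not the project's actual code (in practice LiteLLM performs the translation):

```python
def anthropic_to_openai(payload: dict, target_model: str) -> dict:
    """Translate a minimal Anthropic Messages request into an
    OpenAI Chat Completions request (illustrative sketch only)."""
    messages = []
    # Anthropic: top-level "system" field; OpenAI: a "system" chat message.
    if "system" in payload:
        messages.append({"role": "system", "content": payload["system"]})
    messages.extend(payload.get("messages", []))
    return {
        "model": target_model,               # e.g., the configured BIG_MODEL
        "messages": messages,
        "max_tokens": payload.get("max_tokens", 1024),
    }

request = {
    "model": "claude-3-5-sonnet-20241022",
    "system": "You are terse.",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hello"}],
}
translated = anthropic_to_openai(request, "gpt-4o")
print(translated["model"])  # the backend model replaces the Anthropic one
```

The reverse direction (backend response back into Anthropic's response shape) follows the same pattern, which is what lets Anthropic clients work unmodified.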
Installation
Getting `claude-code-proxy` up and running is straightforward, whether you prefer installing from source or using Docker.
Prerequisites
- An OpenAI API key.
- A Google AI Studio (Gemini) API key (if you plan to use Google as a provider).
- `uv`, a fast Python package installer.
From Source
- Clone the repository:
git clone https://github.com/1rgs/claude-code-proxy.git
cd claude-code-proxy
- Install uv (if not already installed):
curl -LsSf https://astral.sh/uv/install.sh | sh
`uv` will manage dependencies when the server runs.
- Configure Environment Variables:
Copy the example environment file and edit `.env` with your API keys and model preferences.
cp .env.example .env
Key variables to configure include:
- `ANTHROPIC_API_KEY`: (Optional) For proxying to Anthropic models.
- `OPENAI_API_KEY`: Your OpenAI API key.
- `GEMINI_API_KEY`: Your Google AI Studio (Gemini) API key.
- `PREFERRED_PROVIDER`: Set to `openai` (default), `google`, or `anthropic`.
- `BIG_MODEL`: Model for `sonnet` requests (e.g., `gpt-4o`, `gemini-2.5-pro-preview-03-25`).
- `SMALL_MODEL`: Model for `haiku` requests (e.g., `gpt-4o-mini`, `gemini-2.0-flash`).
- Run the server:
uv run uvicorn server:app --host 0.0.0.0 --port 8082 --reload
The `--reload` flag is optional and useful for development.
Docker
For a containerized setup, first download the example environment file and configure it:
curl -o .env https://raw.githubusercontent.com/1rgs/claude-code-proxy/refs/heads/main/.env.example
Then, use Docker Compose (recommended) or a direct Docker command.
With Docker Compose:
Create a `docker-compose.yml` file:
services:
  proxy:
    image: ghcr.io/1rgs/claude-code-proxy:latest
    restart: unless-stopped
    env_file: .env
    ports:
      - 8082:8082
With a Docker command:
docker run -d --env-file .env -p 8082:8082 ghcr.io/1rgs/claude-code-proxy:latest
Examples
Once the proxy server is running, integrating it with your Anthropic clients is simple.
Using with Claude Code
- Install Claude Code (if you haven't already):
npm install -g @anthropic-ai/claude-code
- Connect to your proxy:
Set the `ANTHROPIC_BASE_URL` environment variable to point to your proxy server:
ANTHROPIC_BASE_URL=http://localhost:8082 claude
Your Claude Code client will now route requests through `claude-code-proxy`, utilizing your configured backend models.
Customizing Model Mapping
`claude-code-proxy` offers extensive control over how Anthropic models (`haiku`, `sonnet`) are mapped to your chosen backend LLMs. This is configured via environment variables in your `.env` file.
Example 1: Default (Use OpenAI)
OPENAI_API_KEY="your-openai-key"
# GEMINI_API_KEY="your-google-key" # Needed for fallback if PREFERRED_PROVIDER=google
# PREFERRED_PROVIDER="openai" # Optional, it's the default
# BIG_MODEL="gpt-4.1" # Optional, it's the default
# SMALL_MODEL="gpt-4.1-mini" # Optional, it's the default
Example 2: Prefer Google
GEMINI_API_KEY="your-google-key"
OPENAI_API_KEY="your-openai-key" # Needed for fallback
PREFERRED_PROVIDER="google"
# BIG_MODEL="gemini-2.5-pro-preview-03-25" # Optional, it's the default for Google pref
# SMALL_MODEL="gemini-2.0-flash" # Optional, it's the default for Google pref
Example 3: Use Direct Anthropic ("Just an Anthropic Proxy" Mode)
This mode allows you to use the proxy infrastructure while still using actual Anthropic models.
ANTHROPIC_API_KEY="sk-ant-..."
PREFERRED_PROVIDER="anthropic"
# BIG_MODEL and SMALL_MODEL are ignored in this mode
Example 4: Use Specific OpenAI Models
OPENAI_API_KEY="your-openai-key"
PREFERRED_PROVIDER="openai"
BIG_MODEL="gpt-4o" # Example specific model for sonnet
SMALL_MODEL="gpt-4o-mini" # Example specific model for haiku
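The mapping rules in the examples above can be summarized as a small function: `sonnet` requests go to `BIG_MODEL`, `haiku` requests go to `SMALL_MODEL`, and `anthropic` mode passes model names through unchanged. The sketch below is a hypothetical illustration with defaults mirroring the comments above; the server's real logic lives in the repository and may differ in detail:

```python
# Per-provider (BIG_MODEL, SMALL_MODEL) defaults, as documented above.
DEFAULTS = {
    "openai": ("gpt-4.1", "gpt-4.1-mini"),
    "google": ("gemini-2.5-pro-preview-03-25", "gemini-2.0-flash"),
}

def map_model(requested: str, env: dict) -> str:
    """Map an Anthropic model name to the configured backend model."""
    provider = env.get("PREFERRED_PROVIDER", "openai")
    if provider == "anthropic":
        return requested  # "just an Anthropic proxy" mode: pass through
    big_default, small_default = DEFAULTS[provider]
    if "sonnet" in requested:
        return env.get("BIG_MODEL", big_default)
    if "haiku" in requested:
        return env.get("SMALL_MODEL", small_default)
    return requested

print(map_model("claude-3-5-sonnet-20241022", {"PREFERRED_PROVIDER": "google"}))
```

With an empty environment this resolves `haiku` requests to `gpt-4.1-mini`; setting `BIG_MODEL="gpt-4o"` overrides the `sonnet` default, matching Example 4.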
Why Use claude-code-proxy?
`claude-code-proxy` provides several compelling advantages for developers working with LLMs:
- Unmatched Flexibility: Easily switch between OpenAI, Gemini, or Anthropic models without altering your client-side code. This allows you to experiment with different providers or leverage specific model strengths.
- Cost Optimization: By mapping Anthropic models to potentially more cost-effective alternatives like `gpt-4o-mini` or `gemini-2.0-flash`, you can significantly reduce API expenses.
- Seamless Integration: Maintain your existing Anthropic client workflows, such as those with Claude Code, while benefiting from a wider array of backend LLMs.
- Centralized Control: The proxy acts as a single point of entry, which can be extended for logging, monitoring, rate limiting, or other middleware functionalities.
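Because every request passes through one process, cross-cutting concerns like logging can be added in a single place. The sketch below shows the idea as a minimal, self-contained ASGI-style middleware; the names and structure are purely illustrative and not part of the project's actual code:

```python
import asyncio
import time

class LoggingMiddleware:
    """Wraps an ASGI app and logs method, path, status, and latency."""

    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        if scope["type"] != "http":
            await self.app(scope, receive, send)
            return
        start = time.perf_counter()
        captured = {}

        async def send_wrapper(message):
            # Capture the status code as the response starts.
            if message["type"] == "http.response.start":
                captured["status"] = message["status"]
            await send(message)

        await self.app(scope, receive, send_wrapper)
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{scope['method']} {scope['path']} -> "
              f"{captured.get('status')} ({elapsed_ms:.1f} ms)")

# Tiny stand-in app so the middleware can be exercised without a server.
async def demo_app(scope, receive, send):
    await send({"type": "http.response.start", "status": 200, "headers": []})
    await send({"type": "http.response.body", "body": b"ok"})

app = LoggingMiddleware(demo_app)

async def demo():
    sent = []
    async def capture(message):
        sent.append(message)
    async def receive():
        return {"type": "http.request"}
    await app({"type": "http", "method": "GET", "path": "/v1/messages"},
              receive, capture)
    return sent

responses = asyncio.run(demo())
```

Since the proxy runs under uvicorn, an ASGI app, this style of wrapper could in principle sit in front of it; rate limiting or monitoring hooks would follow the same pattern.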
Links
- GitHub Repository: https://github.com/1rgs/claude-code-proxy