**Documentation Index**: Fetch the complete documentation index at https://docs.leanmcp.com/llms.txt. Use this file to discover all available pages before exploring further.
# AI Gateway

The LeanMCP AI Gateway provides a unified API proxy for multiple LLM providers with built-in authentication, token tracking, and observability.
## Features

- **Multi-Provider Support**: OpenAI, Anthropic, xAI (Grok), Fireworks, ElevenLabs
- **Drop-in Replacement**: Use official SDKs or LangChain with minimal code changes
- **Authentication**: Firebase JWT or API key authentication
- **Session Tracking**: Track requests across sessions for observability
## Supported Providers

| Provider | Endpoint | Features |
| --- | --- | --- |
| OpenAI | `/v1/openai/*` | Chat, Vision, DALL-E, TTS, Structured Output |
| Anthropic | `/v1/anthropic/*` | Messages, Vision, Structured Output |
| xAI (Grok) | `/v1/xai/*` | Chat with Web Search |
| Fireworks | `/v1/fireworks/*` | Open-source models (Llama, etc.) |
| ElevenLabs | `/v1/elevenlabs/*` | Text-to-Speech |
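Each endpoint prefix combines with the gateway host and the provider's native path to form a full request URL. A small illustrative sketch (the mapping mirrors the table above; the helper itself is not part of any SDK):

```python
# Gateway host plus per-provider endpoint prefixes from the table above.
GATEWAY_HOST = "https://aigateway.leanmcp.com"

PROVIDER_PREFIXES = {
    "openai": "/v1/openai",
    "anthropic": "/v1/anthropic",
    "xai": "/v1/xai",
    "fireworks": "/v1/fireworks",
    "elevenlabs": "/v1/elevenlabs",
}

def gateway_url(provider: str, path: str) -> str:
    """Join the gateway host, the provider prefix, and a provider-native path."""
    return f"{GATEWAY_HOST}{PROVIDER_PREFIXES[provider]}{path}"

# The OpenAI-compatible chat completions endpoint, for example:
print(gateway_url("openai", "/v1/chat/completions"))
# https://aigateway.leanmcp.com/v1/openai/v1/chat/completions
```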
## Authentication

All requests require authentication via the `Authorization` header:

```
Authorization: Bearer <your-token>
```

Supported token types:

- **Firebase JWT**: a standard Firebase ID token from your app
- **API Key**: a LeanMCP API key (prefixed with `leanmcp_`)
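Both token types travel in the same header, so client code can treat them uniformly. A minimal sketch (the helper names are illustrative, not part of any SDK):

```python
def is_leanmcp_api_key(token: str) -> bool:
    """LeanMCP API keys carry the 'leanmcp_' prefix; anything else is treated as a Firebase JWT."""
    return token.startswith("leanmcp_")

def auth_headers(token: str) -> dict:
    """Build the headers every gateway request needs, regardless of token type."""
    return {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }

headers = auth_headers("leanmcp_abc123")
print(headers["Authorization"])
# Bearer leanmcp_abc123
```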
## Usage Examples

### OpenAI SDK

Point the official OpenAI SDK at the gateway by changing only the `baseURL`:

```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://aigateway.leanmcp.com/v1/openai/v1',
  apiKey: 'your-leanmcp-token', // Firebase JWT or API key
});

// Streaming chat completion
const stream = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [
    { role: 'user', content: 'Write a haiku about APIs.' }
  ],
  stream: true,
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content || '';
  process.stdout.write(content);
}
```
### Anthropic SDK

```typescript
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  baseURL: 'https://aigateway.leanmcp.com/v1/anthropic',
  apiKey: 'your-leanmcp-token',
});

const stream = await client.messages.stream({
  model: 'claude-sonnet-4-5',
  max_tokens: 300,
  messages: [
    { role: 'user', content: 'Write a haiku about cloud computing.' }
  ],
});

for await (const event of stream) {
  if (event.type === 'content_block_delta' && event.delta.type === 'text_delta') {
    process.stdout.write(event.delta.text);
  }
}
```
### LangChain (Python)

LangChain provides a powerful abstraction for building LLM applications. The AI Gateway works seamlessly with LangChain; only the `base_url` parameter changes.

```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

# Create ChatOpenAI with gateway configuration
llm = ChatOpenAI(
    model="gpt-5.2",
    base_url="https://aigateway.leanmcp.com/v1/openai/v1",
    api_key="your-leanmcp-token",  # Firebase JWT or API key
    temperature=0.7,
)

# Basic chat completion
response = llm.invoke([
    HumanMessage(content="Write a haiku about APIs.")
])
print(response.content)
```
```python
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage

# Create ChatAnthropic with gateway configuration
llm = ChatAnthropic(
    model="claude-sonnet-4-5",
    base_url="https://aigateway.leanmcp.com/v1/anthropic",
    api_key="your-leanmcp-token",
    temperature=0.7,
    max_tokens=300,
)

response = llm.invoke([
    HumanMessage(content="Explain quantum computing briefly.")
])
print(response.content)
```
### Streaming with LangChain

```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

llm = ChatOpenAI(
    model="gpt-5.2",
    base_url="https://aigateway.leanmcp.com/v1/openai/v1",
    api_key="your-leanmcp-token",
    streaming=True,
)

# Token-by-token streaming
for chunk in llm.stream([
    HumanMessage(content="Explain quantum computing in 3 sentences.")
]):
    print(chunk.content, end="", flush=True)
```
### Structured Output with Pydantic

```python
from typing import List

from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

# Define Pydantic models
class Step(BaseModel):
    step_number: int = Field(description="The step number")
    title: str = Field(description="Brief title of the step")
    description: str = Field(description="Detailed description")

class Recipe(BaseModel):
    name: str = Field(description="Name of the recipe")
    cuisine: str = Field(description="Type of cuisine")
    prep_time_minutes: int = Field(description="Prep time in minutes")
    ingredients: List[str] = Field(description="List of ingredients")
    steps: List[Step] = Field(description="Cooking steps")

llm = ChatOpenAI(
    model="gpt-5.2",
    base_url="https://aigateway.leanmcp.com/v1/openai/v1",
    api_key="your-leanmcp-token",
    temperature=0,
)

# Get structured output
structured_llm = llm.with_structured_output(Recipe)
recipe: Recipe = structured_llm.invoke([
    HumanMessage(content="Give me a recipe for pasta carbonara.")
])
print(f"Recipe: {recipe.name}")
print(f"Ingredients: {recipe.ingredients}")
```
### Tool Calling

```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool

@tool
def get_weather(city: str, unit: str = "celsius") -> str:
    """Get the current weather for a city."""
    # Your weather API logic here
    return f"Weather in {city}: 22C, Sunny"

@tool
def calculate(expression: str) -> str:
    """Calculate a mathematical expression."""
    # Demo only: eval() is unsafe on untrusted input
    return f"Result: {eval(expression)}"

llm = ChatOpenAI(
    model="gpt-5.2",
    base_url="https://aigateway.leanmcp.com/v1/openai/v1",
    api_key="your-leanmcp-token",
    temperature=0,
)

# Bind tools to the model
tools = [get_weather, calculate]
llm_with_tools = llm.bind_tools(tools)

response = llm_with_tools.invoke([
    HumanMessage(content="What's the weather in London and calculate 25 * 4?")
])

# Process tool calls
for tool_call in response.tool_calls:
    print(f"Tool: {tool_call['name']}, Args: {tool_call['args']}")
```
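Printing the tool calls is only half of a round trip: in a complete flow each call is executed and its result sent back to the model (as a `ToolMessage` in LangChain) before re-invoking it. The dispatch step can be sketched offline like this, using plain functions and a hand-built `tool_calls` list in the same shape LangChain returns (both are stand-ins for the live objects above):

```python
# Plain-function stand-ins for the @tool definitions above.
def get_weather(city: str, unit: str = "celsius") -> str:
    return f"Weather in {city}: 22C, Sunny"

def calculate(expression: str) -> str:
    # Demo only: eval() is unsafe on untrusted input.
    return f"Result: {eval(expression)}"

TOOL_REGISTRY = {"get_weather": get_weather, "calculate": calculate}

def run_tool_calls(tool_calls):
    """Dispatch each tool call and pair its id with the tool's output."""
    results = []
    for call in tool_calls:
        fn = TOOL_REGISTRY[call["name"]]
        results.append((call["id"], fn(**call["args"])))
    return results

# Shaped like LangChain's response.tool_calls
calls = [
    {"name": "get_weather", "args": {"city": "London"}, "id": "call_1"},
    {"name": "calculate", "args": {"expression": "25 * 4"}, "id": "call_2"},
]
for call_id, result in run_tool_calls(calls):
    print(call_id, result)
# call_1 Weather in London: 22C, Sunny
# call_2 Result: 100
```

In the live version, each `(id, result)` pair becomes `ToolMessage(content=result, tool_call_id=id)` appended to the message history before calling `llm_with_tools.invoke` again.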
### Multi-Model Chains

Orchestrate multiple providers in a single LangChain workflow:

```python
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Create both LLMs
openai_llm = ChatOpenAI(
    model="gpt-5.2",
    base_url="https://aigateway.leanmcp.com/v1/openai/v1",
    api_key="your-leanmcp-token",
)

anthropic_llm = ChatAnthropic(
    model="claude-sonnet-4-5",
    base_url="https://aigateway.leanmcp.com/v1/anthropic",
    api_key="your-leanmcp-token",
    max_tokens=300,
)

# Step 1: OpenAI generates content
generator_prompt = ChatPromptTemplate.from_messages([
    ("system", "Generate a short story premise (2-3 sentences)."),
    ("human", "Topic: {topic}"),
])

# Step 2: Anthropic critiques
critic_prompt = ChatPromptTemplate.from_messages([
    ("system", "Review and suggest one improvement for this story premise."),
    ("human", "Premise: {premise}"),
])

output_parser = StrOutputParser()

# Build chains
generator_chain = generator_prompt | openai_llm | output_parser
critic_chain = critic_prompt | anthropic_llm | output_parser

# Execute multi-model workflow
premise = generator_chain.invoke({"topic": "a robot learning to dream"})
critique = critic_chain.invoke({"premise": premise})
print(f"Generated: {premise}")
print(f"Critique: {critique}")
```
**Requirements**: Install the LangChain packages:

```bash
pip install langchain-openai langchain-anthropic langchain-core
```
## curl Examples

### OpenAI Streaming

```bash
curl -N -X POST "https://aigateway.leanmcp.com/v1/openai/v1/chat/completions" \
  -H "Authorization: Bearer $AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.2",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": true
  }'
```
### Anthropic Messages

```bash
curl -X POST "https://aigateway.leanmcp.com/v1/anthropic/v1/messages" \
  -H "Authorization: Bearer $AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-5",
    "max_tokens": 300,
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
### xAI (Grok) with Web Search

```bash
curl -N -X POST "https://aigateway.leanmcp.com/v1/xai/v1/chat/completions" \
  -H "Authorization: Bearer $AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "grok-2-latest",
    "messages": [{"role": "user", "content": "What are the top tech news today?"}],
    "stream": true,
    "search_parameters": {"mode": "auto"}
  }'
```
### Fireworks (Llama)

```bash
curl -N -X POST "https://aigateway.leanmcp.com/v1/fireworks/inference/v1/chat/completions" \
  -H "Authorization: Bearer $AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "accounts/fireworks/models/llama-v3p1-8b-instruct",
    "messages": [{"role": "user", "content": "Explain quantum computing."}],
    "stream": true,
    "max_tokens": 100
  }'
```
### ElevenLabs TTS

```bash
curl -X POST "https://aigateway.leanmcp.com/v1/elevenlabs/v1/text-to-speech/21m00Tcm4TlvDq8ikWAM" \
  -H "Authorization: Bearer $AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -o "output.mp3" \
  -d '{
    "text": "Hello! This is a test.",
    "model_id": "eleven_monolingual_v1",
    "voice_settings": {"stability": 0.5, "similarity_boost": 0.5}
  }'
```
## Advanced Features

### Structured Output (OpenAI)

```typescript
const response = await fetch('https://aigateway.leanmcp.com/v1/openai/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${token}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gpt-5.2',
    messages: [
      { role: 'user', content: 'Recommend 3 sci-fi movies from the 2010s.' }
    ],
    response_format: {
      type: 'json_schema',
      json_schema: {
        name: 'movie_recommendations',
        strict: true,
        schema: {
          type: 'object',
          properties: {
            recommendations: {
              type: 'array',
              items: {
                type: 'object',
                properties: {
                  title: { type: 'string' },
                  year: { type: 'number' },
                  genre: { type: 'string' },
                  reason: { type: 'string' },
                },
                required: ['title', 'year', 'genre', 'reason'],
                additionalProperties: false, // required by strict mode
              },
            },
          },
          required: ['recommendations'],
          additionalProperties: false, // required by strict mode
        },
      },
    },
  }),
});
```
### Session Tracking

Include a session ID header to track requests across a conversation:

```bash
curl -X POST "https://aigateway.leanmcp.com/v1/openai/v1/chat/completions" \
  -H "Authorization: Bearer $AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -H "leanmcp-session-id: my-session-123" \
  -d '{"model": "gpt-5.2", "messages": [...]}'
```
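The same header works from any HTTP client. A stdlib-only Python sketch that builds (but does not send) the request, so the header placement stays visible:

```python
import json
import urllib.request

payload = {"model": "gpt-5.2", "messages": [{"role": "user", "content": "Hello!"}]}

# Build the request; urllib.request.urlopen(req) would actually send it.
req = urllib.request.Request(
    "https://aigateway.leanmcp.com/v1/openai/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer your-leanmcp-token",
        "Content-Type": "application/json",
        "leanmcp-session-id": "my-session-123",  # groups requests into one session
    },
    method="POST",
)

# urllib normalizes header names to Capitalized-lowercase form.
print(req.get_header("Leanmcp-session-id"))
# my-session-123
```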
### Bring Your Own API Key

To use your own provider API key instead of platform keys, pass it in the `x-provider-api-key` header:

```bash
curl -X POST "https://aigateway.leanmcp.com/v1/anthropic/v1/messages" \
  -H "Authorization: Bearer $AUTH_TOKEN" \
  -H "x-provider-api-key: sk-ant-your-key" \
  -H "Content-Type: application/json" \
  -d '{...}'
```