SDK Integration for Developers
The LeanMCP AI Gateway works with any OpenAI-compatible SDK or library. Simply change the base URL and API key to route requests through the gateway.
Key Insight: Any library that supports a custom `baseURL` or `base_url` parameter can use the LeanMCP AI Gateway.
Prerequisites
Gateway Endpoints
| Provider | Base URL |
| --- | --- |
| OpenAI | https://aigateway.leanmcp.com/v1/openai |
| Anthropic | https://aigateway.leanmcp.com/v1/anthropic |
| xAI (Grok) | https://aigateway.leanmcp.com/v1/xai |
| Fireworks | https://aigateway.leanmcp.com/v1/fireworks |
| ElevenLabs | https://aigateway.leanmcp.com/v1/elevenlabs |
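Every endpoint shares the same host and differs only in the final path segment, so a small helper can build the base URL for any provider. A minimal sketch (the `gateway_base_url` helper is illustrative, not part of any SDK):

```python
# Base URLs for the LeanMCP AI Gateway differ only in the provider segment.
GATEWAY_ROOT = "https://aigateway.leanmcp.com/v1"

# Provider path segments from the endpoint table above.
PROVIDERS = {"openai", "anthropic", "xai", "fireworks", "elevenlabs"}

def gateway_base_url(provider: str) -> str:
    """Return the gateway base URL for a supported provider."""
    if provider not in PROVIDERS:
        raise ValueError(f"Unsupported provider: {provider!r}")
    return f"{GATEWAY_ROOT}/{provider}"
```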
Official SDKs
OpenAI SDK
TypeScript/JavaScript
Python
cURL
```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://aigateway.leanmcp.com/v1/openai',
  apiKey: process.env.LEANMCP_API_KEY, // leanmcp_xxx
});

const response = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [
    { role: 'user', content: 'Hello!' }
  ],
});

console.log(response.choices[0].message.content);
```
```python
from openai import OpenAI
import os

client = OpenAI(
    base_url="https://aigateway.leanmcp.com/v1/openai",
    api_key=os.environ.get("LEANMCP_API_KEY"),  # leanmcp_xxx
)

response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
)

print(response.choices[0].message.content)
```
```bash
curl https://aigateway.leanmcp.com/v1/openai/chat/completions \
  -H "Authorization: Bearer leanmcp_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.2",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
Anthropic SDK
TypeScript/JavaScript
Python
```typescript
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  baseURL: 'https://aigateway.leanmcp.com/v1/anthropic',
  apiKey: process.env.LEANMCP_API_KEY,
});

const response = await client.messages.create({
  model: 'claude-sonnet-4-5-20250929',
  max_tokens: 1024,
  messages: [
    { role: 'user', content: 'Hello!' }
  ],
});

console.log(response.content[0].text);
```
```python
import anthropic
import os

client = anthropic.Anthropic(
    base_url="https://aigateway.leanmcp.com/v1/anthropic",
    api_key=os.environ.get("LEANMCP_API_KEY"),
)

response = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
)

print(response.content[0].text)
```
Streaming
OpenAI Streaming
Anthropic Streaming
```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://aigateway.leanmcp.com/v1/openai',
  apiKey: process.env.LEANMCP_API_KEY,
});

const stream = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [{ role: 'user', content: 'Write a poem' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
```
```typescript
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  baseURL: 'https://aigateway.leanmcp.com/v1/anthropic',
  apiKey: process.env.LEANMCP_API_KEY,
});

const stream = client.messages.stream({
  model: 'claude-sonnet-4-5-20250929',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Write a poem' }],
});

for await (const event of stream) {
  if (event.type === 'content_block_delta') {
    process.stdout.write(event.delta.text);
  }
}
```
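Both loops follow the same pattern: each chunk carries a small text delta, and the client concatenates them in arrival order. A Python sketch of that accumulation step, using plain dicts as stand-ins for OpenAI-style streaming chunk objects (no network or SDK involved):

```python
def collect_stream_text(chunks):
    """Join the text deltas from OpenAI-style streaming chunks."""
    parts = []
    for chunk in chunks:
        # Each chunk holds at most one delta; content may be absent or None.
        choices = chunk.get("choices") or [{}]
        delta = choices[0].get("delta") or {}
        parts.append(delta.get("content") or "")
    return "".join(parts)
```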
Framework Integrations
LangChain
```typescript
import { ChatOpenAI } from '@langchain/openai';

const model = new ChatOpenAI({
  modelName: 'gpt-5.2',
  configuration: {
    baseURL: 'https://aigateway.leanmcp.com/v1/openai',
    apiKey: process.env.LEANMCP_API_KEY,
  },
});

const response = await model.invoke('Hello!');
console.log(response.content);
```
```python
from langchain_openai import ChatOpenAI
import os

model = ChatOpenAI(
    model="gpt-5.2",
    base_url="https://aigateway.leanmcp.com/v1/openai",
    api_key=os.environ.get("LEANMCP_API_KEY"),
)

response = model.invoke("Hello!")
print(response.content)
```
Vercel AI SDK
```typescript
import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';

const customOpenAI = createOpenAI({
  baseURL: 'https://aigateway.leanmcp.com/v1/openai',
  apiKey: process.env.LEANMCP_API_KEY,
});

const { text } = await generateText({
  model: customOpenAI('gpt-5.2'),
  prompt: 'Hello!',
});

console.log(text);
```
LlamaIndex
```python
from llama_index.llms.openai import OpenAI
import os

llm = OpenAI(
    model="gpt-5.2",
    api_base="https://aigateway.leanmcp.com/v1/openai",
    api_key=os.environ.get("LEANMCP_API_KEY"),
)

response = llm.complete("Hello!")
print(response.text)
```
Adding Request Context
For better tracking and analytics, add custom headers to your requests:
```typescript
const response = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: messages,
}, {
  headers: {
    'X-User-ID': userId,           // Track per-user usage
    'X-Session-ID': sessionId,     // Group requests by session
    'X-Request-Source': 'web-app', // Identify request source
    'X-Feature': 'chat',           // Tag by feature
  }
});
```
These headers appear in your dashboard and enable:
- Per-user usage tracking and limits
- Session-based request grouping
- Feature-level analytics
- Source attribution
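If you build requests by hand (for example with `httpx` or `fetch`) instead of through an SDK, the same context can be assembled as a plain header dict. A sketch under that assumption; the `context_headers` helper name is illustrative, and the `X-*` header names are those listed above:

```python
def context_headers(api_key, user_id, session_id, source, feature):
    """Build auth plus request-context headers for a gateway call."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        "X-User-ID": user_id,        # Track per-user usage
        "X-Session-ID": session_id,  # Group requests by session
        "X-Request-Source": source,  # Identify request source
        "X-Feature": feature,        # Tag by feature
    }
```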
OpenAI-Compatible Libraries
Any library that supports a custom base URL works with the LeanMCP AI Gateway:
| Library | Configuration |
| --- | --- |
| openai (official) | `baseURL` parameter |
| anthropic (official) | `baseURL` parameter |
| langchain | `base_url` in configuration |
| llama-index | `api_base` parameter |
| vercel/ai | `baseURL` in `createOpenAI()` |
| litellm | `api_base` parameter |
| guidance | Custom OpenAI client |
| instructor | Pass custom OpenAI client |
Generic Pattern
```typescript
// Any OpenAI-compatible library
const client = new SomeAILibrary({
  baseURL: 'https://aigateway.leanmcp.com/v1/openai', // or /anthropic, /xai, etc.
  apiKey: 'leanmcp_your_api_key',
});
```
Environment Setup
Recommended: Environment Variables
```bash
# .env file
LEANMCP_API_KEY=leanmcp_your_api_key_here
LEANMCP_OPENAI_BASE_URL=https://aigateway.leanmcp.com/v1/openai
LEANMCP_ANTHROPIC_BASE_URL=https://aigateway.leanmcp.com/v1/anthropic
```
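In application code, these variables can be read with fallbacks to the gateway defaults. A minimal Python sketch; the `resolve_gateway_config` helper is illustrative and takes the environment as a mapping so it is easy to test:

```python
import os

def resolve_gateway_config(env=os.environ):
    """Read gateway settings from the environment, with defaults."""
    return {
        "api_key": env.get("LEANMCP_API_KEY"),
        "openai_base_url": env.get(
            "LEANMCP_OPENAI_BASE_URL",
            "https://aigateway.leanmcp.com/v1/openai",
        ),
        "anthropic_base_url": env.get(
            "LEANMCP_ANTHROPIC_BASE_URL",
            "https://aigateway.leanmcp.com/v1/anthropic",
        ),
    }
```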
Multiple Environments
```typescript
const getBaseURL = () => {
  if (process.env.NODE_ENV === 'development') {
    return 'https://aigateway.leanmcp.com/v1/openai'; // Use gateway in dev
  }
  return 'https://api.openai.com/v1'; // Direct in production (optional)
};

const client = new OpenAI({
  baseURL: getBaseURL(),
  apiKey: process.env.OPENAI_API_KEY,
});
```
Error Handling
```typescript
try {
  const response = await client.chat.completions.create({
    model: 'gpt-5.2',
    messages: [{ role: 'user', content: 'Hello' }],
  });
} catch (error) {
  if (error.status === 401) {
    console.error('Invalid API key');
  } else if (error.status === 402) {
    console.error('Insufficient credits');
  } else if (error.status === 429) {
    console.error('Rate limited');
  } else {
    console.error('API error:', error.message);
  }
}
```
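A 429 is usually transient, so it is often worth retrying with exponential backoff while failing fast on 401 or 402. A Python sketch of that policy; `GatewayError` is a hypothetical stand-in for whatever exception type your SDK raises with an HTTP status attached:

```python
import time

class GatewayError(Exception):
    """Stand-in for an SDK error carrying an HTTP status code."""
    def __init__(self, status):
        super().__init__(f"HTTP {status}")
        self.status = status

def call_with_retries(fn, max_attempts=3, base_delay=1.0, sleep=time.sleep):
    """Retry fn on 429 with exponential backoff; fail fast otherwise."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except GatewayError as err:
            # 401 (bad key) and 402 (no credits) will not recover on retry.
            if err.status != 429 or attempt == max_attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```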
Benefits for Developers
- Unified Logging: All AI requests logged in one dashboard
- User Tracking: Track usage per user with custom headers
- Cost Attribution: Know which features drive AI costs
- A/B Testing: Test different models and prompts
- Security: Block malicious users and sensitive data
- Rate Limiting: Set limits per user or globally
Next Steps
- Security Features: Block users and protect sensitive data
- Token Optimization: A/B testing and cost reduction
- Observability: Monitor all requests
- For Developers: Advanced developer features