
SDK Integration for Developers

The LeanMCP AI Gateway works with any OpenAI-compatible SDK or library. Simply change the base URL and API key to route requests through the gateway.
Key Insight: Any library that supports a custom baseURL or base_url parameter can use the LeanMCP AI Gateway.

Prerequisites

1. Get Credits: Purchase credits at ship.leanmcp.com
2. Create API Key: Create an API key at ship.leanmcp.com/api-keys with SDK permissions

Gateway Endpoints

Provider      Base URL
OpenAI        https://aigateway.leanmcp.com/v1/openai
Anthropic     https://aigateway.leanmcp.com/v1/anthropic
xAI (Grok)    https://aigateway.leanmcp.com/v1/xai
Fireworks     https://aigateway.leanmcp.com/v1/fireworks
ElevenLabs    https://aigateway.leanmcp.com/v1/elevenlabs
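Since all endpoints share one host and differ only in the final path segment, a small helper can keep them in one place instead of scattering URL strings across a codebase. A minimal sketch; the `gatewayBaseURL` helper and `PROVIDER_PATHS` map are ours, not part of any SDK:

```javascript
// Map each supported provider to its LeanMCP gateway base URL.
const GATEWAY_HOST = 'https://aigateway.leanmcp.com/v1';

const PROVIDER_PATHS = {
  openai: 'openai',
  anthropic: 'anthropic',
  xai: 'xai',
  fireworks: 'fireworks',
  elevenlabs: 'elevenlabs',
};

function gatewayBaseURL(provider) {
  const path = PROVIDER_PATHS[provider];
  if (!path) {
    throw new Error(`Unknown provider: ${provider}`);
  }
  return `${GATEWAY_HOST}/${path}`;
}
```

A typo in a provider name then fails loudly at client construction rather than as an opaque 404 at request time.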

Official SDKs

OpenAI SDK

import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://aigateway.leanmcp.com/v1/openai',
  apiKey: process.env.LEANMCP_API_KEY, // leanmcp_xxx
});

const response = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [
    { role: 'user', content: 'Hello!' }
  ],
});

console.log(response.choices[0].message.content);

Anthropic SDK

import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  baseURL: 'https://aigateway.leanmcp.com/v1/anthropic',
  apiKey: process.env.LEANMCP_API_KEY,
});

const response = await client.messages.create({
  model: 'claude-sonnet-4-5-20250929',
  max_tokens: 1024,
  messages: [
    { role: 'user', content: 'Hello!' }
  ],
});

console.log(response.content[0].text);

Streaming

import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://aigateway.leanmcp.com/v1/openai',
  apiKey: process.env.LEANMCP_API_KEY,
});

const stream = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [{ role: 'user', content: 'Write a poem' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
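Each streamed chunk carries only a delta, so building the full reply is a matter of concatenating `delta.content` values. A sketch of that accumulation as a reusable function; `collectStream` is our name, and it works on any async iterable shaped like the SDK's stream:

```javascript
// Collect the text deltas from an OpenAI-style chat stream into one string.
// The final chunk may carry no content, so missing deltas fall back to ''.
async function collectStream(stream) {
  let text = '';
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? '';
  }
  return text;
}
```

Useful when you want the assembled text for logging or storage while still rendering chunks incrementally elsewhere.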

Framework Integrations

LangChain

import { ChatOpenAI } from '@langchain/openai';

const model = new ChatOpenAI({
  model: 'gpt-5.2',
  configuration: {
    baseURL: 'https://aigateway.leanmcp.com/v1/openai',
    apiKey: process.env.LEANMCP_API_KEY,
  },
});

const response = await model.invoke('Hello!');
console.log(response.content);

Vercel AI SDK

import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';

const customOpenAI = createOpenAI({
  baseURL: 'https://aigateway.leanmcp.com/v1/openai',
  apiKey: process.env.LEANMCP_API_KEY,
});

const { text } = await generateText({
  model: customOpenAI('gpt-5.2'),
  prompt: 'Hello!',
});

console.log(text);

LlamaIndex

from llama_index.llms.openai import OpenAI
import os

llm = OpenAI(
    model="gpt-5.2",
    api_base="https://aigateway.leanmcp.com/v1/openai",
    api_key=os.environ.get("LEANMCP_API_KEY"),
)

response = llm.complete("Hello!")
print(response.text)

Adding Request Context

For better tracking and analytics, add custom headers to your requests:

const response = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: messages,
}, {
  headers: {
    'X-User-ID': userId,           // Track per-user usage
    'X-Session-ID': sessionId,     // Group requests by session
    'X-Request-Source': 'web-app', // Identify request source
    'X-Feature': 'chat',           // Tag by feature
  }
});

These headers appear in your dashboard and enable:
  • Per-user usage tracking and limits
  • Session-based request grouping
  • Feature-level analytics
  • Source attribution
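A small builder keeps those header names consistent across call sites and drops context you don't have for a given request. A sketch; the `contextHeaders` helper is ours, and only the `X-*` header names come from the gateway docs:

```javascript
// Build the per-request context headers the gateway uses for analytics.
// Undefined values are filtered out so optional context never sends
// empty headers.
function contextHeaders({ userId, sessionId, source, feature } = {}) {
  const headers = {
    'X-User-ID': userId,
    'X-Session-ID': sessionId,
    'X-Request-Source': source,
    'X-Feature': feature,
  };
  return Object.fromEntries(
    Object.entries(headers).filter(([, value]) => value !== undefined)
  );
}
```

Usage mirrors the example above: `client.chat.completions.create(body, { headers: contextHeaders({ userId, feature: 'chat' }) })`.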

OpenAI-Compatible Libraries

Any library that supports a custom base URL works with the LeanMCP AI Gateway:
Library               Configuration
openai (official)     baseURL parameter
anthropic (official)  baseURL parameter
langchain             baseURL in configuration
llama-index           api_base parameter
vercel/ai             baseURL in createOpenAI
litellm               api_base parameter
guidance              Custom OpenAI client
instructor            Pass custom OpenAI client

Generic Pattern

// Any OpenAI-compatible library
const client = new SomeAILibrary({
  baseURL: 'https://aigateway.leanmcp.com/v1/openai', // or /anthropic, /xai, etc.
  apiKey: 'leanmcp_your_api_key',
});

Environment Setup

# .env file
LEANMCP_API_KEY=leanmcp_your_api_key_here
LEANMCP_OPENAI_BASE_URL=https://aigateway.leanmcp.com/v1/openai
LEANMCP_ANTHROPIC_BASE_URL=https://aigateway.leanmcp.com/v1/anthropic
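Reading these variables once at startup, and failing fast when the key is missing, avoids confusing 401s deep inside request code. A minimal sketch; the `loadGatewayConfig` helper and its error message are ours, and the `leanmcp_` prefix check assumes keys follow the format shown above:

```javascript
// Read gateway settings from the environment, with a fail-fast check
// on the API key and the documented defaults for the base URLs.
function loadGatewayConfig(env = process.env) {
  const apiKey = env.LEANMCP_API_KEY;
  if (!apiKey || !apiKey.startsWith('leanmcp_')) {
    throw new Error('LEANMCP_API_KEY is missing or malformed');
  }
  return {
    apiKey,
    openaiBaseURL:
      env.LEANMCP_OPENAI_BASE_URL ?? 'https://aigateway.leanmcp.com/v1/openai',
    anthropicBaseURL:
      env.LEANMCP_ANTHROPIC_BASE_URL ?? 'https://aigateway.leanmcp.com/v1/anthropic',
  };
}
```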

Multiple Environments

const isDev = process.env.NODE_ENV === 'development';

const getBaseURL = () => {
  if (isDev) {
    return 'https://aigateway.leanmcp.com/v1/openai'; // Use gateway in dev
  }
  return 'https://api.openai.com/v1'; // Direct in production (optional)
};

const client = new OpenAI({
  baseURL: getBaseURL(),
  // The gateway expects a LeanMCP key; going direct expects an OpenAI key.
  apiKey: isDev ? process.env.LEANMCP_API_KEY : process.env.OPENAI_API_KEY,
});

Error Handling

try {
  const response = await client.chat.completions.create({
    model: 'gpt-5.2',
    messages: [{ role: 'user', content: 'Hello' }],
  });
} catch (error) {
  if (error.status === 401) {
    console.error('Invalid API key');
  } else if (error.status === 402) {
    console.error('Insufficient credits');
  } else if (error.status === 429) {
    console.error('Rate limited');
  } else {
    console.error('API error:', error.message);
  }
}
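Of these statuses, 429 is the one worth retrying automatically. A sketch of exponential backoff around any request function; the `withRetry` helper is ours, not part of the SDK, and it assumes errors carry a `status` field as the OpenAI SDK's APIError does:

```javascript
// Retry a request on 429 with exponential backoff; rethrow anything else.
// `fn` is any async function, e.g. one wrapping a chat.completions.create call.
async function withRetry(fn, { retries = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      // Only rate limits are retried, and only up to `retries` times.
      if (error.status !== 429 || attempt >= retries) throw error;
      const delay = baseDelayMs * 2 ** attempt; // 500ms, 1s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Usage: `const response = await withRetry(() => client.chat.completions.create({ model: 'gpt-5.2', messages }));`. Errors like 401 or 402 still surface immediately, since retrying them cannot help.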

Benefits for Developers

  • Unified Logging: All AI requests logged in one dashboard
  • User Tracking: Track usage per user with custom headers
  • Cost Attribution: Know which features drive AI costs
  • A/B Testing: Test different models and prompts
  • Security: Block malicious users and sensitive data
  • Rate Limiting: Set limits per user or globally

Next Steps