> **Documentation Index:** fetch the complete documentation index at https://docs.leanmcp.com/llms.txt and use it to discover all available pages before exploring further.
Converting APIs to MCPs looks trivial: plenty of tools auto-convert OpenAPI specs directly into MCPs. But direct conversion is problematic, because what works for developers often fails badly for AI agents.
## The Problem with Direct Conversion
APIs are designed to return comprehensive data. MCPs need to return minimal, relevant data.
### Example: Web Scraper API
Take an API like Apify's web scraper. When you search, it returns:

- All search result links
- Full HTML of each page
- Metadata, timestamps, pagination info
- 10-20 results per request
Now imagine feeding this to an LLM. You're dumping entire HTML pages into the context window, and the model drowns in irrelevant content:

- It forgets earlier context
- It hits token limits
- It produces poor results
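To make the scale concrete, here is a back-of-the-envelope sketch. The ~4 characters per token ratio is a common rule of thumb for English text, not an exact figure:

```typescript
// Rough token estimate: ~4 characters per token (rule of thumb, not exact)
const approxTokens = (s: string): number => Math.ceil(s.length / 4);

// A typical scraped page is tens of kilobytes of HTML
const rawHtmlPage = "x".repeat(60_000);
const snippet = "Key sentence extracted from the page.";

console.log(approxTokens(rawHtmlPage)); // → 15000 tokens for ONE result
console.log(approxTokens(snippet));     // → 10 tokens
```

At 10-20 raw results per call, a single tool response can approach or exceed even a large context window, before the agent has done any reasoning at all.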
### Same Problem Everywhere
| Data Source | API Returns | MCP Should Return |
| --- | --- | --- |
| Web scraper | Full HTML pages | Extracted text snippets |
| Database | All matching rows | Top N relevant results |
| HubSpot/CRM | Full contact records | Key fields only |
| Search APIs | Paginated results | Summarized highlights |
Direct API → MCP conversion ignores this fundamental difference.
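A minimal sketch of what "MCP should return" means in code. The field names here (`RawResult`, `excerpt`, etc.) are hypothetical stand-ins, and the tag-stripping is deliberately naive; the point is the shape of the output, not the extraction quality:

```typescript
// Hypothetical shape of one raw scraper result (what the API returns)
interface RawResult {
  url: string;
  html: string;                          // full page HTML
  headers: Record<string, string>;       // transport metadata
  fetchedAt: string;
  pagination: { page: number; total: number };
}

// MCP-shaped result: only what the agent can act on
interface McpResult {
  url: string;
  excerpt: string;
}

function toMcpResult(raw: RawResult, maxChars = 300): McpResult {
  // Naive text extraction: strip tags, collapse whitespace, truncate
  const text = raw.html.replace(/<[^>]*>/g, " ").replace(/\s+/g, " ").trim();
  return { url: raw.url, excerpt: text.slice(0, maxChars) };
}
```

Everything the agent does not need (headers, timestamps, pagination, raw markup) is dropped before it ever reaches the context window.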
## Solution 1: Summarize Before Returning
Don’t return raw API responses. Process them first.
### Option A: Pre-computed Summaries
Store summaries alongside your data:
```typescript
// Your database already has summaries
const results = await db.query(`
  SELECT id, title, summary_slug, key_metrics
  FROM articles
  WHERE topic = ?
  LIMIT 5
`, [input.topic]);

// Return only the pre-computed summary fields
return results.map(r => ({
  id: r.id,
  title: r.title,
  summary: r.summary_slug  // Already computed, stored in DB
}));
```
### Option B: On-the-fly Summarization
Use a small/nano LLM to summarize before returning:
```typescript
import OpenAI from 'openai';

const openai = new OpenAI();

@Tool({ description: "Search articles and return summaries" })
async searchArticles(input: { query: string }) {
  // Fetch from your API
  const rawResults = await api.search(input.query);

  // Summarize each result with a fast, cheap model
  const summaries = await Promise.all(
    rawResults.slice(0, 5).map(async (item) => {
      const summary = await openai.chat.completions.create({
        model: "gpt-5.2",  // Fast, cheap nano model
        messages: [{
          role: "user",
          content: `Summarize in 2 sentences: ${item.content.slice(0, 2000)}`
        }],
        max_tokens: 100
      });
      return {
        id: item.id,
        title: item.title,
        summary: summary.choices[0].message.content
      };
    })
  );

  return summaries;
}
```
Nano models to consider:

- `gpt-5.2`: fast, cheap, good quality
- `claude-haiku-4-5-20251001`: Anthropic's fastest model
- `gemini-1.5-flash`: Google's speed-optimized model
- Local models via Ollama for zero-cost summarization
## Solution 2: Build a Layer on Top
Don't modify your existing API. Build an MCP-optimized layer on top.
Your existing API continues serving developers. Your MCP layer:

- Calls the same API
- Processes/summarizes responses
- Returns minimal, relevant data
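One way to sketch such a layer, assuming a hypothetical `SearchApi` client as a stand-in for your real API:

```typescript
// Hypothetical existing API client: returns full, developer-oriented payloads
type SearchApi = (query: string) =>
  Promise<Array<{ id: string; title: string; body: string; meta: object }>>;

// The MCP layer wraps the same API and trims the response for agents
function makeMcpSearch(searchApi: SearchApi, topN = 5) {
  return async (query: string) => {
    const raw = await searchApi(query);   // same API your web app uses
    return raw.slice(0, topN).map(r => ({
      id: r.id,
      title: r.title,
      snippet: r.body.slice(0, 200)       // drop bulk content and metadata
    }));
  };
}
```

Because the layer takes the API client as a parameter, the existing API stays untouched and the trimming logic can be tested in isolation with a fake client.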
## Authentication: Use Your Existing Auth
Don’t create a separate auth system for MCPs. Use the same OAuth server that authenticates your existing users.
### What You Need
| Component | Description |
| --- | --- |
| OAuth Client ID | Your application's client identifier |
| OAuth Client Secret | Your application's secret (keep secure!) |
| OAuth Server URL | Your auth provider's token endpoint |
| Redirect URI | Where to redirect after authentication |
| Scopes | Permissions the MCP needs |
### Same Auth, Same Data
### Implementation with @leanmcp/auth
```typescript
import { Service, Tool } from 'leanmcp';
import { OAuth, Protected, AuthUser } from '@leanmcp/auth';

@Service()
@OAuth({
  provider: 'custom',
  clientId: process.env.OAUTH_CLIENT_ID,
  clientSecret: process.env.OAUTH_CLIENT_SECRET,
  authorizationUrl: 'https://your-auth-server.com/authorize',
  tokenUrl: 'https://your-auth-server.com/token',
  scopes: ['read:articles', 'write:comments']
})
export class ArticleService {
  @Tool({ description: "Get user's saved articles" })
  @Protected()
  async getSavedArticles(@AuthUser() user: any) {
    // The user is authenticated with your existing OAuth,
    // with the same token that works with your web app
    return await api.getArticles({ userId: user.sub });
  }
}
```
### Defining Scopes
Define scopes based on what the MCP actually needs:
```typescript
// ❌ Too broad: don't request all scopes
scopes: ['read', 'write', 'delete', 'admin']

// ✅ Minimal: request only what's needed
scopes: ['read:articles', 'read:profile']
```
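If your MCP layer needs to verify permissions itself, a least-privilege check is a one-liner. This is a sketch (`@leanmcp/auth` may already enforce scopes for you):

```typescript
// Check that a token's granted scopes cover everything a tool needs
function hasScopes(granted: string[], needed: string[]): boolean {
  const g = new Set(granted);
  return needed.every(scope => g.has(scope));
}
```

A tool that only reads articles should pass with `['read:articles']` granted and fail if it unexpectedly needs a write scope.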
## Complete Example: HubSpot Integration
```typescript
import { Service, Tool } from 'leanmcp';
import { OAuth, Protected, AuthUser } from '@leanmcp/auth';

@Service()
@OAuth({
  provider: 'custom',
  clientId: process.env.HUBSPOT_CLIENT_ID,
  clientSecret: process.env.HUBSPOT_CLIENT_SECRET,
  authorizationUrl: 'https://app.hubspot.com/oauth/authorize',
  tokenUrl: 'https://api.hubapi.com/oauth/v1/token',
  scopes: ['crm.objects.contacts.read']
})
export class HubSpotMCP {
  @Tool({ description: "Search contacts" })
  @Protected()
  async searchContacts(
    @AuthUser() user: any,
    input: { query: string }
  ) {
    // Fetch from the HubSpot API
    const contacts = await hubspot.crm.contacts.searchApi.doSearch({
      query: input.query,
      limit: 10,
      properties: ['firstname', 'lastname', 'email', 'company']
    });

    // Return only essential fields (not full contact records)
    return contacts.results.map(c => ({
      id: c.id,
      name: `${c.properties.firstname} ${c.properties.lastname}`,
      email: c.properties.email,
      company: c.properties.company
    }));
  }
}
```
## Summary
| Don't | Do |
| --- | --- |
| Auto-convert OpenAPI to MCP | Build an optimized MCP layer |
| Return raw API responses | Summarize/filter before returning |
| Create separate MCP auth | Use your existing OAuth server |
| Request all scopes | Request minimal scopes needed |
| Return full records | Return essential fields only |
- **Reducing Tokens**: more on optimizing MCP responses
- **Auth Examples**: see authentication in action