MCPs aren’t just for SaaS developers. They’re a flexible foundation for building AI-powered applications across different contexts. Here’s where MCPs truly shine.
1. SaaS Developers: Expose Your Platform to AI
If you have an existing SaaS with APIs and a database, MCPs let you expose your platform to AI agents without building everything from scratch.
The Problem
Your competitors are building AI agents. You could:
Build your own agent from scratch (expensive, time-consuming)
Let users export data to other tools (lose control, security risks)
Do nothing (fall behind)
The MCP Solution
Build an MCP that wraps your existing APIs. Now:
Your data stays yours — no exports needed
Users get AI agent support — through your MCP
You control access — auth, scopes, permissions built-in
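Wrapping an existing API as an MCP tool is mostly a thin translation layer: a tool definition that the MCP advertises, plus a handler that forwards to the endpoint you already run. A minimal sketch, assuming a hypothetical invoices endpoint (the URL, tool name, and schema are illustrative, not a real API):

```typescript
// Hypothetical tool definition, shaped like what an MCP server
// advertises to agents via tools/list.
const listInvoicesTool = {
  name: 'list_invoices',
  description: 'List invoices for the authenticated customer',
  inputSchema: {
    type: 'object',
    properties: {
      status: { type: 'string', enum: ['paid', 'open', 'overdue'] }
    }
  }
};

// Handler: translate the tool call into a call to your existing REST API.
// fetchImpl is injectable so the handler is easy to test.
async function listInvoices(
  input: { status?: string },
  fetchImpl: typeof fetch = fetch
) {
  const url = new URL('https://api.example.com/v1/invoices');
  if (input.status) url.searchParams.set('status', input.status);

  const res = await fetchImpl(url.toString());
  if (!res.ok) return { error: `Upstream API returned ${res.status}` };
  return await res.json();
}
```

The agent only ever sees the tool definition and the handler's result; your database and API stay behind your own service boundary.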
Building an Agent with MCPs
The agent pattern is simple — it’s just a loop:
You can build this with OpenAI or Anthropic in 10-20 minutes:
```typescript
import Anthropic from '@anthropic-ai/sdk';
import { MCPClient } from '@leanmcp/client';

const anthropic = new Anthropic();
const mcp = new MCPClient('http://your-mcp-server.com');

// Get tools from your MCP
const tools = await mcp.listTools();

async function runAgent(userMessage: string) {
  const messages: Anthropic.MessageParam[] = [
    { role: 'user', content: userMessage }
  ];

  // Agent loop
  while (true) {
    const response = await anthropic.messages.create({
      model: 'claude-sonnet-4-5-20250929',
      max_tokens: 1024,
      tools, // Your MCP tools
      messages
    });

    // Check for tool calls
    const toolCalls = response.content.filter(c => c.type === 'tool_use');

    if (toolCalls.length === 0) {
      // No more tool calls - return final response
      return response.content.find(c => c.type === 'text')?.text;
    }

    // Keep the assistant turn in the history before answering its tool calls
    messages.push({ role: 'assistant', content: response.content });

    // Execute tool calls via MCP and return results as tool_result
    // blocks inside a user turn (the format the Messages API expects)
    const toolResults: Anthropic.ToolResultBlockParam[] = [];
    for (const call of toolCalls) {
      const result = await mcp.callTool(call.name, call.input);
      toolResults.push({
        type: 'tool_result',
        tool_use_id: call.id,
        content: JSON.stringify(result)
      });
    }
    messages.push({ role: 'user', content: toolResults });
  }
}
```
| Custom Tool Calls | MCP |
| --- | --- |
| Auth per tool | Auth built into protocol |
| Manual scope management | Scopes via @leanmcp/auth |
| Users locked to your app | Users can use MCP elsewhere |
| Rebuild for each LLM | Works with any LLM |
Key advantage: If users want to use their data in other tools (Cursor, Claude Desktop, custom apps), they can connect your MCP directly. No data export needed.
2. AI Agent Startups: Build MVPs Fast
If you’re building an AI agent startup, MCPs are the fastest path to an MVP.
The Traditional Approach
Build tool call handlers
Wire up OpenAI/Anthropic
Build your agent loop
Create test infrastructure
Deploy and iterate
The MCP Approach
Build an MCP with your tools, APIs, resources
Add prompts for different behaviors (A/B testing)
Test in Claude Desktop immediately
Deploy when ready
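Step 3 above (testing in Claude Desktop) amounts to a single config entry. A sketch of `claude_desktop_config.json`, assuming your server is published as a hypothetical npm package `@my-org/my-mcp-server`:

```json
{
  "mcpServers": {
    "my-startup-mcp": {
      "command": "npx",
      "args": ["-y", "@my-org/my-mcp-server"]
    }
  }
}
```

The `mcpServers` key and command/args shape follow Claude Desktop's MCP configuration format; restart the app after editing and your tools and prompts show up immediately.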
A/B Testing Prompts
Add multiple prompts to your MCP for testing different behaviors:
```typescript
@Prompt({ description: "Prompt A - Concise responses" })
promptA() {
  return {
    messages: [{
      role: "user",
      content: { type: "text", text: "Be concise. One sentence answers." }
    }]
  };
}

@Prompt({ description: "Prompt B - Detailed explanations" })
promptB() {
  return {
    messages: [{
      role: "user",
      content: { type: "text", text: "Provide detailed explanations with examples." }
    }]
  };
}

@Prompt({ description: "Prompt C - Step by step" })
promptC() {
  return {
    messages: [{
      role: "user",
      content: { type: "text", text: "Break down responses into numbered steps." }
    }]
  };
}
```
Test each prompt in Claude Desktop and see which performs best — no code changes needed.
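If you later want to split live traffic between the variants programmatically, a deterministic splitter keeps each user on a consistent variant. A small sketch, with variant names matching the prompts above (the hashing scheme is just an illustration):

```typescript
// Deterministically assign a user to one of the prompt variants,
// so the same user always gets the same behavior across sessions.
function assignVariant(userId: string, variants: string[]): string {
  let hash = 0;
  for (const ch of userId) {
    // Simple 32-bit rolling hash; any stable hash works here
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return variants[hash % variants.length];
}

// Example: pick which MCP prompt to request for this user
const variant = assignVariant('user-42', ['promptA', 'promptB', 'promptC']);
```

Because assignment depends only on the user ID, you can compare metrics per variant without storing any assignment state.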
Why MCP for MVPs?
| Benefit | How |
| --- | --- |
| Fast iteration | Change prompts without redeploying |
| Test anywhere | Claude Desktop, Cursor, Windsurf |
| Production-ready | Same MCP works in production |
| No vendor lock-in | Switch LLMs easily |
3. Enterprises: Secure Internal Agents
For large enterprises with internal agents, MCPs provide the security, access control, and auditability you need.
The Enterprise Challenge
Different teams need different data access
SSO integration required
Scope management per user/team
Audit trail for compliance
Works with enterprise LLM providers
MCP + Enterprise Auth
Implementation
```typescript
import { AuthProvider, Authenticated } from "@leanmcp/auth";

// Connect to your internal SSO
const authProvider = new AuthProvider('custom', {
  jwksUri: 'https://your-sso.company.com/.well-known/jwks.json',
  issuer: 'https://your-sso.company.com',
  audience: 'internal-mcp'
});
await authProvider.init();

@Authenticated(authProvider)
export class InternalDataService {
  @Tool({ description: "Get team data" })
  async getTeamData(input: { dataType: string }) {
    // authUser contains SSO claims including groups/scopes
    const userTeam = authUser['groups']?.[0];
    const allowedScopes = authUser['scopes'] || [];

    // Check if user has access to the requested data
    if (!allowedScopes.includes(`read:${input.dataType}`)) {
      return { error: "Access denied", requiredScope: `read:${input.dataType}` };
    }

    // Fetch data based on team membership (internalDb is your data layer)
    return await internalDb.getData({
      team: userTeam,
      type: input.dataType,
      requestedBy: authUser.sub // Audit trail
    });
  }
}
```
Works with Enterprise LLM Providers
| Provider | Integration |
| --- | --- |
| OpenAI Enterprise | Same MCP, enterprise API keys |
| Anthropic Enterprise | Same MCP, enterprise agreement |
| AWS Bedrock | Same MCP, Claude on AWS |
| Azure OpenAI | Same MCP, Azure endpoints |
Key benefit: You don’t rebuild your agent for each LLM provider. The MCP stays the same — only the LLM connection changes.
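To make that concrete: switching providers means re-shaping the same MCP tool list, nothing more. A sketch of the two adapters, where the `McpTool` shape mirrors what an MCP `tools/list` call returns, and the output field names follow Anthropic's `input_schema` and OpenAI's function-calling `parameters` conventions:

```typescript
// An MCP tool definition as returned by tools/list.
interface McpTool {
  name: string;
  description?: string;
  inputSchema: { type: string; properties?: Record<string, unknown> };
}

// Anthropic's Messages API expects a flat tool with `input_schema`.
function toAnthropicTool(t: McpTool) {
  return {
    name: t.name,
    description: t.description,
    input_schema: t.inputSchema
  };
}

// OpenAI's chat completions API expects a nested `function` object
// whose JSON Schema lives under `parameters`.
function toOpenAiTool(t: McpTool) {
  return {
    type: 'function' as const,
    function: {
      name: t.name,
      description: t.description,
      parameters: t.inputSchema
    }
  };
}
```

Your agent loop, tools, auth, and scopes are untouched; only this mapping and the client SDK change per provider.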
Summary: When to Use MCPs
| Use Case | Why MCP |
| --- | --- |
| SaaS Developer | Expose platform to AI, keep data control, auth built-in |
| AI Agent Startup | Fast MVPs, test in existing tools, no vendor lock-in |
| Enterprise Internal | SSO integration, scope management, audit trails |
Bottom line: If you’re building anything that connects AI to data or actions, MCPs give you auth, scopes, flexibility, and portability — all built into the protocol.