MCPs aren’t just for SaaS developers. They’re a flexible foundation for building AI-powered applications across different contexts. Here’s where MCPs truly shine.

1. SaaS Developers: Expose Your Platform to AI

If you have an existing SaaS with APIs and a database, MCPs let you expose your platform to AI agents without building everything from scratch.

The Problem

Your competitors are building AI agents. You could:
  • Build your own agent from scratch (expensive, time-consuming)
  • Let users export data to other tools (lose control, security risks)
  • Do nothing (fall behind)

The MCP Solution

Build an MCP that wraps your existing APIs. Now:
  • Your data stays yours — no exports needed
  • Users get AI agent support — through your MCP
  • You control access — auth, scopes, permissions built-in
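In practice, a wrapped tool is often little more than a thin handler over an existing REST endpoint. The sketch below is hypothetical: the endpoint URL, the `API_TOKEN` environment variable, and the helper names are placeholders, not part of any real API.

```typescript
// Hypothetical sketch: exposing an existing REST endpoint as an MCP tool
// handler. The URL, env var, and types are placeholders.

// Pure helper so the URL construction is easy to test in isolation
export function invoiceUrl(baseUrl: string, customerId: string): string {
  return `${baseUrl}/v1/invoices?customer=${encodeURIComponent(customerId)}`;
}

// The tool handler just forwards to the API you already run
export async function listInvoices(input: { customerId: string }) {
  const res = await fetch(invoiceUrl('https://api.example.com', input.customerId), {
    headers: { Authorization: `Bearer ${process.env.API_TOKEN}` },
  });
  if (!res.ok) throw new Error(`API error: ${res.status}`);
  return res.json();
}
```

Because the handler reuses your existing API, the auth, rate limiting, and business logic you already have keep doing their jobs.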

Building an Agent with MCPs

The agent pattern is simple: it is just a loop. You can build one with the OpenAI or Anthropic SDK in 10-20 minutes:
import Anthropic from '@anthropic-ai/sdk';
import { MCPClient } from '@leanmcp/client';

const anthropic = new Anthropic();
const mcp = new MCPClient('http://your-mcp-server.com');

// Get tools from your MCP
const tools = await mcp.listTools();

async function runAgent(userMessage: string) {
  const messages: Anthropic.MessageParam[] = [{ role: 'user', content: userMessage }];
  
  // Agent loop
  while (true) {
    const response = await anthropic.messages.create({
      model: 'claude-sonnet-4-20250514',
      max_tokens: 1024,
      tools: tools,  // Your MCP tools
      messages
    });
    
    // Check for tool calls
    const toolCalls = response.content.filter(c => c.type === 'tool_use');
    
    if (toolCalls.length === 0) {
      // No more tool calls - return final response
      return response.content.find(c => c.type === 'text')?.text;
    }
    
    // Record the assistant turn, then return tool results in a
    // user message, as the Anthropic Messages API expects
    messages.push({ role: 'assistant', content: response.content });

    const toolResults = [];
    for (const call of toolCalls) {
      const result = await mcp.callTool(call.name, call.input);
      toolResults.push({
        type: 'tool_result',
        tool_use_id: call.id,
        content: JSON.stringify(result)
      });
    }
    messages.push({ role: 'user', content: toolResults });
  }
}
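One detail the loop above glosses over: MCP tool definitions use `inputSchema` (camelCase, per the MCP specification), while the Anthropic Messages API expects `input_schema`. A small adapter, sketched here under that assumption, bridges the two:

```typescript
// Map MCP tool definitions to the shape the Anthropic Messages API expects.
// MCP uses `inputSchema`; Anthropic tools use `input_schema`.
type McpTool = { name: string; description?: string; inputSchema: Record<string, unknown> };
type AnthropicTool = { name: string; description: string; input_schema: Record<string, unknown> };

export function toAnthropicTools(mcpTools: McpTool[]): AnthropicTool[] {
  return mcpTools.map((t) => ({
    name: t.name,
    description: t.description ?? '',
    input_schema: t.inputSchema,
  }));
}
```

With an adapter like this, you would pass `toAnthropicTools(await mcp.listTools())` instead of the raw tool list.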

Why MCP Over Custom Tool Calls?

| Custom Tool Calls | MCP |
| --- | --- |
| Auth per tool | Auth built into the protocol |
| Manual scope management | Scopes via @leanmcp/auth |
| Users locked to your app | Users can use the MCP elsewhere |
| Rebuild for each LLM | Works with any LLM |
Key advantage: If users want to use their data in other tools (Cursor, Claude Desktop, custom apps), they can connect your MCP directly. No data export needed.

2. AI Agent Startups: Build MVPs Fast

If you’re building an AI agent startup, MCPs are the fastest path to an MVP.

The Traditional Approach

  1. Build tool call handlers
  2. Wire up OpenAI/Anthropic
  3. Build your agent loop
  4. Create test infrastructure
  5. Deploy and iterate

The MCP Approach

  1. Build an MCP with your tools, APIs, resources
  2. Add prompts for different behaviors (A/B testing)
  3. Test in Claude Desktop immediately
  4. Deploy when ready

A/B Testing Prompts

Add multiple prompts to your MCP for testing different behaviors:
@Prompt({ description: "Prompt A - Concise responses" })
promptA() {
  return {
    messages: [{
      role: "user",
      content: { type: "text", text: "Be concise. One sentence answers." }
    }]
  };
}

@Prompt({ description: "Prompt B - Detailed explanations" })
promptB() {
  return {
    messages: [{
      role: "user",
      content: { type: "text", text: "Provide detailed explanations with examples." }
    }]
  };
}

@Prompt({ description: "Prompt C - Step by step" })
promptC() {
  return {
    messages: [{
      role: "user",
      content: { type: "text", text: "Break down responses into numbered steps." }
    }]
  };
}
Test each prompt in Claude Desktop and see which performs best — no code changes needed.
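If you later want to A/B test these prompts against real users rather than by hand, a deterministic bucketing helper keeps each user on a stable variant. This helper is hypothetical, not part of LeanMCP:

```typescript
// Hypothetical helper: deterministically bucket a user into a prompt variant.
// The same userId always maps to the same variant, so results stay comparable.
export function pickVariant(userId: string, variants: string[]): string {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return variants[hash % variants.length];
}

// e.g. pickVariant('user-123', ['promptA', 'promptB', 'promptC'])
```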

Why MCP for MVPs?

| Benefit | How |
| --- | --- |
| Fast iteration | Change prompts without redeploying |
| Test anywhere | Claude Desktop, Cursor, Windsurf |
| Production-ready | The same MCP works in production |
| No vendor lock-in | Switch LLMs easily |

3. Enterprise: Internal Tooling & Agents

For large enterprises with internal agents, MCPs provide the security, access control, and auditability you need.

The Enterprise Challenge

  • Different teams need different data access
  • SSO integration required
  • Scope management per user/team
  • Audit trail for compliance
  • Works with enterprise LLM providers

MCP + Enterprise Auth

Implementation

import { AuthProvider, Authenticated } from "@leanmcp/auth";
// (the Tool decorator comes from your MCP server framework)

// Connect to your internal SSO
const authProvider = new AuthProvider('custom', {
  jwksUri: 'https://your-sso.company.com/.well-known/jwks.json',
  issuer: 'https://your-sso.company.com',
  audience: 'internal-mcp'
});

await authProvider.init();

@Authenticated(authProvider)
export class InternalDataService {
  
  @Tool({ description: "Get team data" })
  async getTeamData(input: { dataType: string }) {
    // authUser contains SSO claims including groups/scopes
    const userTeam = authUser['groups']?.[0];
    const allowedScopes = authUser['scopes'] || [];
    
    // Check if user has access to requested data
    if (!allowedScopes.includes(`read:${input.dataType}`)) {
      return { error: "Access denied", requiredScope: `read:${input.dataType}` };
    }
    
    // Fetch data based on team membership
    return await internalDb.getData({
      team: userTeam,
      type: input.dataType,
      requestedBy: authUser.sub  // Audit trail
    });
  }
}
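The scope gate inside `getTeamData` can be factored into a small pure function, which makes the access rule unit-testable outside the MCP runtime. The claim shape below is an assumption based on the example above:

```typescript
// Pure version of the scope check used in getTeamData.
// SsoClaims mirrors the claim shape assumed in the example above.
type SsoClaims = { sub: string; groups?: string[]; scopes?: string[] };

export function checkReadScope(
  claims: SsoClaims,
  dataType: string
): { ok: boolean; requiredScope: string } {
  const requiredScope = `read:${dataType}`;
  return { ok: (claims.scopes ?? []).includes(requiredScope), requiredScope };
}
```

Keeping the rule pure means compliance reviewers can audit it, and tests can cover it, without standing up SSO.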

Works with Enterprise LLM Providers

| Provider | Integration |
| --- | --- |
| OpenAI Enterprise | Same MCP, enterprise API keys |
| Anthropic Enterprise | Same MCP, enterprise agreement |
| AWS Bedrock | Same MCP, Claude on AWS |
| Azure OpenAI | Same MCP, Azure endpoints |
Key benefit: You don’t rebuild your agent for each LLM provider. The MCP stays the same — only the LLM connection changes.
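That portability claim can be made concrete with a thin adapter interface: the agent code depends only on the interface, and each provider row in the table becomes one adapter. The adapters below are stubs standing in for real SDK calls, assuming nothing about any actual provider API:

```typescript
// Sketch: the agent depends on a narrow interface; swapping LLM providers
// means swapping the adapter, while the MCP tool list passes through as-is.
interface ChatModel {
  provider: string;
  complete(prompt: string, tools: string[]): string;
}

// Stub adapters standing in for real provider SDK calls (hypothetical)
const bedrockModel: ChatModel = {
  provider: 'aws-bedrock',
  complete: (prompt, tools) => `[aws-bedrock] ${prompt} (tools: ${tools.length})`,
};
const azureModel: ChatModel = {
  provider: 'azure-openai',
  complete: (prompt, tools) => `[azure-openai] ${prompt} (tools: ${tools.length})`,
};

// The agent-side code never changes across providers:
export function ask(model: ChatModel, prompt: string, mcpToolNames: string[]): string {
  return model.complete(prompt, mcpToolNames);
}
```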

Summary: When to Use MCPs

| Use Case | Why MCP |
| --- | --- |
| SaaS Developer | Expose platform to AI, keep data control, auth built-in |
| AI Agent Startup | Fast MVPs, test in existing tools, no vendor lock-in |
| Enterprise Internal | SSO integration, scope management, audit trails |
Bottom line: If you’re building anything that connects AI to data or actions, MCPs give you auth, scopes, flexibility, and portability — all built into the protocol.