
Cursor Integration

Cursor is an AI-powered code editor. You can route Cursor’s AI requests through the LeanMCP AI Gateway to gain visibility into what code is being sent to AI providers.

Prerequisites

  1. Get Credits: Purchase credits at ship.leanmcp.com
  2. Create API Key: Create an API key at ship.leanmcp.com/api-keys with SDK permissions

Configuration

  1. Open Cursor Settings: Press Cmd + , (Mac) or Ctrl + , (Windows/Linux), or go to Cursor > Settings > Cursor Settings
  2. Navigate to Models: Click Models in the left sidebar
  3. Configure OpenAI-Compatible Endpoint: Find the OpenAI API Key section and configure:
     • API Key: leanmcp_your_api_key_here
     • Override OpenAI Base URL: https://aigateway.leanmcp.com/v1/openai
  4. Save and Restart: Save your settings and restart Cursor for the changes to take effect
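
Before testing inside Cursor, it can help to confirm the key and base URL work with any OpenAI-compatible client. Below is a minimal sketch using the official openai Python package; the model ID is a placeholder, since this page does not specify which IDs the gateway accepts.

```python
from openai import OpenAI

# Point a standard OpenAI client at the LeanMCP AI Gateway instead of api.openai.com.
client = OpenAI(
    api_key="leanmcp_your_api_key_here",                 # your LeanMCP key
    base_url="https://aigateway.leanmcp.com/v1/openai",  # the override URL from step 3
)

response = client.chat.completions.create(
    model="MODEL_ID",  # placeholder: substitute a model available on your plan
    messages=[{"role": "user", "content": "Reply with OK if you can read this."}],
)
print(response.choices[0].message.content)
```

If this prints a reply and the request appears in your dashboard, the gateway side is working and any remaining issues are in the Cursor settings themselves.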

Verifying the Setup

  1. Open any file in Cursor
  2. Use Cmd+K (or Ctrl+K) to open the AI prompt
  3. Ask a simple question like “What does this file do?”
  4. Check your LeanMCP Dashboard to see the request logged

What You Can See

Once configured, you’ll be able to see in your LeanMCP dashboard:
  • Full context sent - exactly what code Cursor includes in each request
  • Token usage - how many tokens each request uses
  • Model used - which AI model processed your request
  • Sensitive data - any API keys, passwords, or PII detected in your code
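
The token usage and model name shown in the dashboard are also returned in each OpenAI-compatible response, so you can cross-check them locally. A small sketch, again with a placeholder model ID:

```python
from openai import OpenAI

client = OpenAI(
    api_key="leanmcp_your_api_key_here",
    base_url="https://aigateway.leanmcp.com/v1/openai",
)

response = client.chat.completions.create(
    model="MODEL_ID",  # placeholder: substitute a model available on your plan
    messages=[{"role": "user", "content": "Summarize: def add(a, b): return a + b"}],
)

# The same request is logged in the LeanMCP dashboard with its full context;
# locally you can at least confirm the model and token counts it reports.
print("model:", response.model)
print("prompt tokens:", response.usage.prompt_tokens)
print("completion tokens:", response.usage.completion_tokens)
```

The full-context and sensitive-data views are only available in the dashboard.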

Supported Models

Through the LeanMCP AI Gateway, Cursor can access:
  • OpenAI: GPT-5.2
  • Anthropic: Claude 3.5 Sonnet, Claude 3 Opus
Cursor primarily uses the OpenAI endpoint. For Anthropic models, you may need to configure a separate API endpoint.
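
If the gateway exposes the standard OpenAI-compatible /v1/models endpoint (an assumption, not confirmed by this page), you can list the model IDs it will accept rather than guessing:

```python
from openai import OpenAI

client = OpenAI(
    api_key="leanmcp_your_api_key_here",
    base_url="https://aigateway.leanmcp.com/v1/openai",
)

# Lists the model IDs the gateway advertises, if it implements the /models endpoint.
for model in client.models.list():
    print(model.id)
```

If the endpoint is not available you will get an HTTP error; in that case, use the model names from the list above.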

Troubleshooting

If requests are failing:
  • Verify your API key is correct
  • Check the base URL is exactly https://aigateway.leanmcp.com/v1/openai
  • Restart Cursor after changing settings
  • Ensure your API key has SDK permissions
  • Check that you have credits in your account
  • Verify the API key hasn’t been deleted

If responses are slow:
  • The gateway adds minimal latency (~50ms)
  • If requests are significantly slower, check your internet connection
  • Try a different model; smaller models generally respond faster
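
When it is unclear whether the problem is the key, credits, or latency, a direct call to the gateway narrows it down faster than retrying inside Cursor. A rough diagnostic sketch (the model ID is again a placeholder):

```python
import time

from openai import OpenAI, APIConnectionError, APIStatusError

client = OpenAI(
    api_key="leanmcp_your_api_key_here",
    base_url="https://aigateway.leanmcp.com/v1/openai",
)

start = time.perf_counter()
try:
    client.chat.completions.create(
        model="MODEL_ID",  # placeholder: substitute a model available on your plan
        messages=[{"role": "user", "content": "ping"}],
        max_tokens=1,
    )
    print(f"Gateway reachable, round trip {time.perf_counter() - start:.2f}s")
except APIStatusError as e:
    # 401/403 usually point to a bad or deleted key, or missing SDK permissions;
    # payment- or quota-related errors usually point to missing credits.
    print("Gateway returned HTTP", e.status_code, "-", e.message)
except APIConnectionError as e:
    print("Could not reach the gateway:", e)
```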

Next Steps