Cline

Cline (formerly Claude Dev) is a VS Code extension that enables AI-assisted coding directly in your editor. This guide shows how to configure Cline to use LLM Gateway.

Prerequisites

Configuration

Step 1: Open Cline Settings

  1. Open VS Code
  2. Click on the Cline icon in the sidebar (or press Ctrl+Shift+P / Cmd+Shift+P and search for “Cline”)
  3. Click the settings gear icon in the Cline panel

Step 2: Configure API Provider

Configure the following settings in the Cline settings panel:

| Setting                   | Value                                           |
|---------------------------|-------------------------------------------------|
| API Provider              | OpenAI Compatible                               |
| Base URL                  | https://<llmgw-deployment-url>/openai           |
| OpenAI Compatible API Key | Your LLMGW token                                |
| Model ID                  | Model group name (e.g., gpt-5-nano for testing) |

Note: The API key is stored locally and only used to make API requests from the extension.
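The settings above map directly onto an OpenAI-compatible chat completion request, so you can exercise the same Base URL, token, and Model ID from a terminal before configuring Cline. The sketch below assumes the gateway forwards the standard OpenAI path /v1/chat/completions under the /openai prefix; the deployment URL and token are placeholders:

```shell
# Placeholder values — substitute your actual deployment URL and LLMGW token.
LLMGW_BASE_URL="https://<llmgw-deployment-url>/openai"
LLMGW_TOKEN="your-llmgw-token"

# Send a minimal chat completion request through the gateway.
curl -s "$LLMGW_BASE_URL/v1/chat/completions" \
  -H "Authorization: Bearer $LLMGW_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5-nano",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```

A successful JSON response confirms that the Base URL, token, and model ID you are about to enter in Cline are valid.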

Step 3: Configure Custom Headers (Optional)

If your LLMGW configuration requires project/user identification, click Add Header to add custom headers:

| Header Name   | Value               |
|---------------|---------------------|
| llmgw-project | Your project name   |
| llmgw-user    | Your username/email |

Check with your administrator if these headers are required for your setup.
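If your setup does require these headers, the same request sketch from Step 2 extends with one -H flag per header. The project and user values below are placeholders:

```shell
# Placeholder values — substitute your actual deployment URL, token,
# project name, and username/email.
curl -s "https://<llmgw-deployment-url>/openai/v1/chat/completions" \
  -H "Authorization: Bearer your-llmgw-token" \
  -H "Content-Type: application/json" \
  -H "llmgw-project: my-project" \
  -H "llmgw-user: me@example.com" \
  -d '{
    "model": "gpt-5-nano",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```

These correspond to the custom headers you add in the Cline settings panel.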

Step 4: Save Configuration

Close the settings panel. Cline will use these settings for all subsequent requests.

Model Selection

Cline allows you to specify any model available in LLMGW. Check the Available Models page for the current list.

| Model                                     | Use Case                                          |
|-------------------------------------------|---------------------------------------------------|
| gpt-5-nano                                | Recommended for testing; ultra low cost           |
| gpt-4o                                    | Good for most coding tasks                        |
| gpt-4.1                                   | Best for complex tasks requiring reasoning        |
| anthropic.claude-sonnet-4-5-20250929-v1:0 | Claude Sonnet 4.5 (via /aws-bedrock, not /openai) |

Switching Models

You can change the model at any time in Cline settings. Consider using:

  • Larger models for complex refactoring, architecture decisions, or debugging
  • Smaller models for simple code generation, documentation, or quick questions

Configuration via settings.json

You can also configure Cline via VS Code’s settings.json:

```json
{
  "cline.apiProvider": "openai-compatible",
  "cline.openAiCompatible.baseUrl": "https://<llmgw-deployment-url>/openai",
  "cline.openAiCompatible.modelId": "gpt-5-nano"
}
```

Note: For security, it’s recommended to set the API key via the Cline UI rather than in settings.json, as settings files may be committed to version control.

Verification

To verify your configuration:

  1. Open a file in VS Code
  2. Open the Cline panel
  3. Type a simple request like “Explain this code”
  4. Verify you receive a response

Check the LLMGW Admin Portal to confirm requests are being logged under your project.
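As a quick connectivity check outside VS Code, you can also list the models your token can access, assuming the gateway forwards the standard OpenAI /v1/models endpoint under the /openai prefix (URL and token are placeholders):

```shell
# Placeholder values — substitute your actual deployment URL and token.
curl -s "https://<llmgw-deployment-url>/openai/v1/models" \
  -H "Authorization: Bearer your-llmgw-token"
```

The model IDs in the response should match the Model ID values Cline accepts in its settings.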

Troubleshooting

“Invalid API Key” Error

If you see authentication errors:

  1. Verify your token hasn’t expired
  2. Generate a new token from the Admin Portal if needed
  3. Ensure you’ve copied the full token string
  4. Verify the endpoint URL is correct

“Model not found” Error

If you get “model not found” errors:

  1. Check the Available Models page for correct model IDs
  2. Ensure correct spelling and casing

Connection Timeout

If you experience connection timeouts:

  1. Check network connectivity to LLMGW
  2. If using a VPN, ensure it allows access to LLMGW endpoints

Slow Responses

If responses are slower than expected:

  1. Consider using a faster model for simple tasks
  2. Check if the model is under high load (contact administrator)
  3. Try a different model from the same group

Spend Limit Exceeded

If requests are rejected due to spend limits:

  1. Contact your administrator or PM to request a limit increase

Best Practices

  1. Use appropriate models - Select models based on task complexity
  2. Don’t commit API keys - Use the Cline UI to set sensitive values
  3. Review generated code - Always review AI-generated code before accepting