Claude Code CLI
Claude Code is Anthropic’s official agentic coding tool that runs in your terminal. This guide shows how to configure Claude Code to use LLM Gateway.
Prerequisites
- Claude Code installed (`npm install -g @anthropic-ai/claude-code`)
- LLMGW API token (see User Tokens)
Configuration
Claude Code connects to LLMGW via the AWS Bedrock endpoint. Add the following environment variables to your shell configuration file (.bashrc, .zshrc, or similar):
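A minimal sketch of that configuration, with placeholder values (the endpoint URL and token below are illustrative; substitute your actual LLMGW endpoint and token, and take the model IDs from the Available Models table below):

```shell
# ~/.bashrc or ~/.zshrc — placeholder values; replace the endpoint and token with your own
export ANTHROPIC_BEDROCK_BASE_URL="https://llmgw.example.com/bedrock"   # placeholder LLMGW endpoint
export ANTHROPIC_AUTH_TOKEN="your-llmgw-api-token"                      # token from the Admin Portal
export ANTHROPIC_DEFAULT_HAIKU_MODEL="anthropic.claude-haiku-4-5-20251001-v1:0-native"
export ANTHROPIC_DEFAULT_SONNET_MODEL="anthropic.claude-sonnet-4-5-20250929-v1:0-native"
export CLAUDE_CODE_USE_BEDROCK=1
export CLAUDE_CODE_SKIP_BEDROCK_AUTH=1
```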
After adding these variables, reload your shell:
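For example, if you added the variables to `.zshrc`:

```shell
source ~/.zshrc   # or: source ~/.bashrc
```

Opening a new terminal window works just as well.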
Environment Variables Explained
| Variable | Description |
|---|---|
| `ANTHROPIC_DEFAULT_HAIKU_MODEL` | Model ID for fast/cheap operations |
| `ANTHROPIC_DEFAULT_SONNET_MODEL` | Model ID for complex tasks |
| `ANTHROPIC_BEDROCK_BASE_URL` | LLMGW AWS Bedrock endpoint |
| `ANTHROPIC_AUTH_TOKEN` | Your LLMGW API token |
| `CLAUDE_CODE_USE_BEDROCK` | Enable Bedrock mode |
| `CLAUDE_CODE_SKIP_BEDROCK_AUTH` | Skip AWS auth (LLMGW handles authentication) |
Model Selection
Claude Code uses two models:
- Haiku model - Used for fast, simple operations (file summaries, quick questions)
- Sonnet model - Used for complex coding tasks (code generation, refactoring)
Cost Saving Tip: For testing purposes, set both models to Haiku to minimize costs. You can upgrade to Sonnet later when you need better quality for complex tasks.
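One way to do this is to point both model slots at the Haiku model ID (taken from the Available Models table below):

```shell
# Testing setup: both slots use Haiku to minimize spend
export ANTHROPIC_DEFAULT_HAIKU_MODEL="anthropic.claude-haiku-4-5-20251001-v1:0-native"
export ANTHROPIC_DEFAULT_SONNET_MODEL="anthropic.claude-haiku-4-5-20251001-v1:0-native"
```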
Available Models
Check the Available Models page for the current list. Common options:
| Model | Use in Variable |
|---|---|
| Claude Haiku 4.5 (recommended for testing) | `anthropic.claude-haiku-4-5-20251001-v1:0-native` |
| Claude Sonnet 4.5 (for production/complex tasks) | `anthropic.claude-sonnet-4-5-20250929-v1:0-native` |
Verification
To verify your configuration is working:
1. Start Claude Code.
2. Ask a simple question to test connectivity.
3. Check the LLMGW Admin Portal to confirm requests are being logged.
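The first two steps can be run from a terminal, assuming the `claude` CLI is on your PATH (`-p` is Claude Code's print mode, which sends a single prompt non-interactively):

```shell
# 1. Start an interactive Claude Code session
claude

# 2. Or send a one-off test prompt non-interactively
claude -p "Say hello"
```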
Troubleshooting
“Invalid API Key” Error
If you see authentication errors:
- Verify your token hasn’t expired
- Regenerate your token from the Admin Portal if needed
- Ensure you’ve copied the full token string
- Verify the endpoint URL is correct
“Model not found” Error
If you get “model not found” errors:
- Check the Available Models page for correct model IDs
- Ensure model IDs match exactly (including the `-native` suffix)
Connection Timeout
If you experience connection timeouts:
- Check network connectivity to LLMGW
- If using a VPN, ensure it allows access to LLMGW endpoints
Slow Responses
If responses are slower than expected:
- Consider using a faster model for simple tasks
- Check if the model is under high load (contact administrator)
- Try a different model from the same group
Spend Limit Exceeded
A spend limit error means you’ve reached your daily, weekly, or monthly spend limit. To resolve:
- Wait for the limit to reset (check with your administrator for reset period)
- Contact your administrator/PM to increase limits if needed
Best Practices
- Use environment variables for sensitive configuration like API tokens
- Verify connectivity - Check the Admin Portal to confirm requests are being logged
- Choose appropriate models - Haiku for simple tasks saves costs
- Reload shell after changing environment variables