AI Providers
Kuse Cowork supports multiple AI providers, giving you flexibility in choosing the right model for your needs.
Supported Providers
Official API Providers
These providers require API keys from the respective companies.
Anthropic Claude
The default and recommended provider for most tasks.
Configuration:
Base URL: https://api.anthropic.com
Auth: x-api-key header
Get your API key at console.anthropic.com
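As a sketch, the headers Anthropic's Messages API expects look like the following (the key value is a placeholder, and the helper name is illustrative):

```python
# Sketch of the headers for the Anthropic Messages API.
# Note: auth uses the x-api-key header, not a Bearer token,
# and the anthropic-version header is required.
def anthropic_headers(api_key: str) -> dict:
    return {
        "x-api-key": api_key,               # your API key (placeholder here)
        "anthropic-version": "2023-06-01",  # required API version header
        "content-type": "application/json",
    }

headers = anthropic_headers("sk-ant-placeholder")
```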
OpenAI
Support for GPT models including the latest GPT-5 series.
GPT-5 Responses API
GPT-5 models use OpenAI's new Responses API format, which is automatically detected and handled.
Get your API key at platform.openai.com
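One way such detection might work is a simple routing heuristic on the model name (a hypothetical sketch, not Kuse Cowork's actual logic):

```python
# Hypothetical heuristic: route GPT-5 models to the Responses API,
# everything else to Chat Completions.
def endpoint_for(model: str) -> str:
    if model.startswith("gpt-5"):
        return "/v1/responses"
    return "/v1/chat/completions"
```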
Google Gemini
Google's latest AI models with thinking capabilities.
Special Features:
- Thinking/reasoning mode with thoughtSignature support
- Function calling with thought signatures
Get your API key at ai.google.dev
Minimax
Advanced Chinese language model provider.
Local Inference
Run models locally for privacy and offline use.
Ollama
The easiest way to run local models.
Setup:
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Pull a model
ollama pull llama3.3:latest
Available Models:
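Once a model is pulled, a chat request to Ollama is a plain JSON POST against the local server. This sketch only builds the request body; the field names follow Ollama's /api/chat format:

```python
import json

# Request body for Ollama's /api/chat endpoint at http://localhost:11434.
def ollama_chat_body(model: str, prompt: str) -> str:
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # set True for token-by-token streaming
    })

body = ollama_chat_body("llama3.3:latest", "Hello!")
```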
Configuration:
Base URL: http://localhost:11434
Auth: None required
LocalAI
OpenAI-compatible local inference server.
Base URL: http://localhost:8080
Auth: None required
vLLM / SGLang / TGI
High-performance inference servers for self-hosted models.
Aggregation Services
Access multiple models through a single API.
OpenRouter
Access 100+ models through one API.
Get your API key at openrouter.ai
Groq
Ultra-fast inference with specialized hardware.
Get your API key at console.groq.com
Together AI
Cloud inference for open-source models.
DeepSeek
Chinese AI provider with strong coding models.
SiliconFlow
Cloud inference service with Chinese model focus.
Provider Configuration
Switching Providers
- Open Settings (⚙️)
- Select provider from the dropdown
- Enter API key (if required)
- Select model
- Click "Test Connection"
API Key Storage
API keys are stored in:
~/.kuse-cowork/settings.db
Keys are:
- Stored locally only
- Never sent to third parties
- Associated with specific providers
Per-Provider Keys
You can configure different API keys for each provider:
{
"providerKeys": {
"anthropic": "sk-ant-...",
"openai": "sk-...",
"openrouter": "sk-or-..."
}
}
When switching models, the appropriate key is automatically selected.
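The lookup behind that selection can be sketched as a simple map from provider name to key (illustrative, not the actual Kuse Cowork implementation; key values are placeholders):

```python
# Illustrative per-provider key lookup. Providers that need no key
# (e.g. a local Ollama server) simply have no entry.
provider_keys = {
    "anthropic": "sk-ant-...",
    "openai": "sk-...",
    "openrouter": "sk-or-...",
}

def key_for(provider: str):
    # Returns None for providers that require no key.
    return provider_keys.get(provider)
```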
Custom Providers
OpenAI-Compatible Endpoints
Connect to any OpenAI-compatible API:
- Select "Custom Service" as provider
- Enter base URL
- Configure authentication
- Enter model ID
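Putting those steps together, the request URL and auth header for an OpenAI-compatible endpoint can be assembled like this (a sketch; the helper name is illustrative):

```python
# Build a chat-completions URL and headers from a base URL.
# Local servers such as LM Studio typically need no API key.
def build_request(base_url: str, api_key: str = None):
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    return url, headers

url, headers = build_request("http://localhost:1234/v1")
```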
Example: LM Studio
Base URL: http://localhost:1234/v1
Auth: None
Model: local-model
Enterprise Deployments
For Azure OpenAI or self-hosted deployments:
Base URL: https://your-deployment.openai.azure.com
Auth: Bearer token
Model: your-deployment-name
Reasoning Models
Some models have special requirements:
Temperature Restrictions
The following models don't support custom temperature:
- OpenAI: o1-*, o3-*, gpt-5*
- DeepSeek: deepseek-reasoner
Temperature is automatically ignored for these models.
Extended Thinking
Some models support extended thinking/reasoning:
- Gemini 3: Uses thoughtSignature for function calling
- Claude: Uses extended thinking mode
Best Practices
Choosing a Provider
Cost Management
- Use smaller models for simple tasks
- Use local models for development/testing
- Monitor usage through provider dashboards
Performance Tips
- Groq offers the fastest cloud inference
- Ollama is the fastest local option (if you have a GPU)
- Use streaming for better UX
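Streaming responses arrive as server-sent events; this sketch shows how OpenAI-style "data:" lines can be reassembled into text (the sample lines are illustrative input, not live output):

```python
import json

# Reassemble text from OpenAI-style streaming SSE lines.
def collect_deltas(sse_lines):
    text = ""
    for line in sse_lines:
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":  # end-of-stream sentinel
            break
        chunk = json.loads(payload)
        text += chunk["choices"][0]["delta"].get("content", "")
    return text

sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
```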
Troubleshooting
Connection test fails
1. Verify API key is correct
2. Check base URL format
3. Ensure network connectivity
4. Check provider status page
Model not found
1. Verify model ID spelling
2. Check if model is available in your plan
3. For Ollama, ensure model is pulled
Rate limit errors
1. Reduce request frequency
2. Upgrade provider plan
3. Use multiple provider keys
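A common complement to those steps is retrying with exponential backoff; this is a generic sketch, not Kuse Cowork's built-in behavior (RuntimeError stands in for a rate-limit error):

```python
import time

# Retry a callable with exponential backoff on rate-limit errors.
def with_backoff(call, retries=3, base_delay=0.01):
    for attempt in range(retries):
        try:
            return call()
        except RuntimeError:  # stand-in for an HTTP 429 error
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```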
Next Steps
- Agent System - Learn how the agent uses models
- Tools - Understand tool execution
- Configuration - Detailed settings