Models & Providers
Supported AI providers and models for agents.
RightPlace agents support multiple AI providers. You bring your own API keys — configure them in Settings > AI.
Supported Providers
| Provider | Example Models | Endpoint |
|---|---|---|
| Anthropic | claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5 | https://api.anthropic.com |
| OpenAI | gpt-4o, gpt-4-turbo, o1, o3-mini | https://api.openai.com |
| Google | gemini-2.0-flash, gemini-pro | https://generativelanguage.googleapis.com |
| Mistral | mistral-large-latest, codestral-latest | https://api.mistral.ai |
| xAI | grok-2, grok-3 | https://api.x.ai |
| DeepSeek | deepseek-chat, deepseek-reasoner | https://api.deepseek.com |
| Groq | llama-3.3-70b-versatile | https://api.groq.com/openai |
| Together | meta-llama/Llama-3-70b-chat-hf | https://api.together.xyz |
| Cohere | command-r-plus | https://api.cohere.ai |
| Perplexity | llama-3.1-sonar-large-128k-online | https://api.perplexity.ai |
Router APIs
Route requests through a proxy that selects models dynamically:
| Router | Description |
|---|---|
| OpenRouter | Access 100+ models through one API key |
| LiteLLM | Self-hosted proxy for model routing |
| Portkey | AI gateway with caching and fallbacks |
Router APIs use the OpenAI-compatible format, so any provider that supports the /v1/chat/completions endpoint works.
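As a sketch of what "OpenAI-compatible" means in practice (the helper function and the OpenRouter-style model id are illustrative assumptions, not part of RightPlace), the same request body works against a provider directly or through a router; only the base URL, API key, and model id change:

```python
def chat_request(base_url: str, model: str, prompt: str) -> tuple[str, dict]:
    """Build an OpenAI-compatible chat completions request.

    Returns the full URL and the JSON body; actually sending it
    (e.g. with urllib or requests) is left out so the sketch stays
    offline.
    """
    url = f"{base_url}/v1/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, body

# Direct provider vs. router: only the base URL and model id differ.
direct_url, direct_body = chat_request(
    "https://api.deepseek.com", "deepseek-chat", "Hi"
)
routed_url, routed_body = chat_request(
    "https://openrouter.ai/api", "deepseek/deepseek-chat", "Hi"
)
```

Because the shape of the request is identical, switching an agent from a direct provider to a router is a configuration change, not a code change.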
Setting Up an AI Resource
- Go to Settings > AI
- Click Add AI Resource
- Choose the provider type
- Enter your API key and endpoint

The new resource then appears in the agent model selector.
Model Selection in agent.json
```json
{
  "model": {
    "aiResourceId": "uuid-of-ai-resource",
    "modelId": "claude-sonnet-4-6"
  }
}
```
- `aiResourceId`: the UUID of the AI resource you configured in Settings
- `modelId`: the model identifier (provider-specific)
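A minimal sketch of reading these two fields out of an agent config (the `parse_model_config` helper and its validation rules are assumptions for illustration; field names come from the snippet above):

```python
import json

def parse_model_config(raw: str) -> tuple[str, str]:
    """Extract (aiResourceId, modelId) from an agent.json string,
    raising if either required field is missing."""
    model = json.loads(raw).get("model", {})
    for key in ("aiResourceId", "modelId"):
        if key not in model:
            raise ValueError(f"model.{key} is required")
    return model["aiResourceId"], model["modelId"]

resource_id, model_id = parse_model_config(
    '{"model": {"aiResourceId": "uuid-of-ai-resource",'
    ' "modelId": "claude-sonnet-4-6"}}'
)
```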
Local Models
Use the CLI provider type to run local models via Ollama, llama.cpp, or any command-line LLM:
- Add an AI resource with type Local CLI
- Set the command template (e.g., `ollama run llama3 "{prompt}"`)
- Select it in your agent config
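One way the `{prompt}` placeholder might be expanded into a runnable command (RightPlace's actual substitution rules aren't documented here, so the `expand_template` helper below is a hypothetical sketch):

```python
import shlex

def expand_template(template: str, prompt: str) -> list[str]:
    """Split the template into argv and substitute {prompt}.

    shlex.split keeps the quoted "{prompt}" placeholder as a single
    token, so prompts containing spaces stay one argument.
    """
    return [arg.replace("{prompt}", prompt) for arg in shlex.split(template)]

argv = expand_template('ollama run llama3 "{prompt}"', "Summarize this file")
# argv could then be passed to subprocess.run(argv, capture_output=True, text=True)
```

Substituting after splitting (rather than into the raw string) avoids the prompt itself being re-parsed as shell syntax.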
Note: Local CLI models don’t return token usage information.