
Models & Providers

Supported AI providers and models for agents.


RightPlace agents support multiple AI providers. You bring your own API keys — configure them in Settings > AI.

Supported Providers

| Provider | Example Models | Endpoint |
| --- | --- | --- |
| Anthropic | claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5 | https://api.anthropic.com |
| OpenAI | gpt-4o, gpt-4-turbo, o1, o3-mini | https://api.openai.com |
| Google | gemini-2.0-flash, gemini-pro | https://generativelanguage.googleapis.com |
| Mistral | mistral-large-latest, codestral-latest | https://api.mistral.ai |
| xAI | grok-2, grok-3 | https://api.x.ai |
| DeepSeek | deepseek-chat, deepseek-reasoner | https://api.deepseek.com |
| Groq | llama-3.3-70b-versatile | https://api.groq.com/openai |
| Together | meta-llama/Llama-3-70b-chat-hf | https://api.together.xyz |
| Cohere | command-r-plus | https://api.cohere.ai |
| Perplexity | llama-3.1-sonar-large-128k-online | https://api.perplexity.ai |

Router APIs

Route requests through a proxy that selects models dynamically:

| Router | Description |
| --- | --- |
| OpenRouter | Access 100+ models through one API key |
| LiteLLM | Self-hosted proxy for model routing |
| Portkey | AI gateway with caching and fallbacks |

Router APIs use the OpenAI-compatible format, so any provider that supports the /v1/chat/completions endpoint works.
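Because routers share one request shape, a single request builder can target any of them by swapping the base URL. A minimal sketch using only Python's standard library — the base URL, API key, and model ID below are placeholders, not values RightPlace requires:

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible /v1/chat/completions request.

    The same request shape works against OpenRouter, LiteLLM, or Portkey;
    only base_url and the model identifier change.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url.rstrip('/')}/v1/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Placeholder values for illustration; send with urllib.request.urlopen(req)
req = build_chat_request("https://openrouter.ai/api", "sk-example", "openai/gpt-4o", "Hello")
```

The request is built but not sent here, so the sketch stays offline; the `Authorization: Bearer` header is the convention the OpenAI-compatible format uses for API keys.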

Setting Up an AI Resource

  1. Go to Settings > AI
  2. Click Add AI Resource
  3. Choose the provider type
  4. Enter your API key and endpoint
  5. The resource appears in the agent model selector

Model Selection in agent.json

```json
{
  "model": {
    "aiResourceId": "uuid-of-ai-resource",
    "modelId": "claude-sonnet-4-6"
  }
}
```
  • aiResourceId — the UUID of the AI resource you configured in Settings
  • modelId — the model identifier (provider-specific)
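The two fields split the configuration cleanly: the resource supplies the endpoint and credentials, the model ID names what to call on it. A hypothetical in-memory sketch of that lookup (the `resources` dict stands in for what you configure in Settings > AI; it is not a RightPlace API):

```python
def resolve_model(agent_config: dict, resources: dict) -> tuple[str, str]:
    """Resolve an agent.json model block to (endpoint, model_id).

    `resources` is a hypothetical stand-in for the AI resources configured
    in Settings > AI, keyed by their UUID.
    """
    model = agent_config["model"]
    resource = resources[model["aiResourceId"]]  # KeyError if the UUID is unknown
    return resource["endpoint"], model["modelId"]

# Example lookup with placeholder data
resources = {"uuid-of-ai-resource": {"endpoint": "https://api.anthropic.com"}}
agent_config = {"model": {"aiResourceId": "uuid-of-ai-resource", "modelId": "claude-sonnet-4-6"}}
endpoint, model_id = resolve_model(agent_config, resources)
```

Because `modelId` is provider-specific, changing it only makes sense together with a resource whose provider actually serves that model.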

Local Models

Use the CLI provider type to run local models via Ollama, llama.cpp, or any command-line LLM:

  1. Add an AI resource with type Local CLI
  2. Set the command template (e.g., `ollama run llama3 "{prompt}"`)
  3. Select it in your agent config

Note: Local CLI models don’t return token usage information.
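The command template works by textual substitution: `{prompt}` is replaced with the agent's prompt before the command runs. A minimal sketch of that substitution (the function name and the exact substitution rules are illustrative, not RightPlace internals):

```python
import shlex
import subprocess

def run_cli_model(template: str, prompt: str) -> str:
    """Run a local CLI model by substituting {prompt} into a command template.

    Hypothetical sketch: the template is split shell-style first, then
    {prompt} is replaced per-argument, so the prompt never goes through a
    shell and cannot inject extra arguments.
    """
    args = [part.replace("{prompt}", prompt) for part in shlex.split(template)]
    result = subprocess.run(args, capture_output=True, text=True, check=True)
    return result.stdout

# e.g. run_cli_model('ollama run llama3 "{prompt}"', "Summarise this file")
```

Substituting after `shlex.split` (rather than into the raw string) is a deliberate choice here: a prompt containing quotes or spaces stays a single argument.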