The Secure AI Gateway provides a unified interface to multiple top-tier AI models. You can specify the model using the model parameter in your API requests.
| Model ID | Provider | Description |
|---|---|---|
| `openai/gpt-4o-mini` | OpenAI | Fast and cost-effective for simple tasks |
| `openai/gpt-4o` | OpenAI | Most capable OpenAI model |
| `openai/gpt-5.2` | OpenAI | The best model for coding and agentic tasks |
| `google/gemini-2.5-flash` | Google | Fastest and most cost-efficient multimodal model |
| `google/gemini-2.5-pro` | Google | Most capable Google model with advanced reasoning |
| `xai/grok-4-1-fast-reasoning` | xAI | Fast reasoning and problem solving |
| `deepseek/deepseek-reasoner` | DeepSeek | Advanced reasoning model |
| `deepseek/deepseek-chat` | DeepSeek | Fast and efficient chat model (V3) |
| `openai/o1` | OpenAI | Advanced reasoning and problem-solving |
| `minimax/MiniMax-M2.1` | MiniMax | Reasoning model with tool support |

## Model Selection

To use a specific model, pass its ID in the request body:

```json
{
  "model": "deepseek/deepseek-reasoner",
  "messages": [...]
}
```

## Capabilities Mapping

| Feature | Support |
|---|---|
| Text Generation | All models |
| Visual Input (Vision) | `openai/gpt-4o`, `openai/gpt-5.2`, `google/gemini-2.5-pro`, `xai/grok-4-1-fast-reasoning` |
| Web Search | All models (via Gateway tool) |
| Streaming | All models |
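Clients can gate features on this table before sending a request. A minimal sketch follows; the set and helper are illustrative, derived from the table above, and not an official Gateway API:

```python
# Models that accept visual input, per the capabilities table above.
VISION_MODELS = {
    "openai/gpt-4o",
    "openai/gpt-5.2",
    "google/gemini-2.5-pro",
    "xai/grok-4-1-fast-reasoning",
}

def supports_vision(model_id: str) -> bool:
    """Return True if the given Gateway model ID accepts image input."""
    return model_id in VISION_MODELS

print(supports_vision("openai/gpt-4o"))       # vision-capable
print(supports_vision("openai/gpt-4o-mini"))  # text-only per the table
```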

## API Endpoints (OpenAI Compatible)

The Secure AI Gateway follows the OpenAI API structure, so standard OpenAI-compatible clients can discover the available models dynamically.

### GET /api/v1/models

Retrieves a list of all currently supported models in the standard OpenAI response format.

Example request:

```bash
curl http://localhost:3000/api/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY_HERE"
```

### GET /api/v1/models/[model]

Retrieves information about a specific model. Model IDs containing slashes (e.g. `openai/gpt-4o`) are fully supported in the path.

Example request:

```bash
curl http://localhost:3000/api/v1/models/openai/gpt-4o \
  -H "Authorization: Bearer YOUR_API_KEY_HERE"
```