## Documentation Index

Fetch the complete documentation index at: https://docs.factory.ai/llms.txt

Use this file to discover all available pages before exploring further.
## Anthropic

| Model | Multiplier | Best for |
|---|---|---|
| Claude Opus 4.7 | 1× (2× after 4/30) | Newest flagship, discounted during launch window |
| Claude Opus 4.6 | 2× | Previous flagship, Max reasoning |
| Claude Opus 4.6 Fast | 12× | Opus 4.6 tuned for faster responses |
| Claude Sonnet 4.6 | 1.2× | Max reasoning at the Sonnet price point |
| Claude Opus 4.5 | 2× | Complex reasoning, architecture |
| Claude Sonnet 4.5 | 1.2× | Balanced quality/cost |
| Claude Haiku 4.5 | 0.4× | Quick edits, routine work |
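A multiplier scales how quickly a request consumes plan credits relative to the 1× baseline. A minimal sketch of the arithmetic, assuming multipliers apply linearly to base usage (the function name and the baseline-unit framing here are illustrative, not part of the Factory API):

```python
# Illustrative only: assumes a model's multiplier scales base usage linearly.
MULTIPLIERS = {
    "claude-opus-4.6": 2.0,
    "claude-sonnet-4.5": 1.2,
    "claude-haiku-4.5": 0.4,
}

def credits_used(model: str, base_units: float) -> float:
    """Return plan credits consumed for a request that would cost
    `base_units` at the 1x rate, scaled by the model's multiplier."""
    return base_units * MULTIPLIERS[model]

# The same request costs 5x more on Opus 4.6 than on Haiku 4.5 (2.0 / 0.4).
print(credits_used("claude-opus-4.6", 100))   # 200.0
print(credits_used("claude-haiku-4.5", 100))  # 40.0
```

The practical takeaway: route routine edits to low-multiplier models and reserve high-multiplier models for work that needs them.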
## OpenAI

| Model | Multiplier | Best for |
|---|---|---|
| GPT-5.5 | 2× | Latest GPT flagship with Extra High reasoning |
| GPT-5.5 Fast | 5× | GPT-5.5 on a priority service tier |
| GPT-5.5 Pro | 12× | Higher-capability GPT-5.5 variant for research-heavy tasks |
| GPT-5.4 | 1× | Large-context GPT model with Extra High reasoning |
| GPT-5.4 Fast | 2× | Faster GPT-5.4 responses |
| GPT-5.4 Mini | 0.3× | Cost-sensitive GPT work |
| GPT-5.3-Codex | 0.7× | Advanced coding with Extra High reasoning |
| GPT-5.3-Codex Fast | 1.4× | Faster GPT-5.3-Codex responses |
| GPT-5.2 | 0.7× | General GPT work with Extra High reasoning |
| GPT-5.2-Codex | 0.7× | Advanced coding with Extra High reasoning |
## Google

| Model | Multiplier | Best for |
|---|---|---|
| Gemini 3.1 Pro | 0.8× | Research, analysis with newer Gemini generation |
| Gemini 3 Pro | 0.8× | Research, analysis |
| Gemini 3 Flash | 0.2× | Fast, cheap for high-volume tasks |
## Droid Core (Open Models)

| Model | Multiplier | Best for |
|---|---|---|
| GLM-5.1 | 0.55× | Newer open-source GLM model when you want stronger quality in Droid Core |
| Kimi K2.6 | 0.4× | Cost-sensitive work, supports images, optional High reasoning |
| Kimi K2.5 | 0.25× | Cost-sensitive work, supports images |
| MiniMax M2.7 | 0.12× | Cheapest option with reasoning support |
## Custom models

Configure custom models through Bring Your Own Key (BYOK). Custom models use the `custom:<alias>` model ID format. Context windows, reasoning support, and multipliers depend on the configured provider and model.
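As a sketch, a BYOK custom model entry might look like the following. The file location, field names, and values shown are assumptions for illustration only; consult the BYOK documentation for the exact configuration schema:

```json
{
  "custom_models": [
    {
      "model_display_name": "my-llama",
      "model": "llama-3.3-70b",
      "base_url": "https://api.example.com/v1",
      "api_key": "sk-...",
      "provider": "generic-chat-completion-api",
      "max_tokens": 8192
    }
  ]
}
```

Under this assumed schema, the alias given as the display name is what you would reference through the `custom:<alias>` model ID format described above.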