Modeldex

© 2026 Modeldex — the AI model registry.


Models

244 models tracked across 12 providers.

Intelligence vs Blended price

Bubble size = context window. Up and to the left is better value.

Showing 14 of 244 models (only those with an Intelligence Index and the selected axis).
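The note above describes the chart's filter: a model appears only when it has both an Intelligence Index and a value on the selected axis. A minimal sketch of that rule, using hypothetical records and field names rather than Modeldex's actual schema:

```python
# Sketch of the chart's filter: a model is plottable only when it has
# both an Intelligence Index and the selected axis (blended price here).
# Records and field names are hypothetical, not Modeldex's actual schema.

models = [
    {"name": "Model A", "intelligence": 75.7, "blended_price": 30.0},
    {"name": "Model B", "intelligence": 76.9, "blended_price": 2.0},
    {"name": "Model C", "intelligence": None, "blended_price": 0.5},  # no index: excluded
]

plottable = [
    m for m in models
    if m["intelligence"] is not None and m["blended_price"] is not None
]
print(f"Showing {len(plottable)} of {len(models)} models")  # Showing 2 of 3 models
```

With the real dataset, the same predicate would reduce 244 tracked models to the 14 that carry both fields.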

  • QwQ 32B

    Alibaba
    Active

    QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ, which is capable of thinking and reasoning, can achieve significantly enhanced performance in downstream tasks,...

    Budget
    Context: 131K · Modalities: text
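Context windows throughout the listing use shorthand like 131K, 262K, or 1M. A small helper to normalize these into approximate token counts; the suffix meanings are an assumption based on common usage, not a documented Modeldex convention:

```python
def parse_context(s: str) -> int:
    """Convert shorthand like '131K', '1.0M', or '262K' to a token count."""
    s = s.strip().upper()
    if s.endswith("M"):
        return int(float(s[:-1]) * 1_000_000)  # millions of tokens
    if s.endswith("K"):
        return int(float(s[:-1]) * 1_000)      # thousands of tokens
    return int(s)

assert parse_context("131K") == 131_000
assert parse_context("1.0M") == 1_000_000
assert parse_context("262K") == 262_000
```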
  • Qwen 3 14B

    Alibaba
    Active

    14B compact Qwen 3 model for efficient local deployment.

    Agentic · Open source
    Context: 128K · Modalities: text
  • Qwen 3 235B

    Alibaba
    Active

    Alibaba's frontier open-weight MoE model with hybrid thinking.

    Agentic · Open source
    Context: 128K · Modalities: text
  • Qwen 3 32B

    Alibaba
    Active

    32B Qwen 3 model offering strong reasoning at mid-size cost.

    Agentic · Open source
    Context: 128K · Modalities: text
  • Qwen 3 72B

    Alibaba
    Active

    72B dense open-weight model with hybrid thinking from Alibaba.

    Agentic · Open source
    Context: 128K · Modalities: text
  • Qwen Plus 0728

    Alibaba
    Active

    Qwen Plus 0728, based on the Qwen3 foundation model, is a 1 million context hybrid reasoning model with a balanced performance, speed, and cost combination.

    Long context · Budget
    Context: 1M · Modalities: text
  • Qwen Plus 0728 (thinking)

    Alibaba
    Active

    Qwen Plus 0728, based on the Qwen3 foundation model, is a 1 million context hybrid reasoning model with a balanced performance, speed, and cost combination.

    Long context · Budget
    Context: 1M · Modalities: text
  • Qwen VL Max

    Alibaba
    Active

    Qwen VL Max is a visual understanding model with 7500 tokens context length. It excels in delivering optimal performance for a broader spectrum of complex tasks.

    Vision
    Context: 131K · Modalities: text, image
  • Qwen VL Plus

    Alibaba
    Active

    Qwen's Enhanced Large Visual Language Model. Significantly upgraded for detailed recognition capabilities and text recognition abilities, supporting ultra-high pixel resolutions up to millions of pixels and extreme aspect ratios for...

    Vision · Budget
    Context: 131K · Modalities: text, image
  • Qwen-Max

    Alibaba
    Active

    Qwen-Max, based on Qwen2.5, provides the best inference performance among [Qwen models](/qwen), especially for complex multi-step tasks. It's a large-scale MoE model that has been pretrained on over 20 trillion...

    Context: 33K · Modalities: text
  • Qwen-Plus

    Alibaba
    Active

    Qwen-Plus, based on the Qwen2.5 foundation model, is a 131K context model with a balanced performance, speed, and cost combination.

    Long context · Budget
    Context: 131K · Modalities: text
  • Qwen-Turbo

    Alibaba
    Active

    Qwen-Turbo, based on Qwen2.5, is a 1M context model that provides fast speed and low cost, suitable for simple tasks.

    Budget
    Context: 1M · Modalities: text
  • Qwen2.5 72B Instruct

    Alibaba
    Active

    Qwen2.5 72B belongs to Qwen2.5, the latest series of Qwen large language models. Qwen2.5 brings the following improvements upon Qwen2: significantly more knowledge and greatly improved capabilities in coding and...

    Budget
    Context: 33K · Modalities: text
  • Qwen2.5 7B Instruct

    Alibaba
    Active

    Qwen2.5 7B belongs to Qwen2.5, the latest series of Qwen large language models. Qwen2.5 brings the following improvements upon Qwen2: significantly more knowledge and greatly improved capabilities in coding and...

    Budget
    Context: 33K · Modalities: text
  • Qwen2.5 Coder 32B Instruct

    Alibaba
    Active

    Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). Qwen2.5-Coder brings the following improvements upon CodeQwen1.5: significant improvements in **code generation**, **code reasoning**...

    Context: 33K · Modalities: text
  • Qwen2.5 VL 32B Instruct

    Alibaba
    Active

    Qwen2.5-VL-32B is a multimodal vision-language model fine-tuned through reinforcement learning for enhanced mathematical reasoning, structured outputs, and visual problem-solving capabilities. It excels at visual analysis tasks, including object recognition, textual...

    Vision · Budget
    Context: 128K · Modalities: text, image
  • Qwen2.5 VL 72B Instruct

    Alibaba
    Active

    Qwen2.5-VL is proficient in recognizing common objects such as flowers, birds, fish, and insects. It is also highly capable of analyzing texts, charts, icons, graphics, and layouts within images.

    Vision · Budget
    Context: 32K · Modalities: text, image
  • Qwen3 14B

    Alibaba
    Active

    Qwen3-14B is a dense 14.8B parameter causal language model from the Qwen3 series, designed for both complex reasoning and efficient dialogue. It supports seamless switching between a "thinking" mode for...

    Budget
    Context: 41K · Modalities: text
  • Qwen3 235B A22B

    Alibaba
    Active

    Qwen3-235B-A22B is a 235B parameter mixture-of-experts (MoE) model developed by Qwen, activating 22B parameters per forward pass. It supports seamless switching between a "thinking" mode for complex reasoning, math, and...

    Context: 131K · Modalities: text
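Entries like this one quote both total and active parameters (235B total, 22B activated per forward pass). A back-of-the-envelope calculation of what that sparsity means for per-token compute, as a rough approximation that ignores attention and routing overhead:

```python
# MoE sparsity: per-token FLOPs scale roughly with *active* parameters,
# so a 235B-total / 22B-active model does under a tenth of the per-token
# work of an equally sized dense model (rough approximation; ignores
# attention and router cost).

total_params = 235e9
active_params = 22e9

active_fraction = active_params / total_params
print(f"Active fraction: {active_fraction:.1%}")  # Active fraction: 9.4%
```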
  • Qwen3 235B A22B Instruct 2507

    Alibaba
    Active

    Qwen3-235B-A22B-Instruct-2507 is a multilingual, instruction-tuned mixture-of-experts language model based on the Qwen3-235B architecture, with 22B active parameters per forward pass. It is optimized for general-purpose text generation, including instruction following,...

    Long context · Budget
    Context: 262K · Modalities: text
  • Qwen3 235B A22B Thinking 2507

    Alibaba
    Active

    Qwen3-235B-A22B-Thinking-2507 is a high-performance, open-weight Mixture-of-Experts (MoE) language model optimized for complex reasoning tasks. It activates 22B of its 235B parameters per forward pass and natively supports up to 262,144...

    Long context · Budget
    Context: 262K · Modalities: text
  • Qwen3 30B A3B

    Alibaba
    Active

    Qwen3, the latest generation in the Qwen large language model series, features both dense and mixture-of-experts (MoE) architectures to excel in reasoning, multilingual support, and advanced agent tasks. Its unique...

    Budget
    Context: 41K · Modalities: text
  • Qwen3 30B A3B Instruct 2507

    Alibaba
    Active

    Qwen3-30B-A3B-Instruct-2507 is a 30.5B-parameter mixture-of-experts language model from Qwen, with 3.3B active parameters per inference. It operates in non-thinking mode and is designed for high-quality instruction following, multilingual understanding, and...

    Long context · Budget
    Context: 262K · Modalities: text
  • Qwen3 30B A3B Thinking 2507

    Alibaba
    Active

    Qwen3-30B-A3B-Thinking-2507 is a 30B parameter Mixture-of-Experts reasoning model optimized for complex tasks requiring extended multi-step thinking. The model is designed specifically for “thinking mode,” where internal reasoning traces are separated...

    Budget
    Context: 131K · Modalities: text
  • Qwen3 32B

    Alibaba
    Active

    Qwen3-32B is a dense 32.8B parameter causal language model from the Qwen3 series, optimized for both complex reasoning and efficient dialogue. It supports seamless switching between a "thinking" mode for...

    Budget
    Context: 41K · Modalities: text
  • Qwen3 8B

    Alibaba
    Active

    Qwen3-8B is a dense 8.2B parameter causal language model from the Qwen3 series, designed for both reasoning-heavy tasks and efficient dialogue. It supports seamless switching between "thinking" mode for math,...

    Budget
    Context: 41K · Modalities: text
  • Qwen3 Coder 30B A3B Instruct

    Alibaba
    Active

    Qwen3-Coder-30B-A3B-Instruct is a 30.5B parameter Mixture-of-Experts (MoE) model with 128 experts (8 active per forward pass), designed for advanced code generation, repository-scale understanding, and agentic tool use. Built on the...

    Budget
    Context: 160K · Modalities: text
  • Qwen3 Coder 480B A35B

    Alibaba
    Active

    Qwen3-Coder-480B-A35B-Instruct is a Mixture-of-Experts (MoE) code generation model developed by the Qwen team. It is optimized for agentic coding tasks such as function calling, tool use, and long-context reasoning over...

    Long context
    Context: 262K · Modalities: text
  • Qwen3 Coder Flash

    Alibaba
    Active

    Qwen3 Coder Flash is Alibaba's fast, cost-efficient version of its proprietary Qwen3 Coder Plus. It is a powerful coding agent model specializing in autonomous programming via tool calling...

    Long context · Budget
    Context: 1M · Modalities: text
  • Qwen3 Coder Next

    Alibaba
    Active

    Qwen3-Coder-Next is an open-weight causal language model optimized for coding agents and local development workflows. It uses a sparse MoE design with 80B total parameters and only 3B activated per...

    Long context · Budget
    Context: 262K · Modalities: text
  • Qwen3 Coder Plus

    Alibaba
    Active

    Qwen3 Coder Plus is Alibaba's proprietary version of the open-source Qwen3 Coder 480B A35B. It is a powerful coding agent model specializing in autonomous programming via tool calling and...

    Long context
    Context: 1M · Modalities: text
  • Qwen3 Max

    Alibaba
    Active

    Qwen3-Max is an updated release built on the Qwen3 series, offering major improvements in reasoning, instruction following, multilingual support, and long-tail knowledge coverage compared to the January 2025 version. It...

    Long context
    Context: 262K · Modalities: text
  • Qwen3 Max Thinking

    Alibaba
    Active

    Qwen3-Max-Thinking is the flagship reasoning model in the Qwen3 series, designed for high-stakes cognitive tasks that require deep, multi-step reasoning. By significantly scaling model capacity and reinforcement learning compute, it...

    Long context
    Context: 262K · Modalities: text
  • Qwen3 Next 80B A3B Instruct

    Alibaba
    Active

    Qwen3-Next-80B-A3B-Instruct is an instruction-tuned chat model in the Qwen3-Next series optimized for fast, stable responses without “thinking” traces. It targets complex tasks across reasoning, code generation, knowledge QA, and multilingual...

    Long context
    Context: 262K · Modalities: text
  • Qwen3 Next 80B A3B Thinking

    Alibaba
    Active

    Qwen3-Next-80B-A3B-Thinking is a reasoning-first chat model in the Qwen3-Next line that outputs structured “thinking” traces by default. It’s designed for hard multi-step problems; math proofs, code synthesis/debugging, logic, and agentic...

    Budget
    Context: 131K · Modalities: text
  • Qwen3 VL 235B A22B Instruct

    Alibaba
    Active

    Qwen3-VL-235B-A22B Instruct is an open-weight multimodal model that unifies strong text generation with visual understanding across images and video. The Instruct model targets general vision-language use (VQA, document parsing, chart/table...

    Vision · Long context · Budget
    Context: 262K · Modalities: text, image
  • Qwen3 VL 235B A22B Thinking

    Alibaba
    Active

    Qwen3-VL-235B-A22B Thinking is a multimodal model that unifies strong text generation with visual understanding across images and video. The Thinking model is optimized for multimodal reasoning in STEM and math....

    Vision
    Context: 131K · Modalities: text, image
  • Qwen3 VL 30B A3B Instruct

    Alibaba
    Active

    Qwen3-VL-30B-A3B-Instruct is a multimodal model that unifies strong text generation with visual understanding for images and videos. Its Instruct variant optimizes instruction-following for general multimodal tasks. It excels in perception...

    Vision · Budget
    Context: 131K · Modalities: text, image
  • Qwen3 VL 30B A3B Thinking

    Alibaba
    Active

    Qwen3-VL-30B-A3B-Thinking is a multimodal model that unifies strong text generation with visual understanding for images and videos. Its Thinking variant enhances reasoning in STEM, math, and complex tasks. It excels...

    Vision
    Context: 131K · Modalities: text, image
  • Qwen3 VL 32B Instruct

    Alibaba
    Active

    Qwen3-VL-32B-Instruct is a large-scale multimodal vision-language model designed for high-precision understanding and reasoning across text, images, and video. With 32 billion parameters, it combines deep visual perception with advanced text...

    Vision · Budget
    Context: 131K · Modalities: text, image
  • Qwen3 VL 8B Instruct

    Alibaba
    Active

    Qwen3-VL-8B-Instruct is a multimodal vision-language model from the Qwen3-VL series, built for high-fidelity understanding and reasoning across text, images, and video. It features improved multimodal fusion with Interleaved-MRoPE for long-horizon...

    Vision · Budget
    Context: 131K · Modalities: image, text
  • Qwen3 VL 8B Thinking

    Alibaba
    Active

    Qwen3-VL-8B-Thinking is the reasoning-optimized variant of the Qwen3-VL-8B multimodal model, designed for advanced visual and textual reasoning across complex scenes, documents, and temporal sequences. It integrates enhanced multimodal alignment and...

    Vision
    Context: 131K · Modalities: image, text
  • Qwen3.5 397B A17B

    Alibaba
    Active

    The Qwen3.5 series 397B-A17B native vision-language model is built on a hybrid architecture that integrates a linear attention mechanism with a sparse mixture-of-experts model, achieving higher inference efficiency. It delivers...

    Vision · Long context
    Context: 262K · Modalities: text, image, video
  • Qwen3.5 Plus 2026-02-15

    Alibaba
    Active

    The Qwen3.5 native vision-language series Plus models are built on a hybrid architecture that integrates linear attention mechanisms with sparse mixture-of-experts models, achieving higher inference efficiency. In a variety of...

    Vision · Long context
    Context: 1M · Modalities: text, image, video
  • Qwen3.5-122B-A10B

    Alibaba
    Active

    The Qwen3.5 122B-A10B native vision-language model is built on a hybrid architecture that integrates a linear attention mechanism with a sparse mixture-of-experts model, achieving higher inference efficiency. In terms of...

    Vision · Long context
    Context: 262K · Modalities: text, image, video
  • Qwen3.5-27B

    Alibaba
    Active

    The Qwen3.5 27B native vision-language Dense model incorporates a linear attention mechanism, delivering fast response times while balancing inference speed and performance. Its overall capabilities are comparable to those of...

    53.6 (Basic)
    Vision · Long context
    Context: 262K · Modalities: text, image, video
  • Qwen3.5-35B-A3B

    Alibaba
    Active

    The Qwen3.5 Series 35B-A3B is a native vision-language model designed with a hybrid architecture that integrates linear attention mechanisms and a sparse mixture-of-experts model, achieving higher inference efficiency. Its overall...

    Vision · Long context
    Context: 262K · Modalities: text, image, video
  • Qwen3.5-9B

    Alibaba
    Active

    Qwen3.5-9B is a multimodal foundation model from the Qwen3.5 family, designed to deliver strong reasoning, coding, and visual understanding in an efficient 9B-parameter architecture. It uses a unified vision-language design...

    Vision · Long context · Budget
    Context: 262K · Modalities: text, image, video
  • Qwen3.5-Flash

    Alibaba
    Active

    The Qwen3.5 native vision-language Flash models are built on a hybrid architecture that integrates a linear attention mechanism with a sparse mixture-of-experts model, achieving higher inference efficiency. Compared to the...

    Vision · Long context · Budget
    Context: 1M · Modalities: text, image, video
  • Qwen3.6 Plus

    Alibaba
    Active

    Qwen 3.6 Plus builds on a hybrid architecture that combines efficient linear attention with sparse mixture-of-experts routing, enabling strong scalability and high-performance inference. Compared to the 3.5 series, it delivers...

    Vision · Long context
    Context: 1M · Modalities: text, image, video
  • Tongyi DeepResearch 30B A3B

    Alibaba
    Active

    Tongyi DeepResearch is an agentic large language model developed by Tongyi Lab, with 30 billion total parameters activating only 3 billion per token. It's optimized for long-horizon, deep information-seeking tasks...

    Budget
    Context: 131K · Modalities: text
  • Amazon Nova 2 Lite

    Amazon (AWS)
    Active

    Cost-efficient Nova 2 model with 1M context.

    Vision · Agentic · Long context
    Context: 1M · Modalities: text, image
  • Amazon Nova 2 Pro

    Amazon (AWS)
    Active

    Next-generation Nova flagship with 1M context from Amazon Bedrock.

    Vision · Agentic · Long context
    Context: 1M · Modalities: text, image, video
  • Amazon Nova Lite

    Amazon (AWS)
    Active

    Low-cost multimodal model from Amazon for high-throughput workloads.

    Vision · Agentic · Long context · Budget
    Context: 300K · Modalities: text, image, video
  • Amazon Nova Micro

    Amazon (AWS)
    Active

    Ultra-low-cost text-only model from Amazon.

    Budget
    Context: 128K · Modalities: text
  • Amazon Nova Pro

    Amazon (AWS)
    Active

    Amazon's most capable multimodal model, available through Amazon Bedrock.

    Vision · Agentic · Long context
    Context: 300K · Modalities: text, image, video
  • Nova 2 Lite

    Amazon (AWS)
    Active

    Nova 2 Lite is a fast, cost-effective reasoning model for everyday workloads that can process text, images, and videos to generate text. Nova 2 Lite demonstrates standout capabilities in processing...

    Vision · Long context
    Context: 1M · Modalities: text, image, video, file
  • Nova Lite 1.0

    Amazon (AWS)
    Active

    Amazon Nova Lite 1.0 is a very low-cost multimodal model from Amazon, focused on fast processing of image, video, and text inputs to generate text output. Amazon Nova Lite...

    Vision · Long context · Budget
    Context: 300K · Modalities: text, image
  • Nova Micro 1.0

    Amazon (AWS)
    Active

    Amazon Nova Micro 1.0 is a text-only model that delivers the lowest latency responses in the Amazon Nova family of models at a very low cost. With a context length...

    Budget
    Context: 128K · Modalities: text
  • Nova Premier 1.0

    Amazon (AWS)
    Active

    Amazon Nova Premier is the most capable of Amazon’s multimodal models for complex reasoning tasks and for use as the best teacher for distilling custom models.

    Vision · Long context
    Context: 1M · Modalities: text, image
  • Nova Pro 1.0

    Amazon (AWS)
    Active

    Amazon Nova Pro 1.0 is a capable multimodal model from Amazon focused on providing a combination of accuracy, speed, and cost for a wide range of tasks. As of December...

    Vision · Long context
    Context: 300K · Modalities: text, image
  • Claude 3 Haiku

    Anthropic
    Active

    Claude 3 Haiku is Anthropic's fastest and most compact model for near-instant responsiveness, with quick and accurate targeted performance. See the launch announcement and benchmark results [here](https://www.anthropic.com/news/claude-3-haiku).

    Vision · Long context
    Context: 200K · Modalities: text, image
  • Claude 3 Opus

    Anthropic
    Deprecated

    Previous Anthropic flagship, now superseded by Claude Opus 4.

    Vision · Agentic · Long context
    Context: 200K · Modalities: text, image
  • Claude 3.5 Haiku

    Anthropic
    Active

    Fast, low-cost model with stronger capabilities than Claude 3 Haiku.

    Vision · Agentic · Long context
    Context: 200K · Modalities: text, image
  • Claude 3.5 Sonnet

    Anthropic
    Active

    Mid-2024 release setting a new standard for coding and reasoning at mid-tier price.

    Vision · Agentic · Long context
    Context: 200K · Modalities: text, image
  • Claude 3.7 Sonnet

    Anthropic
    Active

    Anthropic Claude 3.7 Sonnet.

    Vision · Agentic · Long context · Code
    Context: 200K · Modalities: text, image
  • Claude 3.7 Sonnet (thinking)

    Anthropic
    Active

    Claude 3.7 Sonnet is an advanced large language model with improved reasoning, coding, and problem-solving capabilities. It introduces a hybrid reasoning approach, allowing users to choose between rapid responses and...

    Vision · Long context
    Context: 200K · Modalities: text, image, file
  • Claude Haiku 4.5

    Anthropic
    Active

    Fast, low-cost Claude model for latency-sensitive workloads.

    31.1 (Limited)
    Vision · Agentic · Long context
    Context: 200K · Modalities: text, image
  • Claude Opus 4

    Anthropic
    Active

    Anthropic's most capable model for complex reasoning and long-context work.

    75.7 (Strong)
    Vision · Agentic · Long context · Reasoning · Code
    Context: 200K · Modalities: text, image
  • Claude Opus 4.1

    Anthropic
    Active

    Most capable Claude Opus model.

    Vision · Agentic
    Modalities: text, image
  • Claude Opus 4.5

    Anthropic
    Active

    Anthropic Claude Opus 4.5.

    Vision · Agentic · Long context · Code
    Context: 200K · Modalities: text, image
  • Claude Opus 4.6

    Anthropic
    Active

    Anthropic Claude Opus 4.6.

    Vision · Agentic
    Modalities: text, image
  • Claude Opus 4.6 (Fast)

    Anthropic
    Active

    Fast-mode variant of [Opus 4.6](/anthropic/claude-opus-4.6) - identical capabilities with higher output speed at premium 6x pricing. Learn more in Anthropic's docs: https://platform.claude.com/docs/en/build-with-claude/fast-mode

    Vision · Long context
    Context: 1M · Modalities: text, image
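The fast-mode entry above quotes identical capabilities at a 6x price premium. A sketch of that trade-off; the base rate and usage figures below are hypothetical placeholders — only the 6x multiplier comes from the listing:

```python
# Hypothetical cost comparison for a fast-mode variant at a 6x premium.
base_rate = 15.0        # hypothetical $ per million output tokens
fast_multiplier = 6     # premium quoted in the listing
usage_mtok = 2.5        # hypothetical monthly output, in millions of tokens

standard_cost = base_rate * usage_mtok
fast_cost = standard_cost * fast_multiplier
print(f"standard: ${standard_cost:.2f}, fast: ${fast_cost:.2f}")
# standard: $37.50, fast: $225.00
```

The multiplier applies uniformly, so fast mode only pays off when the extra output speed is worth six times the spend.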
  • Claude Opus 4.7

    Anthropic
    Active

    Anthropic Claude Opus 4.7.

    57.2 (Competent)
    Vision · Agentic · Long context
    Context: 1M · Modalities: text, image
  • Claude Sonnet 4

    Anthropic
    Active

    Balanced mid-tier Claude model with strong general capability and price.

    66.2 (Competent)
    Vision · Agentic · Long context
    Context: 200K · Modalities: text, image
  • Claude Sonnet 4.5

    Anthropic
    Active

    Balanced performance and speed.

    Vision · Agentic · Long context · Code
    Context: 200K · Modalities: text, image
  • Claude Sonnet 4.6

    Anthropic
    Active

    Anthropic Claude Sonnet 4.6.

    44.8 (Basic)
    Vision · Agentic · Long context · Code
    Context: 1M · Modalities: text, image
  • Command A

    Cohere
    Active

    Cohere's most capable model with 256K context, optimized for enterprise agentic tasks.

    Agentic · Long context
    Context: 256K · Modalities: text
  • Command R

    Cohere
    Active

    Efficient mid-size model from Cohere for RAG and agentic tasks.

    Agentic · Budget
    Context: 128K · Modalities: text
  • Command R (08-2024)

    Cohere
    Active

    command-r-08-2024 is an update of the [Command R](/models/cohere/command-r) with improved performance for multilingual retrieval-augmented generation (RAG) and tool use. More broadly, it is better at math, code and reasoning and...

    Agentic · Budget
    Context: 128K · Modalities: text
  • Command R+

    Cohere
    Active

    Cohere's flagship model optimized for enterprise RAG and complex tasks.

    Agentic
    Context: 128K · Modalities: text
  • Command R+ (08-2024)

    Cohere
    Active

    command-r-plus-08-2024 is an update of the [Command R+](/models/cohere/command-r-plus) with roughly 50% higher throughput and 25% lower latencies as compared to the previous Command R+ version, while keeping the hardware footprint...

    Agentic
    Context: 128K · Modalities: text
  • Command R7B (12-2024)

    Cohere
    Active

    Command R7B (12-2024) is a small, fast update of the Command R+ model, delivered in December 2024. It excels at RAG, tool use, agents, and similar tasks requiring complex reasoning...

    Agentic · Budget
    Context: 128K · Modalities: text
  • DeepSeek R1

    DeepSeek
    Active

    Open-weight reasoning model matching o1 performance, fully open-source.

    76.9 (Strong)
    Math · Open source · Reasoning
    Context: 128K · Modalities: text
  • DeepSeek V3

    DeepSeek
    Active

    DeepSeek-V3 is the latest model from the DeepSeek team, building upon the instruction following and coding abilities of the previous versions. Pre-trained on nearly 15 trillion tokens, the reported evaluations...

    Agentic · Budget
    Context: 131K · Modalities: text
  • DeepSeek V3

    DeepSeek
    Active

    Open-weight frontier model competitive with GPT-4o and Claude Sonnet at fraction of training cost.

    84.0 (Frontier)
    Math · Agentic · Frontier · Open source · Code
    Context: 128K · Modalities: text
  • DeepSeek V3 (2506)

    DeepSeek
    Active

    Latest version of DeepSeek V3.

    Agentic · Open source
    Context: 131K · Modalities: text
  • DeepSeek V3 0324

    DeepSeek
    Active

    DeepSeek V3, a 685B-parameter, mixture-of-experts model, is the latest iteration of the flagship chat model family from the DeepSeek team. It succeeds the [DeepSeek V3](/deepseek/deepseek-chat-v3) model and performs really well...

    Budget
    Context: 164K · Modalities: text
  • DeepSeek V3.1

    DeepSeek
    Active

    DeepSeek-V3.1 is a large hybrid reasoning model (671B parameters, 37B active) that supports both thinking and non-thinking modes via prompt templates. It extends the DeepSeek-V3 base with a two-phase long-context...

    Budget
    Context: 33K · Modalities: text
  • DeepSeek V3.1 Terminus

    DeepSeek
    Active

    DeepSeek-V3.1 Terminus is an update to [DeepSeek V3.1](/deepseek/deepseek-chat-v3.1) that maintains the model's original capabilities while addressing issues reported by users, including language consistency and agent capabilities, further optimizing the model's...

    Budget
    Context: 164K · Modalities: text
  • DeepSeek V3.2 Exp

    DeepSeek
    Active

    DeepSeek-V3.2-Exp is an experimental large language model released by DeepSeek as an intermediate step between V3.1 and future architectures. It introduces DeepSeek Sparse Attention (DSA), a fine-grained sparse attention mechanism...

    Budget
    Context: 164K · Modalities: text
  • DeepSeek V3.2 Speciale

    DeepSeek
    Active

    DeepSeek-V3.2-Speciale is a high-compute variant of DeepSeek-V3.2 optimized for maximum reasoning and agentic performance. It builds on DeepSeek Sparse Attention (DSA) for efficient long-context processing, then scales post-training reinforcement learning...

    Context: 164K · Modalities: text
  • R1 0528

    DeepSeek
    Active

    May 28th update to the [original DeepSeek R1](/deepseek/deepseek-r1). Performance is on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It's 671B parameters in size, with 37B active...

    Context: 164K · Modalities: text
  • R1 Distill Llama 70B

    DeepSeek
    Active

    DeepSeek R1 Distill Llama 70B is a distilled large language model based on [Llama-3.3-70B-Instruct](/meta-llama/llama-3.3-70b-instruct), using outputs from [DeepSeek R1](/deepseek/deepseek-r1). The model combines advanced distillation techniques to achieve high performance across...

    Budget
    Context: 131K · Modalities: text
  • R1 Distill Qwen 32B

    DeepSeek
    Active

    DeepSeek R1 Distill Qwen 32B is a distilled large language model based on [Qwen 2.5 32B](https://huggingface.co/Qwen/Qwen2.5-32B), using outputs from [DeepSeek R1](/deepseek/deepseek-r1). It outperforms OpenAI's o1-mini across various benchmarks, achieving new...

    Budget
    Context: 33K · Modalities: text
  • Gemini 1.5 Flash

    Google
    Deprecated

    Fast multimodal model from the Gemini 1.5 generation.

    Vision · Agentic · Long context · Budget
    Context: 1M · Modalities: text, image, audio, video
  • Gemini 1.5 Pro

    Google
    Deprecated

    Previous Google flagship with 1M context window, superseded by Gemini 2.

    Vision · Agentic · Long context
    Context: 1M · Modalities: text, image, audio, video
  • Gemini 2 Flash

    Google
    Active

    Low-latency, low-cost multimodal model with 1M context.

    Vision · Agentic · Long context · Budget
    Context: 1M · Modalities: text, image, audio, video
  • Gemini 2 Pro

    Google
    Active

    Google's flagship multimodal model with very long context.

    82.8 (Frontier)
    Vision · Math · Agentic · Long context · Frontier · Code
    Context: 2M · Modalities: text, image, audio, video
  • Gemini 2.0 Flash

    Google
    Active

    Gemini Flash 2.0 offers a significantly faster time to first token (TTFT) compared to [Gemini Flash 1.5](/google/gemini-flash-1.5), while maintaining quality on par with larger models like [Gemini Pro 1.5](/google/gemini-pro-1.5). It...

    Vision · Long context · Budget
    Context: 1M · Modalities: text, image, file, audio, video
  • Gemini 2.0 Flash Lite

    Google
    Active

    Gemini 2.0 Flash Lite offers a significantly faster time to first token (TTFT) compared to [Gemini Flash 1.5](/google/gemini-flash-1.5), while maintaining quality on par with larger models like [Gemini Pro 1.5](/google/gemini-pro-1.5),...

    Vision · Long context · Budget
    Context: 1M · Modalities: text, image, file, audio, video
  • Gemini 2.5 Flash

    Google
    Active

    Fast and efficient multimodal model.

    Vision · Agentic · Long context
    Context: 1M · Modalities: text, image, audio, video
  • Gemini 2.5 Flash Lite

    Google
    Active

    Ultra-fast lightweight variant.

    Vision · Agentic · Long context
    Context: 1M · Modalities: text, image
  • Gemini 2.5 Pro

    Google
    Active

    Google Gemini 2.5 Pro — state-of-the-art thinking model.

    Vision · Agentic · Long context
    Context: 1M · Modalities: text, image, audio, video
  • Gemma 2 27B

    Google
    Active

    Gemma 2 27B by Google is an open model built from the same research and technology used to create the [Gemini models](/models?q=gemini). Gemma models are well-suited for a variety of...

    Budget
    Context: 8K · Modalities: text
  • Gemma 2 27B

    Google
    Active

    Open-weights 27B model from Google with state-of-the-art performance at its size.

    Open source
    Context: 8K · Modalities: text
  • Gemma 2 9B

    Google
    Active

    Open-weights 9B model from Google, competitive with much larger models.

    Open source
    Context: 8K · Modalities: text
  • Gemma 3 12B

    Google
    Active

    Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, and chat capabilities,...

    Vision · Budget
    Context: 131K · Modalities: text, image
  • Gemma 3 27B

    Google
    Active

    Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, and chat capabilities,...

    Vision · Budget
    Context: 131K · Modalities: text, image
  • Gemma 3 4B

    Google
    Active

    Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, and chat capabilities,...

    Vision · Budget
    Context: 131K · Modalities: text, image
  • Gemma 3n 4B

    Google
    Active

    Gemma 3n E4B-it is optimized for efficient execution on mobile and low-resource devices, such as phones, laptops, and tablets. It supports multimodal inputs—including text, visual data, and audio—enabling diverse tasks...

    Budget
    Context: 33K · Modalities: text
  • Gemma 4 26B A4B

    Google
    Active

    Gemma 4 26B A4B IT is an instruction-tuned Mixture-of-Experts (MoE) model from Google DeepMind. Despite 25.2B total parameters, only 3.8B activate per token during inference — delivering near-31B quality at...

    Vision · Long context · Budget
    Context: 262K · Modalities: image, text, video
  • Gemma 4 31B

    Google
    Active

    Gemma 4 31B Instruct is Google DeepMind's 30.7B dense multimodal model supporting text and image input with text output. Features a 256K token context window, configurable thinking/reasoning mode, native function...

45.1 · Basic
    VisionLong contextBudget
    Context: 262KModalities: image, text, video
  • Nano Banana (Gemini 2.5 Flash Image)

    Google
    Active

Gemini 2.5 Flash Image, a.k.a. "Nano Banana," is now generally available. It is a state-of-the-art image generation model with contextual understanding. It is capable of image generation,...

    Vision
    Context: 33KModalities: image, text
  • Llama 3 70B

    Meta
    Active

    Open-weights 70B model for high-quality general use.

    MathAgenticOpen source
    Context: 128KModalities: text
  • Llama 3 70B Instruct

    Meta
    Active

Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This 70B instruct-tuned version was optimized for high-quality dialogue use cases. It has demonstrated strong...

    Budget
    Context: 8KModalities: text
  • Llama 3 8B

    Meta
    Active

    Smaller open-weights Llama for on-device and cost-sensitive use.

    AgenticOpen source
    Context: 128KModalities: text
  • Llama 3 8B Instruct

    Meta
    Active

Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This 8B instruct-tuned version was optimized for high-quality dialogue use cases. It has demonstrated strong...

    Budget
    Context: 8KModalities: text
  • Llama 3.1 405B

    Meta
    Active

    Meta's largest open-weights model, competitive with frontier closed models.

    AgenticOpen sourceCode
    Context: 128KModalities: text
  • Llama 3.1 70B

    Meta
    Active

    Updated 70B open-weights model with 128k context and improved tool calling.

    AgenticOpen source
    Context: 128KModalities: text
  • Llama 3.1 70B Instruct

    Meta
    Active

Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This 70B instruct-tuned version is optimized for high-quality dialogue use cases. It has demonstrated strong...

    Budget
    Context: 131KModalities: text
  • Llama 3.1 8B Instruct

    Meta
    Active

    Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This 8B instruct-tuned version is fast and efficient. It has demonstrated strong performance compared to...

    FastBudget
    Context: 16KModalities: text
  • Llama 3.2 11B

    Meta
    Active

    Multimodal 11B model from Meta supporting text and image inputs.

    VisionAgenticOpen source
    Context: 128KModalities: text, image
  • Llama 3.2 1B Instruct

    Meta
    Active

    Llama 3.2 1B is a 1-billion-parameter language model focused on efficiently performing natural language tasks, such as summarization, dialogue, and multilingual text analysis. Its smaller size allows it to operate...

    Budget
    Context: 60KModalities: text
  • Llama 3.2 3B

    Meta
    Active

    Small on-device model for edge and mobile deployments.

    AgenticOpen source
    Context: 128KModalities: text
  • Llama 3.2 3B Instruct

    Meta
    Active

    Llama 3.2 3B is a 3-billion-parameter multilingual large language model, optimized for advanced natural language processing tasks like dialogue generation, reasoning, and summarization. Designed with the latest transformer architecture, it...

    Budget
    Context: 80KModalities: text
  • Llama 3.3 70B

    Meta
    Active

    Meta Llama 3.3 70B — improved instruction-following.

    AgenticOpen source
    Modalities: text
  • Llama 3.3 70B Instruct

    Meta
    Active

    The Meta Llama 3.3 multilingual large language model (LLM) is a pretrained and instruction tuned generative model in 70B (text in/text out). The Llama 3.3 instruction tuned text only model...

    FastBudget
    Context: 131KModalities: text
  • Llama 4 Maverick

    Meta
    Active

    High-performance multimodal model.

    VisionAgenticOpen source
    Modalities: text, image
  • Llama 4 Scout

    Meta
    Active

    Efficient multimodal model with 17B active parameters.

    VisionAgenticOpen source
    Modalities: text, image
  • Phi-3.5 Mini

    Microsoft
    Active

    3.8B instruction-following model targeting mobile and edge deployment.

    Open source
    Context: 128KModalities: text
  • Phi-4

    Microsoft
    Active

    14B small language model from Microsoft Research with state-of-the-art STEM reasoning.

86.5 · Frontier
    MathFrontierOpen source
    Context: 16KModalities: text
  • Phi-4 Mini

    Microsoft
    Active

    Compact yet capable small language model.

    Agentic
    Modalities: text
  • Phi-4 Reasoning

    Microsoft
    Active

    14B reasoning-specialized Phi model with extended thinking.

    Open source
    Context: 32KModalities: text
  • Phi-4 Reasoning Vision

    Microsoft
    Active

    15B multimodal reasoning model with image understanding.

    VisionOpen source
    Context: 32KModalities: text, image
  • WizardLM-2 8x22B

    Microsoft
    Active

WizardLM-2 8x22B is Microsoft AI's most advanced Wizard model. It demonstrates highly competitive performance compared to leading proprietary models, and it consistently outperforms all existing state-of-the-art open-source models. It is...

    Budget
    Context: 66KModalities: text
  • Codestral

    Mistral AI
    Active

    Mistral's code-specialized model with long context.

    Context: 32KModalities: text
  • Codestral 2508

    Mistral AI
    Active

    Specialized code generation model.

    AgenticLong context
    Context: 256KModalities: text
  • Devstral

    Mistral AI
    Active

    Agentic coding model for software development.

    AgenticLong context
    Context: 256KModalities: text
  • Devstral 2 2512

    Mistral AI
    Active

    Devstral 2 is a state-of-the-art open-source model by Mistral AI specializing in agentic coding. It is a 123B-parameter dense transformer model supporting a 256K context window. Devstral 2 supports exploring...

    Long context
    Context: 262KModalities: text
  • Devstral Medium

    Mistral AI
    Active

    Devstral Medium is a high-performance code generation and agentic reasoning model developed jointly by Mistral AI and All Hands AI. Positioned as a step up from Devstral Small, it achieves...

    Context: 131KModalities: text
  • Devstral Small 1.1

    Mistral AI
    Active

    Devstral Small 1.1 is a 24B parameter open-weight language model for software engineering agents, developed by Mistral AI in collaboration with All Hands AI. Finetuned from Mistral Small 3.1 and...

    Budget
    Context: 131KModalities: text
  • Ministral 3 14B 2512

    Mistral AI
    Active

    The largest model in the Ministral 3 family, Ministral 3 14B offers frontier capabilities and performance comparable to its larger Mistral Small 3.2 24B counterpart. A powerful and efficient language...

    VisionLong contextBudget
    Context: 262KModalities: text, image
  • Ministral 3 3B 2512

    Mistral AI
    Active

    The smallest model in the Ministral 3 family, Ministral 3 3B is a powerful, efficient tiny language model with vision capabilities.

    VisionBudget
    Context: 131KModalities: text, image
  • Ministral 3 8B 2512

    Mistral AI
    Active

    A balanced model in the Ministral 3 family, Ministral 3 8B is a powerful, efficient tiny language model with vision capabilities.

    VisionLong contextBudget
    Context: 262KModalities: text, image
  • Mistral 7B

    Mistral AI
    Active

    Compact open-weights model that outperforms Llama 2 13B on many benchmarks.

    Open source
    Context: 32KModalities: text
  • Mistral 7B Instruct v0.1

    Mistral AI
    Active

    A 7.3B parameter model that outperforms Llama 2 13B on all benchmarks, with optimizations for speed and context length.

    Budget
    Context: 3KModalities: text
  • Mistral Large

    Mistral AI
    Active

    Mistral's flagship commercial model with tool calling and structured outputs.

    VisionAgenticLong context
    Context: 262KModalities: text, image
  • Mistral Large 2407

    Mistral AI
    Active

    This is Mistral AI's flagship model, Mistral Large 2 (version mistral-large-2407). It's a proprietary weights-available model and excels at reasoning, code, JSON, chat, and more. Read the launch announcement [here](https://mistral.ai/news/mistral-large-2407/)....

    Context: 131KModalities: text
  • Mistral Large 2411

    Mistral AI
    Active

Mistral Large 2 2411 is an update of [Mistral Large 2](/mistralai/mistral-large) released together with [Pixtral Large 2411](/mistralai/pixtral-large-2411). It provides a significant upgrade on the previous [Mistral Large 24.07](/mistralai/mistral-large-2407), with notable...

    Context: 131KModalities: text
  • Mistral Large 3

    Mistral AI
    Active

    Top-tier reasoning and coding model.

    VisionAgenticLong context
    Context: 262KModalities: text, image
  • Mistral Large 3 2512

    Mistral AI
    Active

    Mistral Large 3 2512 is Mistral’s most capable model to date, featuring a sparse mixture-of-experts architecture with 41B active parameters (675B total), and released under the Apache 2.0 license.

    VisionLong context
    Context: 262KModalities: text, image
  • Mistral Medium 3

    Mistral AI
    Active

    Balanced performance and cost.

    Agentic
    Modalities: text
  • Mistral Medium 3.1

    Mistral AI
    Active

    Mistral Medium 3.1 is an updated version of Mistral Medium 3, which is a high-performance enterprise-grade language model designed to deliver frontier-level capabilities at significantly reduced operational cost. It balances...

    Vision
    Context: 131KModalities: text, image
  • Mistral NeMo

    Mistral AI
    Active

    12B open-weights model built with NVIDIA, with 128k context.

    AgenticOpen source
    Context: 128KModalities: text
  • Mistral Small

    Mistral AI
    Active

    Efficient open-weights mid-sized model from Mistral.

    AgenticOpen source
    Context: 32KModalities: text
  • Mistral Small 3

    Mistral AI
    Active

    Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it features both pre-trained and instruction-tuned versions designed...

    Budget
    Context: 33KModalities: text
  • Mistral Small 3.1 24B

    Mistral AI
    Active

    Mistral Small 3.1 24B Instruct is an upgraded variant of Mistral Small 3 (2501), featuring 24 billion parameters with advanced multimodal capabilities. It provides state-of-the-art performance in text-based reasoning and...

    VisionBudget
    Context: 128KModalities: text, image
  • Mistral Small 3.2

    Mistral AI
    Active

    Fast and affordable.

    Agentic
    Modalities: text
  • Mistral Small 3.2 24B

    Mistral AI
    Active

    Mistral-Small-3.2-24B-Instruct-2506 is an updated 24B parameter model from Mistral optimized for instruction following, repetition reduction, and improved function calling. Compared to the 3.1 release, version 3.2 significantly improves accuracy on...

    VisionBudget
    Context: 128KModalities: image, text
  • Mistral Small 4

    Mistral AI
    Active

    Mistral Small 4 is the next major release in the Mistral Small family, unifying the capabilities of several flagship Mistral models into a single system. It combines strong reasoning from...

    VisionLong contextBudget
    Context: 262KModalities: text, image
  • Mistral Small Creative

    Mistral AI
    Active

    Mistral Small Creative is an experimental small model designed for creative writing, narrative generation, roleplay and character-driven dialogue, general-purpose instruction following, and conversational agents.

    Budget
    Context: 33KModalities: text
  • Mixtral 8x22B Instruct

    Mistral AI
    Active

    Mistral's official instruct fine-tuned version of [Mixtral 8x22B](/models/mistralai/mixtral-8x22b). It uses 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. Its strengths include: - strong math, coding,...

    Context: 66KModalities: text
  • Mixtral 8x7B

    Mistral AI
    Active

    Open-weights mixture-of-experts model with GPT-3.5 class performance.

    AgenticOpen source
    Context: 32KModalities: text
  • Mixtral 8x7B Instruct

    Mistral AI
    Active

    Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture of Experts, by Mistral AI, for chat and instruction use. Incorporates 8 experts (feed-forward networks) for a total of 47 billion...

    Budget
    Context: 33KModalities: text
  • Pixtral Large 2411

    Mistral AI
    Active

    Pixtral Large is a 124B parameter, open-weight, multimodal model built on top of [Mistral Large 2](/mistralai/mistral-large-2411). The model is able to understand documents, charts and natural images. The model is...

    Vision
    Context: 131KModalities: text, image
  • Saba

    Mistral AI
    Active

    Mistral Saba is a 24B-parameter language model specifically designed for the Middle East and South Asia, delivering accurate and contextually relevant responses while maintaining efficient performance. Trained on curated regional...

    Budget
    Context: 33KModalities: text
  • Voxtral Small 24B 2507

    Mistral AI
    Active

    Voxtral Small is an enhancement of Mistral Small 3, incorporating state-of-the-art audio input capabilities while retaining best-in-class text performance. It excels at speech transcription, translation and audio understanding. Input audio...

    Budget
    Context: 32KModalities: text, audio
  • Llama 3.1 Nemotron 70B

    NVIDIA
    Active

    NVIDIA-tuned Llama 3.1 70B with state-of-the-art alignment and helpfulness.

    AgenticOpen source
    Context: 128KModalities: text
  • Llama 3.1 Nemotron 70B Instruct

    NVIDIA
    Active

    NVIDIA's Llama 3.1 Nemotron 70B is a language model designed for generating precise and useful responses. Leveraging [Llama 3.1 70B](/models/meta-llama/llama-3.1-70b-instruct) architecture and Reinforcement Learning from Human Feedback (RLHF), it excels...

    Context: 131KModalities: text
  • Llama 3.3 Nemotron Super 49B

    NVIDIA
    Active

    49B parameter efficient model with frontier reasoning capability from NVIDIA.

    AgenticOpen source
    Context: 128KModalities: text
  • Llama 3.3 Nemotron Super 49B V1.5

    NVIDIA
    Active

    Llama-3.3-Nemotron-Super-49B-v1.5 is a 49B-parameter, English-centric reasoning/chat model derived from Meta’s Llama-3.3-70B-Instruct with a 128K context. It’s post-trained for agentic workflows (RAG, tool calling) via SFT across math, code, science, and...

    Budget
    Context: 131KModalities: text
  • Nemotron 3 Nano 30B A3B

    NVIDIA
    Active

NVIDIA Nemotron 3 Nano 30B A3B is a small MoE language model offering the highest compute efficiency and accuracy for developers building specialized agentic AI systems. The model is fully...

    Long contextBudget
    Context: 262KModalities: text
  • Nemotron 3 Super

    NVIDIA
    Active

    NVIDIA Nemotron 3 Super is a 120B-parameter open hybrid MoE model, activating just 12B parameters for maximum compute efficiency and accuracy in complex multi-agent applications. Built on a hybrid Mamba-Transformer...

    Long contextBudget
    Context: 262KModalities: text
  • Nemotron Nano 12B 2 VL

    NVIDIA
    Active

    NVIDIA Nemotron Nano 2 VL is a 12-billion-parameter open multimodal reasoning model designed for video understanding and document intelligence. It introduces a hybrid Transformer-Mamba architecture, combining transformer-level accuracy with Mamba’s...

    VisionBudget
    Context: 131KModalities: image, text, video
  • Nemotron Nano 9B V2

    NVIDIA
    Active

    NVIDIA-Nemotron-Nano-9B-v2 is a large language model (LLM) trained from scratch by NVIDIA, and designed as a unified model for both reasoning and non-reasoning tasks. It responds to user queries and...

    Budget
    Context: 131KModalities: text
  • GPT Audio

    OpenAI
    Active

The gpt-audio model is OpenAI's first generally available audio model. The new snapshot features an upgraded decoder for more natural-sounding voices and maintains better voice consistency. Audio is priced...

    Agentic
    Context: 128KModalities: text, audio
  • GPT Audio Mini

    OpenAI
    Active

A cost-efficient version of GPT Audio. The new snapshot features an upgraded decoder for more natural-sounding voices and maintains better voice consistency. Input is priced at $0.60 per million...

    Agentic
    Context: 128KModalities: text, audio
  • GPT-3.5 Turbo

    OpenAI
    Active

    GPT-3.5 Turbo is OpenAI's fastest model. It can understand and generate natural language or code, and is optimized for chat and traditional completion tasks. Training data up to Sep 2021.

    Context: 16KModalities: text
  • GPT-3.5 Turbo (older v0613)

    OpenAI
    Active

    GPT-3.5 Turbo is OpenAI's fastest model. It can understand and generate natural language or code, and is optimized for chat and traditional completion tasks. Training data up to Sep 2021.

    Context: 4KModalities: text
  • GPT-3.5 Turbo 16k

    OpenAI
    Active

    This model offers four times the context length of gpt-3.5-turbo, allowing it to support approximately 20 pages of text in a single request at a higher cost. Training data: up...

    Context: 16KModalities: text
  • GPT-3.5 Turbo Instruct

    OpenAI
    Active

    This model is a variant of GPT-3.5 Turbo tuned for instructional prompts and omitting chat-related optimizations. Training data: up to Sep 2021.

    Context: 4KModalities: text
  • GPT-4

    OpenAI
    Active

    OpenAI's flagship model, GPT-4 is a large-scale multimodal language model capable of solving difficult problems with greater accuracy than previous models due to its broader general knowledge and advanced reasoning...

    Agentic
    Context: 8KModalities: text
  • GPT-4 (older v0314)

    OpenAI
    Active

    GPT-4-0314 is the first version of GPT-4 released, with a context length of 8,192 tokens, and was supported until June 14. Training data: up to Sep 2021.

    Context: 8KModalities: text
  • GPT-4 Turbo

    OpenAI
    Deprecated

    Previous-gen GPT-4 flagship with 128k context, now superseded by GPT-4o.

    VisionAgentic
    Context: 128KModalities: text, image
  • GPT-4.1

    OpenAI
    Active

    OpenAI GPT-4.1

    VisionAgenticLong context
    Context: 1.0MModalities: text, image
  • GPT-4.1 mini

    OpenAI
    Active

    Smaller, faster and cheaper version of GPT-4.1.

    VisionAgenticLong context
    Context: 1.0MModalities: text, image
  • GPT-4.1 nano

    OpenAI
    Active

    Ultra-fast nano variant of GPT-4.1.

    VisionAgenticLong context
    Context: 1.0MModalities: text, image
  • GPT-4o

    OpenAI
    Active

    Fast, multimodal model for general use with 128k context.

    VisionAgentic
    Context: 128KModalities: text, image, audio
  • GPT-4o (2024-05-13)

    OpenAI
    Active

    GPT-4o ("o" for "omni") is OpenAI's latest AI model, supporting both text and image inputs with text outputs. It maintains the intelligence level of [GPT-4 Turbo](/models/openai/gpt-4-turbo) while being twice as...

    VisionAgentic
    Context: 128KModalities: text, image, file
  • GPT-4o (2024-08-06)

    OpenAI
    Active

The 2024-08-06 version of GPT-4o offers improved performance in structured outputs, with the ability to supply a JSON schema in the response_format. Read more [here](https://openai.com/index/introducing-structured-outputs-in-the-api/). GPT-4o ("o" for "omni") is...

    VisionAgentic
    Context: 128KModalities: text, image, file
  • GPT-4o (2024-11-20)

    OpenAI
    Active

    The 2024-11-20 version of GPT-4o offers a leveled-up creative writing ability with more natural, engaging, and tailored writing to improve relevance & readability. It’s also better at working with uploaded...

    VisionAgentic
    Context: 128KModalities: text, image, file
  • GPT-4o (extended)

    OpenAI
    Active

    GPT-4o ("o" for "omni") is OpenAI's latest AI model, supporting both text and image inputs with text outputs. It maintains the intelligence level of [GPT-4 Turbo](/models/openai/gpt-4-turbo) while being twice as...

    Vision
    Context: 128KModalities: text, image, file
  • GPT-4o mini

    OpenAI
    Active

    Low-cost, fast multimodal model for high-volume tasks.

77.1 · Strong
    VisionMathAgenticBudget
    Context: 128KModalities: text, image
  • GPT-4o-mini (2024-07-18)

    OpenAI
    Active

    GPT-4o mini is OpenAI's newest model after [GPT-4 Omni](/models/openai/gpt-4o), supporting both text and image inputs with text outputs. As their most advanced small model, it is many multiples more affordable...

    VisionAgenticBudget
    Context: 128KModalities: text, image, file
  • GPT-5

    OpenAI
    Active

    OpenAI's frontier flagship model with long context and advanced reasoning.

77.9 · Strong
    VisionMathAgenticLong contextReasoningCode
    Context: 272KModalities: text, image
  • GPT-5 Chat

    OpenAI
    Active

    GPT-5 Chat is designed for advanced, natural, multimodal, and context-aware conversations for enterprise applications.

    Vision
    Context: 128KModalities: file, image, text
  • GPT-5 Codex

    OpenAI
    Active

    GPT-5-Codex is a specialized version of GPT-5 optimized for software engineering and coding workflows. It is designed for both interactive development sessions and long, independent execution of complex engineering tasks....

    VisionAgenticLong context
    Context: 272KModalities: text, image
  • GPT-5 Image

    OpenAI
    Active

    [GPT-5](https://openrouter.ai/openai/gpt-5) Image combines OpenAI's GPT-5 model with state-of-the-art image generation capabilities. It offers major improvements in reasoning, code quality, and user experience while incorporating GPT Image 1's superior instruction following,...

    VisionLong context
    Context: 400KModalities: image, text, file
  • GPT-5 Image Mini

    OpenAI
    Active

    GPT-5 Image Mini combines OpenAI's advanced language capabilities, powered by [GPT-5 Mini](https://openrouter.ai/openai/gpt-5-mini), with GPT Image 1 Mini for efficient image generation. This natively multimodal model features superior instruction following, text...

    VisionLong context
    Context: 400KModalities: file, image, text
  • GPT-5 Mini

    OpenAI
    Active

    GPT-5 Mini is a compact version of GPT-5, designed to handle lighter-weight reasoning tasks. It provides the same instruction-following and safety-tuning benefits as GPT-5, but with reduced latency and cost....

    VisionAgenticLong context
    Context: 272KModalities: text, image, file
  • GPT-5 Nano

    OpenAI
    Active

    GPT-5-Nano is the smallest and fastest variant in the GPT-5 system, optimized for developer tools, rapid interactions, and ultra-low latency environments. While limited in reasoning depth compared to its larger...

    VisionAgenticLong contextBudget
    Context: 272KModalities: text, image, file
  • GPT-5 Pro

    OpenAI
    Active

    GPT-5 Pro is OpenAI’s most advanced model, offering major improvements in reasoning, code quality, and user experience. It is optimized for complex tasks that require step-by-step reasoning, instruction following, and...

    VisionAgentic
    Context: 128KModalities: image, text, file
  • GPT-5.1

    OpenAI
    Active

    OpenAI GPT-5.1.

    VisionAgenticLong context
    Context: 272KModalities: text, image
  • GPT-5.1 Chat

    OpenAI
    Active

GPT-5.1 Chat (AKA Instant) is the fast, lightweight member of the 5.1 family, optimized for low-latency chat while retaining strong general intelligence. It uses adaptive reasoning to selectively “think” on...

    Vision
    Context: 128KModalities: file, image, text
  • GPT-5.1-Codex

    OpenAI
    Active

    GPT-5.1-Codex is a specialized version of GPT-5.1 optimized for software engineering and coding workflows. It is designed for both interactive development sessions and long, independent execution of complex engineering tasks....

    VisionLong context
    Context: 400KModalities: text, image
  • GPT-5.1-Codex-Max

    OpenAI
    Active

    GPT-5.1-Codex-Max is OpenAI’s latest agentic coding model, designed for long-running, high-context software development tasks. It is based on an updated version of the 5.1 reasoning stack and trained on agentic...

    VisionLong context
    Context: 400KModalities: text, image
  • GPT-5.1-Codex-Mini

    OpenAI
    Active

GPT-5.1-Codex-Mini is a smaller and faster version of GPT-5.1-Codex.

    VisionLong context
    Context: 400KModalities: image, text
  • GPT-5.2

    OpenAI
    Active

    OpenAI GPT-5.2.

    VisionAgenticLong context
    Context: 272KModalities: text, image
  • GPT-5.2 Chat

    OpenAI
    Active

    GPT-5.2 Chat (AKA Instant) is the fast, lightweight member of the 5.2 family, optimized for low-latency chat while retaining strong general intelligence. It uses adaptive reasoning to selectively “think” on...

    Vision
    Context: 128KModalities: file, image, text
  • GPT-5.2 Pro

    OpenAI
    Active

    GPT-5.2 Pro is OpenAI’s most advanced model, offering major improvements in agentic coding and long context performance over GPT-5 Pro. It is optimized for complex tasks that require step-by-step reasoning,...

    VisionLong context
    Context: 400KModalities: image, text, file
  • GPT-5.2-Codex

    OpenAI
    Active

    GPT-5.2-Codex is an upgraded version of GPT-5.1-Codex optimized for software engineering and coding workflows. It is designed for both interactive development sessions and long, independent execution of complex engineering tasks....

    VisionLong context
    Context: 400KModalities: text, image
  • GPT-5.3

    OpenAI
    Active

    OpenAI GPT-5.3.

    Agentic
    Modalities: text
  • GPT-5.3 Chat

    OpenAI
    Active

    GPT-5.3 Chat is an update to ChatGPT's most-used model that makes everyday conversations smoother, more useful, and more directly helpful. It delivers more accurate answers with better contextualization and significantly...

    Vision
    Context: 128KModalities: text, image, file
  • GPT-5.3-Codex

    OpenAI
    Active

    GPT-5.3-Codex is OpenAI’s most advanced agentic coding model, combining the frontier software engineering performance of GPT-5.2-Codex with the broader reasoning and professional knowledge capabilities of GPT-5.2. It achieves state-of-the-art results...

    VisionLong context
    Context: 400KModalities: text, image, file
  • GPT-5.4

    OpenAI
    Active

    OpenAI GPT-5.4.

59.3 · Competent
    VisionAgenticLong context
    Context: 1.1MModalities: text, image
  • GPT-5.4 Pro

    OpenAI
    Active

    GPT-5.4 Pro is OpenAI's most advanced model, building on GPT-5.4's unified architecture with enhanced reasoning capabilities for complex, high-stakes tasks. It features a 1M+ token context window (922K input, 128K...

    VisionLong context
    Context: 1.1MModalities: text, image, file
  • GPT-5.4 mini

    OpenAI
    Active

    Cost-efficient variant of GPT-5.4.

34.6 · Limited
    VisionAgenticLong context
    Context: 272KModalities: text, image
  • GPT-5.4 nano

    OpenAI
    Active

    Ultra-fast nano variant of GPT-5.4.

43.3 · Basic
    VisionAgenticLong context
    Context: 272KModalities: text, image
  • GPT-5.4 Image 2

    OpenAI
    Active

    [GPT-5.4](https://openrouter.ai/openai/gpt-5.4) Image 2 combines OpenAI's GPT-5.4 model with state-of-the-art image generation capabilities from GPT Image 2. It enables rich multimodal workflows, allowing users to seamlessly move between reasoning, coding, and...

    Context: 272KModalities: image, text, file
  • gpt-oss-120b

    OpenAI
    Active

    gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and is optimized...

    FastBudget
    Context: 131KModalities: text
  • gpt-oss-20b

    OpenAI
    Active

    gpt-oss-20b is an open-weight 21B parameter model released by OpenAI under the Apache 2.0 license. It uses a Mixture-of-Experts (MoE) architecture with 3.6B active parameters per forward pass, optimized for...

    FastBudget
    Context: 131KModalities: text
  • o1

    OpenAI
    Active

    Reasoning-focused model that thinks before answering.

76.3 · Strong
    VisionMathAgenticLong contextReasoning
    Context: 200KModalities: text, image
  • o1-mini

    OpenAI
    Deprecated

    Smaller o1-series reasoning model, now superseded by o3-mini.

    Context: 128KModalities: text
  • o1-pro

    OpenAI
    Active

    The o1 series of models are trained with reinforcement learning to think before they answer and perform complex reasoning. The o1-pro model uses more compute to think harder and provide...

    VisionAgenticLong context
    Context: 200KModalities: text, image, file
  • o3

    OpenAI
    Active

    OpenAI's most powerful reasoning model, successor to o1.

83.5 · Frontier
    VisionMathAgenticLong contextFrontierReasoning
    Context: 200KModalities: text, image
  • o3 Deep Research

    OpenAI
    Active

    o3-deep-research is OpenAI's advanced model for deep research, designed to tackle complex, multi-step research tasks. Note: This model always uses the 'web_search' tool which adds additional cost.

    VisionAgenticLong context
    Context: 200KModalities: image, text, file
  • o3 Mini High

    OpenAI
    Active

    OpenAI o3-mini-high is the same model as [o3-mini](/openai/o3-mini) with reasoning_effort set to high. o3-mini is a cost-efficient language model optimized for STEM reasoning tasks, particularly excelling in science, mathematics, and...

    VisionLong context
    Context: 200KModalities: text, file
  • o3-mini

    OpenAI
    Active

    Cost-efficient reasoning model with strong STEM performance.

74.5 · Strong
    MathAgenticLong contextCode
    Context: 200KModalities: text
  • o3-pro

    OpenAI
    Active

    Highest capability reasoning model.

    VisionAgenticLong contextCode
    Context: 200KModalities: text, image
  • o4 Mini Deep Research

    OpenAI
    Active

    o4-mini-deep-research is OpenAI's faster, more affordable deep research model—ideal for tackling complex, multi-step research tasks. Note: This model always uses the 'web_search' tool which adds additional cost.

    VisionAgenticLong context
    Context: 200KModalities: file, image, text
  • o4 Mini High

    OpenAI
    Active

    OpenAI o4-mini-high is the same model as [o4-mini](/openai/o4-mini) with reasoning_effort set to high. OpenAI o4-mini is a compact reasoning model in the o-series, optimized for fast, cost-efficient performance while retaining...

    VisionLong context
    Context: 200KModalities: image, text, file
  • o4-mini

    OpenAI
    Active

    Fast reasoning model.

    VisionAgenticLong contextCode
    Context: 200KModalities: text, image
  • Grok 2

    xAI
    Deprecated

    Previous generation Grok model, superseded by Grok 3.

    VisionAgentic
    Context: 131KModalities: text, image
  • Grok 3

    xAI
    Active

    xAI's frontier reasoning model with real-time web access.

80.1 · Frontier
    VisionMathAgenticFrontierReasoning
    Context: 131KModalities: text, image
  • Grok 3 Beta

    xAI
    Active

    Grok 3 is the latest model from xAI. It's their flagship model that excels at enterprise use cases like data extraction, coding, and text summarization. Possesses deep domain knowledge in...

    Context: 131KModalities: text
  • Grok 3 Mini

    xAI
    Active

    Fast and cost-efficient reasoning model.

    Agentic
    Modalities: text
  • Grok 3 Mini Beta

    xAI
    Active

    Grok 3 Mini is a lightweight, smaller thinking model. Unlike traditional models that generate answers immediately, Grok 3 Mini thinks before responding. It’s ideal for reasoning-heavy tasks that don’t demand...

    Budget
    Context: 131KModalities: text
  • Grok 4

    xAI
    Active

    Most capable Grok model.

    AgenticLong context
    Context: 256KModalities: text
  • Grok 4 Fast

    xAI
    Active

    High-speed variant of Grok 4.

    Agentic
    Modalities: text
  • Grok 4.1 Fast

    xAI
    Active

    Grok 4.1 Fast is xAI's best agentic tool calling model that shines in real-world use cases like customer support and deep research. 2M context window. Reasoning can be enabled/disabled using...

    VisionLong contextBudget
    Context: 2MModalities: text, image, file
  • Grok 4.20

    xAI
    Active

Grok 4.20 is xAI's newest flagship model with industry-leading speed and agentic tool calling capabilities. It combines the lowest hallucination rate on the market with strict prompt adherence, delivering consistently...

    VisionLong context
    Context: 2MModalities: text, image, file
  • Grok 4.20 Multi-Agent

    xAI
    Active

    Grok 4.20 Multi-Agent is a variant of xAI’s Grok 4.20 designed for collaborative, agent-based workflows. Multiple agents operate in parallel to conduct deep research, coordinate tool use, and synthesize information...

    VisionLong context
    Context: 2MModalities: text, image, file
  • Grok Code Fast 1

    xAI
    Active

    Grok Code Fast 1 is a speedy and economical reasoning model that excels at agentic coding. With reasoning traces visible in the response, developers can steer Grok Code for high-quality...

    Long context
    Context: 256KModalities: text