AIDiveForge

Mistral Large 2 product screenshot (via cms.mistral.ai)


Mistral Large 2

Free · Text to Text · Open Source · API · Self-Hosted · Multi-Model

Pricing

Model: Free
Price: Free

Summary

Mistral's open-source flagship competes with Claude and GPT-4 on reasoning, is free to self-host, and handles a 128k-token context window.

Mistral Large 2 is a general-purpose language model built for complex reasoning, code generation, and multilingual work at enterprise scale. The weights are free to download and self-host, with metered API access for hosted use; it sits in the same performance tier as proprietary models from OpenAI and Anthropic, and it can ingest documents up to 128,000 tokens long. The core trade-off: its knowledge cutoff is earlier than some competitors' and it lacks serious vision capabilities, making it less suitable for tasks requiring current events or image understanding. For teams optimizing for cost and reasoning quality rather than breadth of modalities, it's a genuine alternative to paid tiers.

Bottom line: *Use this when reasoning and code matter more than multimodal depth and you want the option to self-host and control inference costs.*
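For hosted use, a minimal request sketch. This assumes the standard Mistral chat-completions endpoint and the `mistral-large-latest` model alias (both per Mistral's public API docs); the prompt, temperature, and `MISTRAL_API_KEY` environment variable are illustrative:

```python
import json
import os
import urllib.request

API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_request(prompt: str, model: str = "mistral-large-latest") -> dict:
    """Assemble a chat-completions payload for the hosted API."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # illustrative; tune per task
    }

def send(payload: dict) -> dict:
    """POST the payload. Requires MISTRAL_API_KEY in the environment."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_request("Summarize the attached contract clause.")
print(payload["model"])  # → mistral-large-latest
# reply = send(payload)  # uncomment with a valid key
```

Self-hosted deployments that expose an OpenAI-compatible endpoint can reuse the same payload shape with a different `API_URL`.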



LLM Spec Sheet

Specializations

Code · Math · Reasoning · Multilingual · Chat · Long Context · Agents · Instruction

Available Models

Mistral Large 2
Knowledge Base: July 2024 · Context: 128k · General purpose · Reasoning · Code
Mistral Medium
Knowledge Base: December 2023 · Context: 32k · General purpose · Balanced
Mistral Small
Knowledge Base: September 2024 · Context: 32k · Fast inference · Lightweight
Mistral Nemo
Knowledge Base: July 2024 · Context: 128k · Efficient · Instruction-following
Codestral
Knowledge Base: April 2024 · Context: 32k · Code generation · Programming
Codestral Mamba
Knowledge Base: April 2024 · Context: 256k · Code generation · Long context
Pixtral 12B
Knowledge Base: July 2024 · Context: 128k · Vision · Multimodal
Mistral 7B
Knowledge Base: August 2023 · Context: 32k · Lightweight · Open source
Mixtral 8x7B
Knowledge Base: April 2024 · Context: 32k · Mixture of experts · Efficient
Mixtral 8x22B
Knowledge Base: April 2024 · Context: 65k · Mixture of experts · High performance
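Choosing among these models often starts with the context window. A sketch that encodes the windows listed above in a lookup table (the token counts are the nominal values from this spec sheet; the helper name is illustrative):

```python
# Context windows as listed in the spec sheet above (nominal values).
CONTEXT = {
    "Mistral Large 2": 128_000,
    "Mistral Medium": 32_000,
    "Mistral Small": 32_000,
    "Mistral Nemo": 128_000,
    "Codestral": 32_000,
    "Codestral Mamba": 256_000,
    "Pixtral 12B": 128_000,
    "Mistral 7B": 32_000,
    "Mixtral 8x7B": 32_000,
    "Mixtral 8x22B": 65_000,
}

def models_fitting(prompt_tokens: int) -> list[str]:
    """Models whose context window can hold a prompt of this size."""
    return sorted(m for m, ctx in CONTEXT.items() if ctx >= prompt_tokens)

print(models_fitting(100_000))
# → ['Codestral Mamba', 'Mistral Large 2', 'Mistral Nemo', 'Pixtral 12B']
```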

Benchmarks

MMLU: 86.2%
HumanEval: 89.8%
GPQA: 48.6%
Context Window: 128k tokens

Pricing & Limits

Input price: $2.00 / 1M tokens
Output price: $6.00 / 1M tokens
Max output tokens: 8,192

Metrics from legacy-discovery.
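At the listed rates ($2.00 per 1M input tokens, $6.00 per 1M output tokens), per-request cost is easy to estimate; a quick sketch:

```python
# Rates as listed in Pricing & Limits.
INPUT_PER_M = 2.00   # USD per 1M input tokens
OUTPUT_PER_M = 6.00  # USD per 1M output tokens

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimated API cost for one request at the listed rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# A full 128k-token prompt with the maximum 8,192-token response:
print(round(cost_usd(128_000, 8_192), 4))  # → 0.3052
```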


Changelog

  • MMLU first recorded at 86.2% · legacy-discovery
  • Max output first recorded at 8.2k tokens · legacy-discovery
  • Input price first recorded at $2.00/1M · artificialanalysis
  • Output price first recorded at $6.00/1M · artificialanalysis
  • Context first recorded at 128k tokens · artificialanalysis
  • HumanEval first recorded at 89.8% · artificialanalysis
  • GPQA first recorded at 48.6% · artificialanalysis
Strengths

  • 128k token context window for extensive document handling
  • Strong performance on reasoning and mathematics benchmarks
  • Efficient inference with competitive latency
  • Excellent multilingual capabilities
  • Cost-effective compared to some competing flagship models

Limitations

  • Earlier knowledge cutoff than some competitors
  • Limited vision/multimodal capabilities compared to GPT-4V or Claude 3.5 Sonnet


About

Platforms
Web, API
Languages
Multilingual (including English)
API Available
Yes
Self-Hosted
Yes
Last Updated
2026-04-08

Best For

Who it's for

  • Organizations requiring high-quality reasoning at scale
  • Software development teams needing reliable code generation
  • Multilingual applications and global enterprises
  • Technical documentation and knowledge work
  • Long-context document processing

What it does well

  • Enterprise reasoning and analysis
  • Code generation and software development
  • Document summarization and analysis
  • Multilingual customer support
  • Complex mathematical problem-solving

Integrations

LangChain · LlamaIndex · Hugging Face · VS Code


Frequently Asked Questions

Is Mistral Large 2 free?
The model weights are free to download and self-host. Hosted API access is metered ($2.00 / 1M input tokens, $6.00 / 1M output tokens; see Pricing & Limits).
Is Mistral Large 2 open source?
Yes — the model weights and reference inference code are openly available; the source repository is at https://github.com/mistralai/mistral-src.
Does Mistral Large 2 have an API?
Yes. Mistral Large 2 exposes a developer API. See the official documentation at https://mistral.ai for details.
Can I self-host Mistral Large 2?
Yes. Mistral Large 2 supports self-hosting on your own infrastructure.
What are the alternatives to Mistral Large 2?
Common alternatives include Claude 3.5 Sonnet, GPT-4 Turbo, and Llama 2. Compare them on AIDiveForge for pricing, features, and platform support.
When was Mistral Large 2 released?
Mistral Large 2 was first released in 2024.
What platforms does Mistral Large 2 support?
Mistral Large 2 is available on: Web, API.




Mistral Large 2 is Mistral AI’s flagship LLM, built to deliver advanced capabilities across reasoning, mathematics, coding, and multilingual understanding. It represents a significant upgrade over the original Mistral Large, with improved performance on standardized benchmarks and enhanced instruction-following. The model supports a 128k token context window, enabling processing of long documents and complex multi-turn conversations. Mistral Large 2 is optimized for enterprise applications requiring high-quality generations with low latency, and it maintains Mistral’s commitment to efficient inference. The model excels at handling nuanced instructions and producing coherent, well-structured outputs across diverse domains including creative writing, technical documentation, and data analysis.
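Even with a 128k-token window, longer documents still need chunking. A rough sketch using a ~4-characters-per-token heuristic (an approximation for English text, not the model's real tokenizer) that reserves room for the model's reply:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    # Not the model's actual tokenizer; use it only for budgeting.
    return max(1, len(text) // 4)

def chunk_for_context(text: str, context_tokens: int = 128_000,
                      reserve: int = 8_192) -> list[str]:
    """Split text into pieces that fit the context window,
    reserving room for the model's response."""
    budget_chars = (context_tokens - reserve) * 4
    return [text[i:i + budget_chars]
            for i in range(0, len(text), budget_chars)]

doc = "x" * 1_000_000  # ~250k estimated tokens
parts = chunk_for_context(doc)
print(len(parts))  # → 3
```

For production use, counting tokens with the model's real tokenizer (e.g. via Mistral's tokenization libraries) gives tighter budgets than the character heuristic.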