Mistral Release Notes

Last updated: Dec 23, 2025

Mistral Products

All Mistral Release Notes

  • Dec 22, 2025
    • Parsed from source:
      Dec 22, 2025
    • Detected by Releasebot:
      Dec 23, 2025

    Mistral Common by Mistral

    v1.8.8: Backward comp

    Full Changelog: v1.8.7...v1.8.8

    • Add new token logic as str by @patrickvonplaten in #172

    • [Backward comp] Still need the _control_tokens for vLLM by @patrickvonplaten in #173

  • Dec 22, 2025
    • Parsed from source:
      Dec 22, 2025
    • Detected by Releasebot:
      Dec 23, 2025

    Mistral Common by Mistral

    v1.8.7: Refactoring and bug fixes.

    Version 1.8.7

    • Remove the index field from assistant tool_calls. by @tobrun in #165
    • Rename get control -> get special & add is_special by @patrickvonplaten in #164
    • Add TextChunk support to ToolMessage by @juliendenize in #170
    • Version 1.8.7 by @juliendenize in #171
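
    The TextChunk change (#170) lets tool results be passed as chunked content rather than a bare string. A minimal sketch, assuming the field names of mistral_common's public protocol classes (not verified against this exact release):

    ```python
    # Sketch: TextChunk inside a ToolMessage, per #170.
    # Assumes mistral-common >= 1.8.7; names follow the library's
    # protocol classes but are illustrative, not canonical.
    from mistral_common.protocol.instruct.messages import TextChunk, ToolMessage

    # Before 1.8.7, ToolMessage content was a plain string; #170 also
    # allows a list of chunks, mirroring UserMessage.
    msg = ToolMessage(
        tool_call_id="abc123",
        content=[TextChunk(text="result of the tool call")],
    )
    print(msg.model_dump())
    ```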

    New Contributors

    • @tobrun made their first contribution in #165

    Full Changelog

    v1.8.6...v1.8.7

  • Dec 17, 2025
    • Parsed from source:
      Dec 17, 2025
    • Detected by Releasebot:
      Dec 18, 2025

    Mistral

    December 17

    MODEL RELEASED

    We released OCR 3 (mistral-ocr-2512).

    API UPDATED

    • Introducing table_format in our OCR API, allowing you to choose between markdown and html for table formatting.
    • Introducing extract_footer and extract_header in our OCR API, as well as hyperlinks in the output.
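
    A hedged sketch of the new options with the mistralai Python SDK; the parameter names come from this entry, but passing them as keyword arguments to client.ocr.process is an assumption, so verify against the API reference:

    ```python
    # Sketch: the new OCR options via the mistralai Python SDK.
    # table_format / extract_header / extract_footer are named in this
    # changelog entry; the call signature below is an assumption.
    import os
    from mistralai import Mistral

    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
    resp = client.ocr.process(
        model="mistral-ocr-2512",
        document={"type": "document_url",
                  "document_url": "https://example.com/report.pdf"},
        table_format="html",   # or "markdown"
        extract_header=True,
        extract_footer=True,
    )
    for page in resp.pages:
        print(page.markdown)
    ```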
  • Dec 15, 2025
    • Parsed from source:
      Dec 15, 2025
    • Detected by Releasebot:
      Dec 17, 2025

    Mistral

    December 15

    MODEL RELEASED

    We released Mistral Small Creative (labs-mistral-small-creative) as a Labs model.

  • Dec 9, 2025
    • Parsed from source:
      Dec 9, 2025
    • Detected by Releasebot:
      Dec 10, 2025

    Mistral

    Introducing: Devstral 2 and Mistral Vibe CLI

    Devstral 2 launches a new open‑source coding model family with 123B and 24B sizes plus a native Mistral Vibe CLI for end‑to‑end code automation. Open licenses, free API access now, and strong on‑device options mark a bold step for open source code agents.

    State-of-the-art, open-source agentic coding models and CLI agent.

    Today, we're releasing Devstral 2—our next-generation coding model family available in two sizes: Devstral 2 (123B) and Devstral Small 2 (24B). Devstral 2 ships under a modified MIT license, while Devstral Small 2 uses Apache 2.0. Both are open-source and permissively licensed to accelerate distributed intelligence.
    Devstral 2 is currently free to use via our API.
    We are also introducing Mistral Vibe, a native CLI built for Devstral that enables end-to-end code automation.

    Highlights.

    • Devstral 2: SOTA open model for code agents, achieving 72.2% on SWE-bench Verified with a fraction of the parameters of its competitors.
    • Up to 7x more cost-efficient than Claude Sonnet at real-world tasks.
    • Mistral Vibe CLI: Native, open-source agent in your terminal solving software engineering tasks autonomously.
    • Devstral Small 2: 24B parameter model available via API or deployable locally on consumer hardware.
    • Compatible with on-prem deployment and custom fine-tuning.

    Devstral: the next generation of SOTA coding.

    Devstral 2 is a 123B-parameter dense transformer supporting a 256K context window. It reaches 72.2% on SWE-bench Verified—establishing it as one of the best open-weight models while remaining highly cost efficient. Released under a modified MIT license, Devstral sets the open state-of-the-art for code agents.
    Devstral Small 2 scores 68.0% on SWE-bench Verified, and places firmly among models up to five times its size while being capable of running locally on consumer hardware.

    Devstral 2 (123B) and Devstral Small 2 (24B) are 5x and 28x smaller than DeepSeek V3.2, and 8x and 41x smaller than Kimi K2—proving that compact models can match or exceed the performance of much larger competitors. Their reduced size makes deployment practical on limited hardware, lowering barriers for developers, small businesses, and hobbyists.

    Built for production-grade workflows.

    Devstral 2 supports exploring codebases and orchestrating changes across multiple files while maintaining architecture-level context. It tracks framework dependencies, detects failures, and retries with corrections—solving challenges like bug fixing and modernizing legacy systems.
    The model can be fine-tuned to prioritize specific languages or optimize for large enterprise codebases.
    We evaluated Devstral 2 against DeepSeek V3.2 and Claude Sonnet 4.5 using human evaluations conducted by an independent annotation provider, with tasks scaffolded through Cline. Devstral 2 shows a clear advantage over DeepSeek V3.2, with a 42.8% win rate versus 28.6% loss rate. However, Claude Sonnet 4.5 remains significantly preferred, indicating a gap with closed-source models persists.

    “Devstral 2 is at the frontier of open-source coding models. In Cline, it delivers a tool-calling success rate on par with the best closed models; it's a remarkably smooth driver. This is a massive contribution to the open-source ecosystem.” — Cline.
    “Devstral 2 was one of our most successful stealth launches yet, surpassing 17B tokens in the first 24 hours. Mistral AI is moving at Kilo Speed with a cost-efficient model that truly works at scale.” — Kilo Code.
    Devstral Small 2, a 24B-parameter model with the same 256K context window and released under Apache 2.0, brings these capabilities to a compact, locally deployable form. Its size enables fast inference, tight feedback loops, and easy customization—with fully private, on-device runtime. It also supports image inputs, and can power multimodal agents.

    Mistral Vibe CLI.

    Mistral Vibe CLI is an open-source command-line coding assistant powered by Devstral. It explores, modifies, and executes changes across your codebase using natural language—in your terminal or integrated into your preferred IDE via the Agent Communication Protocol. It is released under the Apache 2.0 license.
    Vibe CLI provides an interactive chat interface with tools for file manipulation, code searching, version control, and command execution. Key features:

    • Project-aware context: Automatically scans your file structure and Git status to provide relevant context
    • Smart references: Reference files with @ autocomplete, execute shell commands with !, and use slash commands for configuration changes
    • Multi-file orchestration: Understands your entire codebase—not just the file you're editing—enabling architecture-level reasoning that can halve your PR cycle time
    • Persistent history, autocompletion, and customizable themes

    You can run Vibe CLI programmatically for scripting, toggle auto-approval for tool execution, configure local models and providers through a simple config.toml, and control tool permissions to match your workflow.

    Get started.

    Devstral 2 is currently offered free via our API. After the free period, the API pricing will be $0.40/$2.00 per million tokens (input/output) for Devstral 2 and $0.10/$0.30 for Devstral Small 2.
    We’ve partnered with the leading open agent tools Kilo Code and Cline to bring Devstral 2 to where you already build.
    Mistral Vibe CLI is available as an extension in Zed, so you can use it directly inside your IDE.

    Recommended deployment for Devstral.

    Devstral 2 is optimized for data center GPUs and requires a minimum of 4 H100-class GPUs for deployment. You can try it today on build.nvidia.com. Devstral Small 2 is built for single-GPU operation and runs across a broad range of NVIDIA systems, including DGX Spark and GeForce RTX. NVIDIA NIM support will be available soon.
    Devstral Small runs on consumer-grade GPUs as well as CPU-only configurations with no dedicated GPU required.
    For optimal performance, we recommend a temperature of 0.2 and following the best practices defined for Mistral Vibe CLI.
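
    For API users, that amounts to a single chat call with the temperature set explicitly; a minimal sketch with the mistralai Python SDK (the devstral-2512 model id is listed in the December 8 entry below):

    ```python
    # Minimal sketch: querying Devstral 2 over the hosted API with the
    # recommended temperature of 0.2 (mistralai SDK v1 assumed).
    import os
    from mistralai import Mistral

    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
    resp = client.chat.complete(
        model="devstral-2512",  # Devstral Small 2: labs-devstral-small-2512
        temperature=0.2,        # recommended for coding tasks
        messages=[{"role": "user",
                   "content": "Write a Python function that parses a git diff."}],
    )
    print(resp.choices[0].message.content)
    ```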

    Contact us.

    We’re excited to see what you will build with Devstral 2, Devstral Small 2, and Vibe CLI!
    Share your projects, questions, or discoveries with us on X/Twitter, Discord, or GitHub.

    We’re hiring!

    If you’re interested in shaping open-source research and building world-class interfaces that bring truly open, frontier AI to users, we welcome you to apply to join our team.

  • Dec 8, 2025
    • Parsed from source:
      Dec 8, 2025
    • Detected by Releasebot:
      Dec 10, 2025
    • Modified by Releasebot:
      Dec 17, 2025

    Mistral

    December 8

    MODEL RELEASED

    We released Devstral 2 (devstral-2512) and Devstral Small 2 (labs-devstral-small-2512).

    OTHER

    We released Mistral Vibe.

  • Dec 1, 2025
    • Parsed from source:
      Dec 1, 2025
    • Detected by Releasebot:
      Dec 3, 2025

    Mistral

    December 1

    MODEL RELEASED

    We released Mistral Large 3 (mistral-large-2512) and Ministral 3 (ministral-3b-2512, ministral-8b-2512 and ministral-14b-2512).

  • December 2025
    • No date parsed from source.
    • Detected by Releasebot:
      Dec 18, 2025

    Mistral

    Introducing Mistral OCR 3

    Mistral OCR 3 debuts with breakthrough accuracy and efficiency across handwriting, forms, scans, and complex tables. It powers the Document AI Playground, outputs markdown with HTML tables, and is available via API at $2 per 1,000 pages, with a 50% Batch-API discount.

    Highlights

    • Breakthrough performance: 74% overall win rate over Mistral OCR 2 on forms, scanned documents, complex tables, and handwriting.
    • State-of-the-art accuracy, outperforming both enterprise document processing solutions and AI-native OCR solutions
    • Now powers Document AI Playground in Mistral AI Studio, a simple drag-and-drop interface for parsing PDFs/images into clean text or structured JSON
    • Major upgrade over Mistral OCR 2 in forms, handwritten content, low-quality scans, and tables

    Overview

    Mistral OCR 3 is designed to extract text and embedded images from a wide range of documents with exceptional fidelity. It supports markdown output enriched with HTML-based table reconstruction, enabling downstream systems to understand not just document content, but also structure. As a much smaller model than most competitive solutions, it is available at an industry-leading price of $2 per 1,000 pages, with a 50% Batch-API discount, reducing the cost to $1 per 1,000 pages.

    Developers can integrate the model (mistral-ocr-2512) via API, and users can leverage Document AI, a UI that parses documents into text or structured JSON instantly.
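
    A sketch of that API path using the Python SDK, uploading a local PDF and reading back per-page markdown; the upload-then-signed-URL flow follows the SDK's files API, but treat the details as a best-effort assumption:

    ```python
    # Sketch: OCR on a local PDF via the mistralai SDK.
    import os
    from mistralai import Mistral

    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

    # Upload the document, then OCR it through a signed URL.
    uploaded = client.files.upload(
        file={"file_name": "archive.pdf", "content": open("archive.pdf", "rb")},
        purpose="ocr",
    )
    url = client.files.get_signed_url(file_id=uploaded.id)
    resp = client.ocr.process(
        model="mistral-ocr-2512",
        document={"type": "document_url", "document_url": url.url},
    )
    # Each page is markdown; complex tables arrive as embedded HTML tags.
    print("\n\n".join(page.markdown for page in resp.pages))
    ```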

    Mistral OCR 3 is a significant upgrade across all languages and document form factors compared to Mistral OCR 2.

    Upgrades over previous generations of OCR models

    Whereas most OCR solutions today specialize in specific document types, Mistral OCR 3 is designed to excel at processing the vast majority of document types in organizations and everyday settings.

    • Handwriting: Mistral OCR accurately interprets cursive, mixed-content annotations, and handwritten text layered over printed forms.
    • Forms: Improved detection of boxes, labels, handwritten entries, and dense layouts. Works well on invoices, receipts, compliance forms, government documents, and such.
    • Scanned & complex documents: Significantly more robust to compression artifacts, skew, distortion, low DPI, and background noise.
    • Complex tables: Reconstructs table structures with headers, merged cells, multi-row blocks, and column hierarchies. Outputs HTML table tags with colspan/rowspan to fully preserve layout.
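
    Downstream, those HTML tables can be lifted directly into dataframes. A sketch assuming pandas plus an HTML parser (lxml or html5lib); the table string is illustrative, not actual model output:

    ```python
    # Sketch: loading OCR 3's HTML table output into a DataFrame.
    # pandas.read_html resolves colspan/rowspan into a rectangular grid.
    from io import StringIO
    import pandas as pd

    html = """
    <table>
      <tr><th colspan="2">Q4 Revenue</th></tr>
      <tr><td>EMEA</td><td>12.4</td></tr>
      <tr><td>APAC</td><td>9.1</td></tr>
    </table>
    """
    (table,) = pd.read_html(StringIO(html))
    print(table)
    ```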

    Recommended use cases and applications

    Mistral OCR 3 is ideal for both high-volume enterprise pipelines and interactive document workflows. Developers can use it for:

    • Extracting text and images into markdown for downstream agents and knowledge systems
    • Automated parsing of forms, invoices, and operational documents
    • End-to-end document understanding pipelines
    • Digitization of handwritten or historical documents
    • Any other document → knowledge transformation applications.

    Our early customers are using Mistral OCR 3 to process invoices into structured fields, digitize company archives, extract clean text from technical and scientific reports, and improve enterprise search.

    “OCR remains foundational for enabling generative AI and agentic AI,” said Tim Law, IDC Director of Research for AI and Automation. “Those organizations that can efficiently and cost-effectively extract text and embedded images with high fidelity will unlock value and will gain a competitive advantage from their data by providing richer context.”

    Available today

    Access the model either through the API or via the new Document AI Playground interface, both in Mistral AI Studio. Mistral OCR 3 is fully backward compatible with Mistral OCR 2. For more details, head over to mistral.ai/docs.

  • December 2025
    • No date parsed from source.
    • Detected by Releasebot:
      Dec 2, 2025

    Mistral

    Introducing Mistral 3

    Mistral AI unveils the Mistral 3 family with Ministral 3 (3B, 8B, 14B) and Mistral Large 3, all open-source under Apache 2.0. Expect frontier multimodal, multilingual AI with edge-to-data-center deployment, available today on major platforms.

    The next generation of open multimodal and multilingual AI

    Today, we announce Mistral 3, the next generation of Mistral models. Mistral 3 includes three state-of-the-art small, dense models (14B, 8B, and 3B) and Mistral Large 3 – our most capable model to date – a sparse mixture-of-experts trained with 41B active and 675B total parameters. All models are released under the Apache 2.0 license. Open-sourcing our models in a variety of compressed formats empowers the developer community and puts AI in people’s hands through distributed intelligence.

    The Ministral models represent the best performance-to-cost ratio in their category. At the same time, Mistral Large 3 joins the ranks of frontier instruction-fine-tuned open-source models.

    Mistral Large 3: A state-of-the-art open model

    Mistral Large 3 is one of the best permissively licensed open-weight models in the world, trained from scratch on 3,000 NVIDIA H200 GPUs. Mistral Large 3 is Mistral’s first mixture-of-experts model since the seminal Mixtral series, and represents a substantial step forward in pretraining at Mistral. After post-training, the model achieves parity with the best instruction-tuned open-weight models on the market on general prompts, while also demonstrating image understanding and best-in-class performance on multilingual conversations (i.e., non-English/Chinese).

    Mistral Large 3 debuts at #2 in the OSS non-reasoning models category (#6 amongst OSS models overall) on the LMArena leaderboard.

    We release both the base and instruction fine-tuned versions of Mistral Large 3 under the Apache 2.0 license, providing a strong foundation for further customization across the enterprise and developer communities. A reasoning version is coming soon!

    Ministral 3: State-of-the-art intelligence at the edge

    For edge and local use cases, we release the Ministral 3 series, available in three model sizes: 3B, 8B, and 14B parameters. Furthermore, for each model size, we release base, instruct, and reasoning variants to the community, each with image understanding capabilities, all under the Apache 2.0 license. When married with the models’ native multimodal and multilingual capabilities, the Ministral 3 family offers a model for all enterprise or developer needs.

    Furthermore, Ministral 3 achieves the best cost-to-performance ratio of any OSS model. In real-world use cases, both the number of generated tokens and model size matter equally. The Ministral instruct models match or exceed the performance of comparable models while often producing an order of magnitude fewer tokens.

    For settings where accuracy is the only concern, the Ministral reasoning variants can think longer to produce state-of-the-art accuracy in their weight class: for instance, 85% on AIME ’25 with our 14B variant.

    Available Today

    Mistral 3 is available today on Mistral AI Studio, Amazon Bedrock, Azure Foundry, Hugging Face (Large 3 & Ministral), Modal, IBM WatsonX, OpenRouter, Fireworks, Unsloth AI, and Together AI, with NVIDIA NIM and AWS SageMaker support coming soon.
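
    For local experimentation with the Hugging Face checkpoints, a hedged transformers sketch; the repo id below is a guess at the naming scheme, so substitute the actual id from the mistralai organization page:

    ```python
    # Sketch: loading a Ministral 3 instruct checkpoint with transformers.
    # The repo id is hypothetical; check huggingface.co/mistralai first.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "mistralai/Ministral-3-8B-Instruct-2512"  # hypothetical id
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(
        repo, torch_dtype=torch.bfloat16, device_map="auto"
    )

    messages = [{"role": "user",
                 "content": "Summarize the Apache 2.0 license in one line."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=128)
    print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
    ```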

    One more thing… customization with Mistral AI

    For organizations seeking tailored AI solutions, Mistral AI offers custom model training services to fine-tune or fully adapt our models to your specific needs. Whether optimizing for domain-specific tasks, enhancing performance on proprietary datasets, or deploying models in unique environments, our team collaborates with you to build AI systems that align with your goals. For enterprise-grade deployments, custom training ensures your AI solution delivers maximum impact securely, efficiently, and at scale.

    Get started with Mistral 3

    The future of AI is open. Mistral 3 redefines what’s possible with a family of models built for frontier intelligence, multimodal flexibility, and unmatched customization. Whether you’re deploying edge-optimized solutions with Ministral 3 or pushing the boundaries of reasoning with Mistral Large 3, this release puts state-of-the-art AI directly into your hands.

    Why Mistral 3?

    • Frontier performance, open access: Achieve closed-source-level results with the transparency and control of open-source models.
    • Multimodal and multilingual: Build applications that understand text, images, and complex logic across 40+ native languages.
    • Scalable efficiency: From 3B to 675B parameters, choose the model that fits your needs, from edge devices to enterprise workflows.
    • Agentic and adaptable: Deploy for coding, creative collaboration, document analysis, or tool-use workflows with precision.

    Next Steps

    • Explore the model documentation:
      • Ministral 3 3B-25-12
      • Ministral 3 8B-25-12
      • Ministral 3 14B-25-12
      • Mistral Large 3
    • Technical documentation for customers is available on our AI Governance Hub
    • Start building: Ministral 3 and Large 3 on Hugging Face, or deploy via Mistral AI’s platform for instant API access; see API pricing for rates
    • Customize for your needs: Need a tailored solution? Contact our team to explore fine-tuning or enterprise-grade training.
    • Share your projects, questions, or breakthroughs with us: Twitter/X, Discord, or GitHub.

    Science has always thrived on openness and shared discovery. As pioneering French scientist and two-time Nobel laureate Marie Skłodowska-Curie once said, “Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less.”

    This philosophy drives our mission at Mistral AI. We believe that the future of AI should be built on transparency, accessibility, and collective progress. With this release, we invite the world to explore, build, and innovate with us, unlocking new possibilities in reasoning, efficiency, and real-world applications.

    Together, let’s turn understanding into action.

  • Nov 30, 2025
    • Parsed from source:
      Nov 30, 2025
    • Detected by Releasebot:
      Dec 1, 2025

    Mistral Common by Mistral

    v1.8.6: rm Python 3.9, bug fixes.

    Version 1.8.6 arrives with cleanup and enhancements: new normalizer and validator utilities, token-handling improvements, stricter third-party usage restrictions, and updates to tests and logging.

    Release notes

    • Remove deprecated imports in docs. by @juliendenize in #138
    • Add normalizer and validator utils by @juliendenize in #140
    • Refactor private aggregate messages for InstructRequestNormalizer by @juliendenize in #141
    • test: improve unit test for is_opencv_installed by @PrasanaaV in #143
    • Optimize spm decode function by @juliendenize in #144
    • Add get_one_valid_tokenizer_file by @juliendenize in #142
    • Remove Python 3.9 support by @juliendenize in #145
    • Correctly pass revision and token to hf_api by @juliendenize in #149
    • Fix assertion in test_convert_text_chunk and tool_call by @patrickvonplaten in #152
    • Pins GH actions by @arcanis in #160
    • Add usage restrictions regarding third-party rights. by @juliendenize in #161
    • Improve tekken logging message for vocabulary by @juliendenize in #162
    • Set version 1.8.6 by @juliendenize in #151

    New Contributors

    • @PrasanaaV made their first contribution in #143
    • @arcanis made their first contribution in #160

    Full Changelog

    v1.8.5...v1.8.6

