Mistral Release Notes

Last updated: Feb 12, 2026

Mistral Products

All Mistral Release Notes (66)

  • Feb 12, 2026
    • Date parsed from source:
      Feb 12, 2026
    • First seen by Releasebot:
      Feb 12, 2026

    Mistral Common by Mistral

    v1.9.1 Patch Release

    Refactor online streaming processing and allow for dynamic streaming delay

    What's Changed

    • Add AGENTS.md by @juliendenize in #182
    • fix: correct typos 'occurence' and 'recieved' by @thecaptain789 in #185
    • [Audio] Refactor streaming logic by @patrickvonplaten in #187

    New Contributors

    • @thecaptain789 made their first contribution in #185

    Full Changelog: v1.9.0...v1.9.1

    Original source Report a problem
  • Feb 3, 2026
    • Date parsed from source:
      Feb 3, 2026
    • First seen by Releasebot:
      Feb 3, 2026

    Mistral Common by Mistral

    v1.9.0 - Stream my audio 🎙️

    Mistral-Common adds streaming audio processing and real-time transcription support with Voxtral Mini. The changelog highlights token and padding improvements, a new audio encoder, accessibility tweaks, and audio streaming as the headline v1.9.0 feature.

    Mistral-Common can now process streaming requests

    import numpy as np
    from mistral_common.audio import Audio
    from mistral_common.protocol.instruct.chunk import RawAudio
    from mistral_common.protocol.transcription.request import StreamingMode, TranscriptionRequest
    from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
    
    # 1. Load the tokenizer with audio support
    tokenizer = MistralTokenizer.from_hf_hub("mistralai/Voxtral-Mini-4B-Realtime-2602")
    # 2. Create sample audio data (or load from a file)
    sampling_rate = 16_000
    duration_s = 2.0
    audio_array = np.random.uniform(-1, 1, size=int(duration_s * sampling_rate)).astype(np.float32)
    audio = Audio(audio_array=audio_array, sampling_rate=sampling_rate, format="wav")
    # 3. Create the streaming transcription request
    request = TranscriptionRequest(
        audio=RawAudio(data=audio.to_base64("wav"), format="wav"),
        streaming=StreamingMode.ONLINE,  # or StreamingMode.OFFLINE
        language=None,
    )
    # 4. Encode the request
    tokenized = tokenizer.encode_transcription(request)
    # 5. Access the results
    print(f"Tokens: {tokenized.tokens}")
    print(f"Number of tokens: {len(tokenized.tokens)}")
    print(f"Number of audio segments: {len(tokenized.audios)}")
    

    See https://huggingface.co/mistralai/Voxtral-Mini-4B-Realtime-2602 for more info.
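    For reference, the `audio.to_base64("wav")` call above produces a base64-encoded WAV payload. A minimal, stdlib-only sketch of an equivalent encoding (the `wav_base64` helper is ours for illustration and is not part of mistral-common):

```python
import base64
import io
import math
import struct
import wave

def wav_base64(samples, sampling_rate=16_000):
    """Encode float samples in [-1, 1] as 16-bit mono PCM WAV, then base64."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(sampling_rate)
        pcm = struct.pack(
            f"<{len(samples)}h",
            *(int(max(-1.0, min(1.0, s)) * 32767) for s in samples),
        )
        w.writeframes(pcm)
    return base64.b64encode(buf.getvalue()).decode("ascii")

# 2 seconds of a 440 Hz sine tone at 16 kHz
tone = [0.5 * math.sin(2 * math.pi * 440 * n / 16_000) for n in range(32_000)]
b64 = wav_base64(tone)
```

    Decoding the string with `base64.b64decode` yields standard RIFF/WAVE bytes, the same kind of payload the `RawAudio` chunk carries.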

    What's Changed

    • Add new token logic asrstr by @patrickvonplaten in #172
    • [Backward comp] Still need the _control_tokens for vLLM by @patrickvonplaten in #173
    • Release 1.8.8 by @juliendenize in #174
    • [Audio] Update padding by @patrickvonplaten in #175
    • [Audio] Improve padding for streaming by @patrickvonplaten in #177
    • Add audio_encoder to Tokenizer V13 by @amosyou in #180
    • Release v1.9.0 - Audio streaming by @patrickvonplaten in #179
    • Fix image tests with downloads by @juliendenize in #181
    • Enhance accessibility by @juliendenize in #176

    New Contributors

    • @amosyou made their first contribution in #180

    Full Changelog: v1.8.7...v1.9.0

    Original source Report a problem

  • Jan 27, 2026
    • Date parsed from source:
      Jan 27, 2026
    • First seen by Releasebot:
      Jan 28, 2026

    Mistral

    Terminally online Mistral Vibe.

    Mistral Vibe 2.0 debuts as a major terminal-native coding agent upgrade, introducing custom subagents, multi-choice clarifications, slash-command skills, and unified agent modes. Now available on Le Chat Pro and Team with PAYG or BYOK, plus Devstral 2 paid API access.

    Today, we're releasing Mistral Vibe 2.0—a major upgrade to our terminal-native coding agent, powered by the state-of-the-art Devstral 2 model family. Build custom subagents, clarify before you execute, load skills with slash commands, and configure your own workflows to match how you work.

    Vibe powers you and your team to build, maintain, and ship code faster.

    Mistral Vibe is now available on the Le Chat Pro and Team plans—with pay-as-you-go credits for power use, or bring your own API key.

    Highlights

    • Mistral Vibe 2.0: Custom subagents, multi-choice clarifications, slash-command skills, unified agent modes, and automatic updates.
    • Available today on Le Chat Pro and Team plans with PAYG for extra usage, or BYOK.
    • Devstral 2 moves to paid API access: Free on the Experiment plan in Mistral Studio.
    • Enterprise services: fine-tuning, reinforcement learning, and code modernization.

    What's new in Vibe

    Mistral Vibe already gives you terminal-native code automation with natural language commands, multi-file orchestration, smart references, and full codebase context. With 2.0, we're adding the controls to make it yours.

    Custom subagents: Build specialized agents for targeted tasks—deploy scripts, PR reviews, test generation—and invoke them on demand.

    Multi-choice clarifications: Vibe asks before it acts. When intent is ambiguous, it prompts with options instead of guessing.

    Slash-command skills: Load skills with /: preconfigured workflows for common tasks like deploying, linting, or generating docs.

    Unified agent modes: Configure custom modes that combine tools, permissions, and behaviors. Switch contexts without switching tools.

    Bug fixes and improvements in the Vibe CLI now ship continuously. No manual updates required.

    Plans and pricing

    Mistral Vibe is available on Le Chat Pro and Team plans with generous usage for full-time development. Subscribers can continue beyond limits with pay-as-you-go at API rates until usage resets. Devstral 2 now moves to paid API access.

    Le Chat Pro

    • $14.99 /month
    • Full access to Mistral Vibe CLI and Devstral 2. Students get 50% off. Ideal for sustained, daily dev work.

    Le Chat Team

    • $24.99 /seat/month
    • Everything in Pro, plus unified billing, management, and priority support.

    API Access

    • Build with Devstral directly via Mistral Studio.

    • Devstral 2: $0.40/M input tokens, $2.00/M output tokens
    • Devstral Small 2: $0.10/M input tokens, $0.30/M output tokens

    Free API usage remains available on the Experiment plan — ideal for testing and prototyping.
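    At these per-token rates, estimating spend is straightforward; a quick sketch (the helper name is ours):

```python
def devstral_cost_usd(tokens_in: int, tokens_out: int, small: bool = False) -> float:
    """Estimate API cost from the published per-million-token rates:
    Devstral 2 at $0.40 in / $2.00 out, Devstral Small 2 at $0.10 / $0.30."""
    rate_in, rate_out = (0.10, 0.30) if small else (0.40, 2.00)
    return (tokens_in * rate_in + tokens_out * rate_out) / 1_000_000

# A session with 2M input tokens and 0.5M output tokens on Devstral 2:
print(round(devstral_cost_usd(2_000_000, 500_000), 2))  # 1.8
```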

    Enterprise add-ons

    For teams with advanced needs, we offer fine-tuning on internal languages and DSLs, reinforcement learning with your own environment, and end-to-end code modernization—migrate entire codebases to modern stacks without losing business logic or breaking behavior. We already deliver these solutions for some of the world's largest organizations in finance, defense, and infrastructure.

    Contact us to learn more.

    Get started

      1. Install the Vibe CLI in your terminal:
         curl -LsSf https://mistral.ai/vibe/install.sh | bash
         or install it with uv:
         uv tool install mistral-vibe

      2. Sign up to unlock full access.
      3. Start building: run vibe in your terminal.
      4. Explore the documentation or join us on X for updates.

    We’re hiring.

    If you want to build world-class AI products with us, we'd love to hear from you.
    Apply to join our team.

    Original source Report a problem
  • Dec 22, 2025
    • Date parsed from source:
      Dec 22, 2025
    • First seen by Releasebot:
      Dec 23, 2025
    • Modified by Releasebot:
      Feb 3, 2026

    Mistral Common by Mistral

    v1.8.8: Backward comp

    What's Changed

    • Add new token logic asrstr by @patrickvonplaten in #172
    • [Backward comp] Still need the _control_tokens for vLLM by @patrickvonplaten in #173

    Full Changelog: v1.8.7...v1.8.8

    Original source Report a problem
  • Dec 22, 2025
    • Date parsed from source:
      Dec 22, 2025
    • First seen by Releasebot:
      Dec 23, 2025
    • Modified by Releasebot:
      Feb 3, 2026

    Mistral Common by Mistral

    v1.8.7: Refactoring and bug fixes.

    What's Changed

    • Remove the index field from assistant tool_calls. by @tobrun in #165
    • Rename get control -> to get special & add is_special by @patrickvonplaten in #164
    • Add TextChunk support to ToolMessage by @juliendenize in #170
    • Version 1.8.7 by @juliendenize in #171

    New Contributors

    • @tobrun made their first contribution in #165

    Full Changelog: v1.8.6...v1.8.7

    Original source Report a problem
  • December 2025
    • No date parsed from source.
    • First seen by Releasebot:
      Dec 18, 2025

    Mistral

    Introducing Mistral OCR 3

    Mistral OCR 3 debuts with breakthrough accuracy and efficiency across handwriting, forms, scans, and complex tables. It powers the Document AI Playground, outputs markdown with HTML tables, and is available via API at $2 per 1,000 pages with a 50% Batch-API discount.

    Highlights

    • Breakthrough performance: 74% overall win rate over Mistral OCR 2 on forms, scanned documents, complex tables, and handwriting.
    • State-of-the-art accuracy, outperforming both enterprise document processing solutions as well as AI-native OCR solutions
    • Now powers Document AI Playground in Mistral AI Studio, a simple drag-and-drop interface for parsing PDFs/images into clean text or structured JSON
    • Major upgrade over Mistral OCR 2 in forms, handwritten content, low-quality scans, and tables

    Overview

    Mistral OCR 3 is designed to extract text and embedded images from a wide range of documents with exceptional fidelity. It supports markdown output enriched with HTML-based table reconstruction, enabling downstream systems to understand not just document content, but also structure. As a much smaller model than most competitive solutions, it is available at an industry-leading price of $2 per 1,000 pages, with a 50% Batch-API discount, reducing the cost to $1 per 1,000 pages.
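    At $2 per 1,000 pages, and $1 with the batch discount, budgeting is simple arithmetic; a short sketch (the helper is ours for illustration):

```python
def ocr_cost_usd(pages: int, batch: bool = False) -> float:
    """$2 per 1,000 pages; the Batch API discount halves it to $1."""
    rate_per_1k = 1.0 if batch else 2.0
    return pages / 1_000 * rate_per_1k

print(ocr_cost_usd(250_000))              # 500.0
print(ocr_cost_usd(250_000, batch=True))  # 250.0
```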

    Developers can integrate the model (mistral-ocr-2512) via API, and users can leverage Document AI, a UI that parses documents into text or structured JSON instantly.

    Mistral OCR 3 is a significant upgrade across all languages and document form factors compared to Mistral OCR 2.

    Upgrades over previous generations of OCR models

    Whereas most OCR solutions today specialize in specific document types, Mistral OCR 3 is designed to excel at processing the vast majority of document types in organizations and everyday settings.

    • Handwriting: Mistral OCR accurately interprets cursive, mixed-content annotations, and handwritten text layered over printed forms.
    • Forms: Improved detection of boxes, labels, handwritten entries, and dense layouts. Works well on invoices, receipts, compliance forms, government documents, and such.
    • Scanned & complex documents: Significantly more robust to compression artifacts, skew, distortion, low DPI, and background noise.
    • Complex tables: Reconstructs table structures with headers, merged cells, multi-row blocks, and column hierarchies. Outputs HTML table tags with colspan/rowspan to fully preserve layout.

    Recommended use cases and applications

    Mistral OCR 3 is ideal for both high-volume enterprise pipelines and interactive document workflows. Developers can use it for:

    • Extracting text and images into markdown for downstream agents and knowledge systems
    • Automated parsing of forms, invoices, and operational documents
    • End-to-end document understanding pipelines
    • Digitization of handwritten or historical documents
    • Any other document → knowledge transformation applications.

    Our early customers are using Mistral OCR 3 to process invoices into structured fields, digitize company archives, extract clean text from technical and scientific reports, and improve enterprise search.

    “OCR remains foundational for enabling generative AI and agentic AI,” said Tim Law, IDC Director of Research for AI and Automation. “Those organizations that can efficiently and cost-effectively extract text and embedded images with high fidelity will unlock value and will gain a competitive advantage from their data by providing richer context.”

    Available today

    Access the model either through the API or via the new Document AI Playground interface, both in Mistral AI Studio. Mistral OCR 3 is fully backward compatible with Mistral OCR 2. For more details, head over to mistral.ai/docs.

    Original source Report a problem
  • Dec 17, 2025
    • Date parsed from source:
      Dec 17, 2025
    • First seen by Releasebot:
      Dec 18, 2025

    Mistral

    December 17

    MODEL RELEASED

    We released OCR 3 (mistral-ocr-2512).

    API UPDATED

    • Introducing table_format in our OCR API, allowing you to choose between markdown and html for table formatting.
    • Introducing extract_footer, extract_header in our OCR API, as well as hyperlinks output.
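    The new options might be combined in a request payload along these lines. The field names mirror this changelog entry, but the surrounding schema is an assumption; check the official OCR API reference before relying on it.

```python
# Hypothetical request payload illustrating the new OCR options.
payload = {
    "model": "mistral-ocr-2512",
    "document": {
        "type": "document_url",
        "document_url": "https://example.com/sample.pdf",  # placeholder URL
    },
    "table_format": "html",   # or "markdown"
    "extract_header": True,   # include page headers in the output
    "extract_footer": True,   # include page footers in the output
}
```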
    Original source Report a problem
  • Dec 15, 2025
    • Date parsed from source:
      Dec 15, 2025
    • First seen by Releasebot:
      Dec 17, 2025

    Mistral

    December 15

    MODEL RELEASED

    We released Mistral Small Creative (labs-mistral-small-creative) as a Labs model.

    Original source Report a problem
  • Dec 9, 2025
    • Date parsed from source:
      Dec 9, 2025
    • First seen by Releasebot:
      Dec 10, 2025

    Mistral

    Introducing: Devstral 2 and Mistral Vibe CLI

    Devstral 2 launches a new open‑source coding model family with 123B and 24B sizes plus a native Mistral Vibe CLI for end‑to‑end code automation. Open licenses, free API access now, and strong on‑device options mark a bold step for open source code agents.

    State-of-the-art, open-source agentic coding models and CLI agent.

    Today, we're releasing Devstral 2—our next-generation coding model family available in two sizes: Devstral 2 (123B) and Devstral Small 2 (24B). Devstral 2 ships under a modified MIT license, while Devstral Small 2 uses Apache 2.0. Both are open-source and permissively licensed to accelerate distributed intelligence.
    Devstral 2 is currently free to use via our API.
    We are also introducing Mistral Vibe, a native CLI built for Devstral that enables end-to-end code automation.

    Highlights.

    • Devstral 2: SOTA open model for code agents, achieving 72.2% on SWE-bench Verified with a fraction of the parameters of its competitors.
    • Up to 7x more cost-efficient than Claude Sonnet at real-world tasks.
    • Mistral Vibe CLI: Native, open-source agent in your terminal solving software engineering tasks autonomously.
    • Devstral Small 2: 24B parameter model available via API or deployable locally on consumer hardware.
    • Compatible with on-prem deployment and custom fine-tuning.

    Devstral: the next generation of SOTA coding.

    Devstral 2 is a 123B-parameter dense transformer supporting a 256K context window. It reaches 72.2% on SWE-bench Verified—establishing it as one of the best open-weight models while remaining highly cost efficient. Released under a modified MIT license, Devstral sets the open state-of-the-art for code agents.
    Devstral Small 2 scores 68.0% on SWE-bench Verified, and places firmly among models up to five times its size while being capable of running locally on consumer hardware.

    Devstral 2 (123B) and Devstral Small 2 (24B) are 5x and 28x smaller than DeepSeek V3.2, and 8x and 41x smaller than Kimi K2, proving that compact models can match or exceed the performance of much larger competitors. Their reduced size makes deployment practical on limited hardware, lowering barriers for developers, small businesses, and hobbyists.

    Built for production-grade workflows.

    Devstral 2 supports exploring codebases and orchestrating changes across multiple files while maintaining architecture-level context. It tracks framework dependencies, detects failures, and retries with corrections—solving challenges like bug fixing and modernizing legacy systems.
    The model can be fine-tuned to prioritize specific languages or optimize for large enterprise codebases.
    We evaluated Devstral 2 against DeepSeek V3.2 and Claude Sonnet 4.5 using human evaluations conducted by an independent annotation provider, with tasks scaffolded through Cline. Devstral 2 shows a clear advantage over DeepSeek V3.2, with a 42.8% win rate versus 28.6% loss rate. However, Claude Sonnet 4.5 remains significantly preferred, indicating a gap with closed-source models persists.

    “Devstral 2 is at the frontier of open-source coding models. In Cline, it delivers a tool-calling success rate on par with the best closed models; it's a remarkably smooth driver. This is a massive contribution to the open-source ecosystem.” — Cline.
    “Devstral 2 was one of our most successful stealth launches yet, surpassing 17B tokens in the first 24 hours. Mistral AI is moving at Kilo Speed with a cost-efficient model that truly works at scale.” — Kilo Code.
    Devstral Small 2, a 24B-parameter model with the same 256K context window and released under Apache 2.0, brings these capabilities to a compact, locally deployable form. Its size enables fast inference, tight feedback loops, and easy customization—with fully private, on-device runtime. It also supports image inputs, and can power multimodal agents.

    Mistral Vibe CLI.

    Mistral Vibe CLI is an open-source command-line coding assistant powered by Devstral. It explores, modifies, and executes changes across your codebase using natural language—in your terminal or integrated into your preferred IDE via the Agent Communication Protocol. It is released under the Apache 2.0 license.
    Vibe CLI provides an interactive chat interface with tools for file manipulation, code searching, version control, and command execution. Key features:

    • Project-aware context: Automatically scans your file structure and Git status to provide relevant context
    • Smart references: Reference files with @ autocomplete, execute shell commands with !, and use slash commands for configuration changes
    • Multi-file orchestration: Understands your entire codebase—not just the file you're editing—enabling architecture-level reasoning that can halve your PR cycle time
    • Persistent history, autocompletion, and customizable themes.

    You can run Vibe CLI programmatically for scripting, toggle auto-approval for tool execution, configure local models and providers through a simple config.toml, and control tool permissions to match your workflow.

    Get started.

    Devstral 2 is currently offered free via our API. After the free period, the API pricing will be $0.40/$2.00 per million tokens (input/output) for Devstral 2 and $0.10/$0.30 for Devstral Small 2.
    We’ve partnered with leading open agent tools Kilo Code and Cline to bring Devstral 2 to where you already build.
    Mistral Vibe CLI is available as an extension in Zed, so you can use it directly inside your IDE.

    Recommended deployment for Devstral.

    Devstral 2 is optimized for data center GPUs and requires a minimum of 4 H100-class GPUs for deployment. You can try it today on build.nvidia.com. Devstral Small 2 is built for single-GPU operation and runs across a broad range of NVIDIA systems, including DGX Spark and GeForce RTX. NVIDIA NIM support will be available soon.
    Devstral Small runs on consumer-grade GPUs as well as CPU-only configurations with no dedicated GPU required.
    For optimal performance, we recommend a temperature of 0.2 and following the best practices defined for Mistral Vibe CLI.

    Contact us.

    We’re excited to see what you will build with Devstral 2, Devstral Small 2, and Vibe CLI!
    Share your projects, questions, or discoveries with us on X/Twitter, Discord, or GitHub.

    We’re hiring!

    If you’re interested in shaping open-source research and building world-class interfaces that bring truly open, frontier AI to users, we welcome you to apply to join our team.

    Original source Report a problem
  • Dec 8, 2025
    • Date parsed from source:
      Dec 8, 2025
    • First seen by Releasebot:
      Dec 10, 2025
    • Modified by Releasebot:
      Dec 17, 2025

    Mistral

    December 8

    MODEL RELEASED

    We released Devstral 2 (devstral-2512) and Devstral Small 2 (labs-devstral-small-2512).

    OTHER

    We released Mistral Vibe.

    Original source Report a problem
