Mistral Common Release Notes

Last updated: Feb 12, 2026

  • Feb 12, 2026
    • Date parsed from source:
      Feb 12, 2026
    • First seen by Releasebot:
      Feb 12, 2026

    Mistral Common by Mistral

    v1.9.1 Patch Release

    Refactors the online streaming processing and allows for a dynamic streaming delay.

    What's Changed

    • Add AGENTS.md by @juliendenize in #182
    • fix: correct typos 'occurence' and 'recieved' by @thecaptain789 in #185
    • [Audio] Refactor streaming logic by @patrickvonplaten in #187

    New Contributors

    • @thecaptain789 made their first contribution in #185

    Full Changelog: v1.9.0...v1.9.1

  • Feb 3, 2026
    • Date parsed from source:
      Feb 3, 2026
    • First seen by Releasebot:
      Feb 3, 2026

    Mistral Common by Mistral

    v1.9.0 - Stream my audio 🎙️

    Mistral-Common adds streaming audio processing and realtime transcription support with Voxtral Mini. The changelog highlights token and padding improvements, a new audio encoder for Tokenizer V13, and accessibility tweaks.

    Mistral-Common can now process streaming requests

    import numpy as np
    from mistral_common.audio import Audio
    from mistral_common.protocol.instruct.chunk import RawAudio
    from mistral_common.protocol.transcription.request import StreamingMode, TranscriptionRequest
    from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
    
    # 1. Load the tokenizer with audio support
    tokenizer = MistralTokenizer.from_hf_hub("mistralai/Voxtral-Mini-4B-Realtime-2602")
    # 2. Create sample audio data (or load from a file)
    sampling_rate = 16_000
    duration_s = 2.0
    audio_array = np.random.uniform(-1, 1, size=int(duration_s * sampling_rate)).astype(np.float32)
    audio = Audio(audio_array=audio_array, sampling_rate=sampling_rate, format="wav")
    # 3. Create the streaming transcription request
    request = TranscriptionRequest(
        audio=RawAudio(data=audio.to_base64("wav"), format="wav"),
        streaming=StreamingMode.ONLINE,  # or StreamingMode.OFFLINE
        language=None,
    )
    # 4. Encode the request
    tokenized = tokenizer.encode_transcription(request)
    # 5. Access the results
    print(f"Tokens: {tokenized.tokens}")
    print(f"Number of tokens: {len(tokenized.tokens)}")
    print(f"Number of audio segments: {len(tokenized.audios)}")
    

    See https://huggingface.co/mistralai/Voxtral-Mini-4B-Realtime-2602 for more info.
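
    For realtime use, audio typically arrives incrementally. Below is one way to simulate that with the classes and variables from the snippet above: re-encode progressively longer slices of the waveform in StreamingMode.ONLINE and watch the token count grow. The 0.5 s chunk size and the re-encode-from-scratch loop are illustrative assumptions, not a prescribed API.

    # Simulate audio arriving in 0.5 s chunks and re-encode the prefix received so far.
    chunk_samples = int(0.5 * sampling_rate)
    for end in range(chunk_samples, len(audio_array) + 1, chunk_samples):
        partial = Audio(
            audio_array=audio_array[:end],
            sampling_rate=sampling_rate,
            format="wav",
        )
        partial_request = TranscriptionRequest(
            audio=RawAudio(data=partial.to_base64("wav"), format="wav"),
            streaming=StreamingMode.ONLINE,
            language=None,
        )
        partial_tokenized = tokenizer.encode_transcription(partial_request)
        print(f"{end / sampling_rate:.1f}s of audio -> {len(partial_tokenized.tokens)} tokens")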

    What's Changed

    • Add new token logic asrstr by @patrickvonplaten in #172
    • [Backward comp] Still need the _control_tokens for vLLM by @patrickvonplaten in #173
    • Release 1.8.8 by @juliendenize in #174
    • [Audio] Update padding by @patrickvonplaten in #175
    • [Audio] Improve padding for streaming by @patrickvonplaten in #177
    • Add audio_encoder to Tokenizer V13 by @amosyou in #180
    • Release v1.9.0 - Audio streaming by @patrickvonplaten in #179
    • Fix image tests with downloads by @juliendenize in #181
    • Enhance accessibility by @juliendenize in #176

    New Contributors

    • @amosyou made their first contribution in #180

    Full Changelog: v1.8.7...v1.9.0

  • Dec 22, 2025
    • Date parsed from source:
      Dec 22, 2025
    • First seen by Releasebot:
      Dec 23, 2025
    • Modified by Releasebot:
      Feb 3, 2026

    Mistral Common by Mistral

    v1.8.8: Backward comp

    What's Changed

    • Add new token logic asrstr by @patrickvonplaten in #172
    • [Backward comp] Still need the _control_tokens for vLLM by @patrickvonplaten in #173

    Full Changelog: v1.8.7...v1.8.8

  • Dec 22, 2025
    • Date parsed from source:
      Dec 22, 2025
    • First seen by Releasebot:
      Dec 23, 2025
    • Modified by Releasebot:
      Feb 3, 2026

    Mistral Common by Mistral

    v1.8.7: Refactoring and bug fixes.

    What's Changed

    • Remove the index field from assistant tool_calls. by @tobrun in #165
    • Rename get control -> to get special & add is_special by @patrickvonplaten in #164
    • Add TextChunk support to ToolMessage by @juliendenize in #170
    • Version 1.8.7 by @juliendenize in #171

    New Contributors

    • @tobrun made their first contribution in #165

    Full Changelog: v1.8.6...v1.8.7

  • Nov 30, 2025
    • Date parsed from source:
      Nov 30, 2025
    • First seen by Releasebot:
      Dec 1, 2025

    Mistral Common by Mistral

    v1.8.6: rm Python 3.9, bug fixes.

    Version 1.8.6 brings cleanup and enhancements: new normalizer and validator utilities, tokenizer file and token handling improvements, new usage restrictions regarding third-party rights, and updates to tests and logging. Python 3.9 support has been dropped.

    Release notes

    • Remove deprecated imports in docs. by @juliendenize in #138
    • Add normalizer and validator utils by @juliendenize in #140
    • Refactor private aggregate messages for InstructRequestNormalizer by @juliendenize in #141
    • test: improve unit test for is_opencv_installed by @PrasanaaV in #143
    • Optimize spm decode function by @juliendenize in #144
    • Add get_one_valid_tokenizer_file by @juliendenize in #142
    • Remove Python 3.9 support by @juliendenize in #145
    • Correctly pass revision and token to hf_api by @juliendenize in #149
    • Fix assertion in test_convert_text_chunk and tool_call by @patrickvonplaten in #152
    • Pins GH actions by @arcanis in #160
    • Add usage restrictions regarding third-party rights. by @juliendenize in #161
    • Improve tekken logging message for vocabulary by @juliendenize in #162
    • Set version 1.8.6 by @juliendenize in #151

    New Contributors

    • @PrasanaaV made their first contribution in #143
    • @arcanis made their first contribution in #160

    Full Changelog: v1.8.5...v1.8.6

  • Sep 11, 2025
    • Date parsed from source:
      Sep 11, 2025
    • First seen by Releasebot:
      Oct 26, 2025
    • Modified by Releasebot:
      Dec 23, 2025

    Mistral Common by Mistral

    v1.8.5: Patch Release

    What's Changed

    • Make model field optional in TranscriptionRequest by @juliendenize in #128 (sketched below)
    • Remove all responses and embedding requests. Add transcription docs. by @juliendenize in #133
    • Add chunk file by @juliendenize in #129
    • allow message content to be empty string by @mingfang in #135
    • Add test empty content for AssistantMessage v7 by @juliendenize in #136
    • v1.8.5 by @juliendenize in #137

    New Contributors

    • @mingfang made their first contribution in #135

    Full Changelog: v1.8.4...v1.8.5
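
    As referenced above, #128 makes the model field on TranscriptionRequest optional. A minimal sketch of what that allows, reusing only APIs shown elsewhere in these notes ("sample.wav" is a placeholder path, and the v1.8.x import locations are assumed):

    from mistral_common.audio import Audio
    from mistral_common.protocol.instruct.messages import RawAudio
    from mistral_common.protocol.transcription.request import TranscriptionRequest

    # Load a local audio file ("sample.wav" is a placeholder).
    audio = Audio.from_file("sample.wav", strict=False)

    # Before #128 the model field was required; it can now be left out.
    request = TranscriptionRequest(
        audio=RawAudio(data=audio.to_base64("wav"), format="wav"),
    )
    print(request.model)  # no model specified
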
  • Aug 20, 2025
    • Date parsed from source:
      Aug 20, 2025
    • First seen by Releasebot:
      Oct 26, 2025
    • Modified by Releasebot:
      Nov 9, 2025

    Mistral Common by Mistral

    v1.8.4: optional dependencies and allow random padding on ChatCompletionResponseStreamResponse

    Changelog

    • Update experimental.md by @juliendenize in #124
    • Make sentencepiece optional and refactor optional imports by @juliendenize in #126
    • Improve UX for contributing by @juliendenize in #127
    • feat: allow random padding on ChatCompletionResponseStreamResponse by @aac228 in #131

    New Contributors

    • @aac228 made their first contribution in #131

    Full Changelog: v1.8.3...v1.8.4

  • Jul 16, 2025
    • Date parsed from source:
      Jul 16, 2025
    • First seen by Releasebot:
      Oct 26, 2025
    • Modified by Releasebot:
      Jan 29, 2026

    Mistral Common by Mistral

    v1.8.1: Add AudioURLChunk

    This release adds AudioURLChunk support, enabling HTTP(S) URLs, file paths, and base64 strings in content chunks for audio workflows. It includes a usage example with the Voxtral-Mini-3B-2507 model; the full changelog spans v1.8.0 to v1.8.1.

    Add AudioURLChunk by @juliendenize in #120

    Now you can use HTTP(S) URLs, file paths, and base64 strings (without specifying the format) in your content chunks, thanks to AudioURLChunk!

    from mistral_common.protocol.instruct.messages import AudioURL, AudioURLChunk, TextChunk, UserMessage
    from mistral_common.protocol.instruct.request import ChatCompletionRequest
    from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
    
    repo_id = "mistralai/Voxtral-Mini-3B-2507"
    tokenizer = MistralTokenizer.from_hf_hub(repo_id)
    
    text_chunk = TextChunk(text="What do you think about this audio?")
    user_msg = UserMessage(content=[AudioURLChunk(audio_url=AudioURL(url="https://freewavesamples.com/files/Ouch-6.wav")), text_chunk])
    request = ChatCompletionRequest(messages=[user_msg])
    tokenized = tokenizer.encode_chat_completion(request)
    # pass tokenized.tokens to your favorite audio model
    print(tokenized.tokens)
    print(tokenized.audios)
    # print text to visually see tokens
    print(tokenized.text)
    

    Full Changelog: v1.8.0...v1.8.1
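
    Since AudioURLChunk also accepts file paths and base64 strings, the same request can be built from a local file. A minimal variation on the snippet above (the path is a placeholder, and text_chunk and tokenizer are reused from that snippet):

    # Local file path instead of an HTTP(S) URL; a raw base64 string works the same way.
    local_chunk = AudioURLChunk(audio_url=AudioURL(url="/path/to/audio.wav"))
    user_msg = UserMessage(content=[local_chunk, text_chunk])
    tokenized = tokenizer.encode_chat_completion(ChatCompletionRequest(messages=[user_msg]))
    print(tokenized.text)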

  • Jul 15, 2025
    • Date parsed from source:
      Jul 15, 2025
    • First seen by Releasebot:
      Nov 14, 2025
    • Modified by Releasebot:
      Jan 9, 2026

    Mistral Common by Mistral

    v1.8.0 - Mistral welcomes 📢

    A hands-on audio chat demo accompanying the Voxtral Mini release, showing how to feed audio chunks and text into a chat model. The snippet spans the v1.7.0 to v1.8.0 changelog range and serves as a concrete integration example for multi-speaker prompts.

    [Audio] Add audio by @patrickvonplaten in #119

    Full Changelog: v1.7.0...v1.8.0

    Audio chat example

    from mistral_common.protocol.instruct.messages import TextChunk, AudioChunk, UserMessage, AssistantMessage, RawAudio
    from mistral_common.protocol.instruct.request import ChatCompletionRequest
    from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
    from mistral_common.audio import Audio
    from huggingface_hub import hf_hub_download
    
    repo_id = "mistralai/Voxtral-Mini-3B-2507"
    tokenizer = MistralTokenizer.from_hf_hub(repo_id)
    
    # Download two sample audio files from the Hugging Face Hub
    obama_file = hf_hub_download("patrickvonplaten/audio_samples", "obama.mp3", repo_type="dataset")
    bcn_file = hf_hub_download("patrickvonplaten/audio_samples", "bcn_weather.mp3", repo_type="dataset")
    
    def file_to_chunk(file: str) -> AudioChunk:
        audio = Audio.from_file(file, strict=False)
        return AudioChunk.from_audio(audio)
    
    text_chunk = TextChunk(text="Which speaker do you prefer between the two? Why? How are they different from each other?")
    user_msg = UserMessage(content=[file_to_chunk(obama_file), file_to_chunk(bcn_file), text_chunk])
    request = ChatCompletionRequest(messages=[user_msg])
    tokenized = tokenizer.encode_chat_completion(request)
    # pass tokenized.tokens to your favorite audio model
    print(tokenized.tokens)
    print(tokenized.audios)
    # print text to visually see tokens
    print(tokenized.text)
    

  • Jul 15, 2025
    • Date parsed from source:
      Jul 15, 2025
    • First seen by Releasebot:
      Oct 28, 2025
    • Modified by Releasebot:
      Nov 3, 2025

    Mistral Common by Mistral

    v1.8.0 - Mistral welcomes 📢

    [Audio] Add audio by @patrickvonplaten in #119

    Full Changelog: v1.7.0...v1.8.0

    Audio chat and audio transcription example code snippets are included in the release notes.
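
    The transcription example itself is not reproduced here; as a rough reconstruction using only APIs that appear elsewhere in these notes (the repo id, the use of the model field, and encode_transcription for the 1.8.x API are assumptions), it might look like:

    from huggingface_hub import hf_hub_download
    from mistral_common.audio import Audio
    from mistral_common.protocol.instruct.messages import RawAudio
    from mistral_common.protocol.transcription.request import TranscriptionRequest
    from mistral_common.tokens.tokenizers.mistral import MistralTokenizer

    repo_id = "mistralai/Voxtral-Mini-3B-2507"
    tokenizer = MistralTokenizer.from_hf_hub(repo_id)

    # Reuse a sample file from the audio chat example above.
    audio_file = hf_hub_download("patrickvonplaten/audio_samples", "obama.mp3", repo_type="dataset")
    audio = Audio.from_file(audio_file, strict=False)

    request = TranscriptionRequest(
        model=repo_id,  # still required in v1.8.0; made optional in v1.8.5
        audio=RawAudio(data=audio.to_base64("wav"), format="wav"),
    )
    tokenized = tokenizer.encode_transcription(request)
    print(tokenized.tokens)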
