Mistral Common Release Notes

Last updated: Apr 1, 2026

  • Apr 1, 2026
    • Date parsed from source:
      Apr 1, 2026
    • First seen by Releasebot:
      Apr 1, 2026

    Mistral Common by Mistral

    v1.11.0

    Mistral Common adds tag v1.11.0 for its public PyPI release.

  • Mar 13, 2026
    • Date parsed from source:
      Mar 13, 2026
    • First seen by Releasebot:
      Mar 13, 2026

    v1.10.0: Tokenizer v15, Reasoning Effort and Python 3.14

    Mistral releases version 1.10.0 with Python 3.14 support, a new speech request type, strict function calling, and the v15 tokenizer. Tests now use mocked HTTP responses, and several new contributors are noted. The full changelog covers v1.9.1 to v1.10.0.

    What's Changed

    • Allow System Prompt with Audio for v13 by @juliendenize in #184
    • test_audio: Replace live network calls in test_from_url with mocked HTTP responses by @framsouza in #188
    • fix: typo in serve command help text by @framsouza in #189
    • Add Python 3.14 support by @juliendenize in #195
    • test: mock remaining network call in test_encode_invalid_audio_url_chunk by @abdelhadi703 in #192
    • [Speech Request] Add speech request by @patrickvonplaten in #196
    • Add strict function calling support by @juliendenize in #197
    • Add v15 by @juliendenize in #199
    • Version 1.10.0 by @juliendenize in #200
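    PR #197 adds strict function calling. In strict mode, generated tool arguments must match the declared JSON schema exactly: no undeclared keys, all required keys present. A toy sketch of what that enforcement means (the `strict` flag placement and the validator below are illustrative assumptions, not mistral-common's implementation):

```python
import json

# OpenAI-style tool definition; the "strict" flag name/placement is an assumption.
tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "strict": True,  # opt in to strict schema enforcement
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
            "additionalProperties": False,
        },
    },
}

def validate_strict(arguments_json: str, schema: dict) -> bool:
    """Toy check: no undeclared keys, every required key present."""
    args = json.loads(arguments_json)
    props = schema["properties"]
    if schema.get("additionalProperties") is False and set(args) - set(props):
        return False
    return all(k in args for k in schema.get("required", []))

schema = tool["function"]["parameters"]
print(validate_strict('{"city": "Paris"}', schema))                # True
print(validate_strict('{"city": "Paris", "units": "C"}', schema))  # False
```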

    New Contributors

    • @framsouza made their first contribution in #188
    • @abdelhadi703 made their first contribution in #192

    Full Changelog: v1.9.1...v1.10.0


  • Feb 12, 2026
    • Date parsed from source:
      Feb 12, 2026
    • First seen by Releasebot:
      Feb 12, 2026

    v1.9.1 Patch Release

    Refactor online streaming processing and allow for dynamic streaming delay

    What's Changed

    • Add AGENTS.md by @juliendenize in #182
    • fix: correct typos 'occurence' and 'recieved' by @thecaptain789 in #185
    • [Audio] Refactor streaming logic by @patrickvonplaten in #187

    New Contributors

    • @thecaptain789 made their first contribution in #185

    Full Changelog: v1.9.0...v1.9.1

  • Feb 3, 2026
    • Date parsed from source:
      Feb 3, 2026
    • First seen by Releasebot:
      Feb 3, 2026

    v1.9.0 - Stream my audio 🎙️

    Mistral-Common adds streaming audio processing and realtime transcription support with Voxtral Mini. The changelog also highlights token and padding improvements, a new audio encoder for Tokenizer V13, and accessibility tweaks.

    Mistral-Common can now process streaming requests

    import numpy as np
    from mistral_common.audio import Audio
    from mistral_common.protocol.instruct.chunk import RawAudio
    from mistral_common.protocol.transcription.request import StreamingMode, TranscriptionRequest
    from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
    
    # 1. Load the tokenizer with audio support
    tokenizer = MistralTokenizer.from_hf_hub("mistralai/Voxtral-Mini-4B-Realtime-2602")
    # 2. Create sample audio data (or load from a file)
    sampling_rate = 16_000
    duration_s = 2.0
    audio_array = np.random.uniform(-1, 1, size=int(duration_s * sampling_rate)).astype(np.float32)
    audio = Audio(audio_array=audio_array, sampling_rate=sampling_rate, format="wav")
    # 3. Create the streaming transcription request
    request = TranscriptionRequest(
        audio=RawAudio(data=audio.to_base64("wav"), format="wav"),
        streaming=StreamingMode.ONLINE,  # or StreamingMode.OFFLINE
        language=None,
    )
    # 4. Encode the request
    tokenized = tokenizer.encode_transcription(request)
    # 5. Access the results
    print(f"Tokens: {tokenized.tokens}")
    print(f"Number of tokens: {len(tokenized.tokens)}")
    print(f"Number of audio segments: {len(tokenized.audios)}")
    

    See https://huggingface.co/mistralai/Voxtral-Mini-4B-Realtime-2602 for more info.

    What's Changed

    • Add new token logic asrstr by @patrickvonplaten in #172
    • [Backward comp] Still need the _control_tokens for vLLM by @patrickvonplaten in #173
    • Release 1.8.8 by @juliendenize in #174
    • [Audio] Update padding by @patrickvonplaten in #175
    • [Audio] Improve padding for streaming by @patrickvonplaten in #177
    • Add audio_encoder to Tokenizer V13 by @amosyou in #180
    • Release v1.9.0 - Audio streaming by @patrickvonplaten in #179
    • Fix image tests with downloads by @juliendenize in #181
    • Enhance accessibility by @juliendenize in #176

    New Contributors

    • @amosyou made their first contribution in #180

    Full Changelog: v1.8.7...v1.9.0

  • Dec 22, 2025
    • Date parsed from source:
      Dec 22, 2025
    • First seen by Releasebot:
      Dec 23, 2025
    • Modified by Releasebot:
      Feb 3, 2026

    v1.8.8: Backward comp

    What's Changed

    • Add new token logic asrstr by @patrickvonplaten in #172
    • [Backward comp] Still need the _control_tokens for vLLM by @patrickvonplaten in #173

    Full Changelog: v1.8.7...v1.8.8

  • Dec 22, 2025
    • Date parsed from source:
      Dec 22, 2025
    • First seen by Releasebot:
      Dec 23, 2025
    • Modified by Releasebot:
      Feb 3, 2026

    v1.8.7: Refactoring and bug fixes.

    What's Changed

    • Remove the index field from assistant tool_calls. by @tobrun in #165
    • Rename get control -> to get special & add is_special by @patrickvonplaten in #164
    • Add TextChunk support to ToolMessage by @juliendenize in #170
    • Version 1.8.7 by @juliendenize in #171

    New Contributors

    • @tobrun made their first contribution in #165

    Full Changelog: v1.8.6...v1.8.7

  • Nov 30, 2025
    • Date parsed from source:
      Nov 30, 2025
    • First seen by Releasebot:
      Dec 1, 2025

    v1.8.6: rm Python 3.9, bug fixes.

    Version 1.8.6 brings cleanup and enhancements: new normalizer and validator utilities, token-handling improvements, usage restrictions regarding third-party rights, and updates to tests and logging.

    Release notes

    • Remove deprecated imports in docs. by @juliendenize in #138
    • Add normalizer and validator utils by @juliendenize in #140
    • Refactor private aggregate messages for InstructRequestNormalizer by @juliendenize in #141
    • test: improve unit test for is_opencv_installed by @PrasanaaV in #143
    • Optimize spm decode function by @juliendenize in #144
    • Add get_one_valid_tokenizer_file by @juliendenize in #142
    • Remove Python 3.9 support by @juliendenize in #145
    • Correctly pass revision and token to hf_api by @juliendenize in #149
    • Fix assertion in test_convert_text_chunk and tool_call by @patrickvonplaten in #152
    • Pins GH actions by @arcanis in #160
    • Add usage restrictions regarding third-party rights. by @juliendenize in #161
    • Improve tekken logging message for vocabulary by @juliendenize in #162
    • Set version 1.8.6 by @juliendenize in #151

    New Contributors

    • @PrasanaaV made their first contribution in #143
    • @arcanis made their first contribution in #160

    Full Changelog: v1.8.5...v1.8.6

  • Sep 11, 2025
    • Date parsed from source:
      Sep 11, 2025
    • First seen by Releasebot:
      Oct 26, 2025
    • Modified by Releasebot:
      Dec 23, 2025

    v1.8.5: Patch Release

    What's Changed

    • Make model field optional in TranscriptionRequest by @juliendenize in #128
    • Remove all responses and embedding requests. Add transcription docs. by @juliendenize in #133
    • Add chunk file by @juliendenize in #129
    • allow message content to be empty string by @mingfang in #135
    • Add test empty content for AssistantMessage v7 by @juliendenize in #136
    • v1.8.5 by @juliendenize in #137

    New Contributors

    • @mingfang made their first contribution in #135

    Full Changelog: v1.8.4...v1.8.5
  • Aug 20, 2025
    • Date parsed from source:
      Aug 20, 2025
    • First seen by Releasebot:
      Oct 26, 2025
    • Modified by Releasebot:
      Nov 9, 2025

    v1.8.4: optional dependencies and allow random padding on ChatCompletionResponseStreamResponse

    Changelog

    • Update experimental.md by @juliendenize in #124
    • Make sentencepiece optional and refactor optional imports by @juliendenize in #126
    • Improve UX for contributing by @juliendenize in #127
    • feat: allow random padding on ChatCompletionResponseStreamResponse by @aac228 in #131
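    Making a dependency like sentencepiece optional (PR #126) usually means importing it lazily and failing with an actionable message only when the relevant feature is used. A minimal sketch of that pattern (the `require` helper and the extras name are hypothetical, not mistral-common's actual code):

```python
import importlib

def require(module_name: str, extra: str):
    """Import a module lazily; raise a helpful error if it is missing.

    Hypothetical helper: the mistral-common[extra] install hint is illustrative.
    """
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        raise ImportError(
            f"{module_name!r} is required for this feature. "
            f"Install it with: pip install mistral-common[{extra}]"
        ) from exc

# Succeeds for an installed module (json is stdlib, used here as a stand-in):
mod = require("json", extra="core")
print(mod.dumps({"ok": True}))
```

The point of the pattern is that the error surfaces at the call site that needs the dependency, rather than at package import time.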

    New Contributors

    • @aac228 made their first contribution in #131

    Full Changelog: v1.8.3...v1.8.4

  • Jul 16, 2025
    • Date parsed from source:
      Jul 16, 2025
    • First seen by Releasebot:
      Oct 26, 2025
    • Modified by Releasebot:
      Jan 29, 2026

    v1.8.1: Add AudioURLChunk

    This release adds AudioURLChunk support, enabling HTTP(S) URLs, file paths, and base64 strings in content chunks for audio workflows. It includes a usage example and notes for the Voxtral-Mini-3B model; the full changelog spans v1.8.0 to v1.8.1.

    Add AudioURLChunk by @juliendenize in #120

    Now you can use http(s) URLs, file paths, and base64 strings (without specifying the format) in your content chunks thanks to AudioURLChunk!

    from mistral_common.protocol.instruct.messages import AudioURL, AudioURLChunk, TextChunk, UserMessage
    from mistral_common.protocol.instruct.request import ChatCompletionRequest
    from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
    
    repo_id = "mistralai/Voxtral-Mini-3B-2507"
    tokenizer = MistralTokenizer.from_hf_hub(repo_id)
    
    text_chunk = TextChunk(text="What do you think about this audio?")
    user_msg = UserMessage(content=[AudioURLChunk(audio_url=AudioURL(url="https://freewavesamples.com/files/Ouch-6.wav")), text_chunk])
    request = ChatCompletionRequest(messages=[user_msg])
    tokenized = tokenizer.encode_chat_completion(request)
    # pass tokenized.tokens to your favorite audio model
    print(tokenized.tokens)
    print(tokenized.audios)
    # print text to visually see tokens
    print(tokenized.text)
    

    Full Changelog: v1.8.0...v1.8.1
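    Accepting three payload kinds in a single `url` field implies some dispatch on the string's shape. An illustrative guess at such a classifier (a stdlib-only heuristic, not the library's actual logic; note that short base64 detection is inherently ambiguous):

```python
import base64
from urllib.parse import urlparse

def classify_audio_url(url: str) -> str:
    """Heuristically classify a payload as remote URL, file path, or base64."""
    parsed = urlparse(url)
    if parsed.scheme in ("http", "https"):
        return "remote"
    if parsed.scheme == "file" or url.startswith(("/", "./", "~")):
        return "path"
    try:
        # validate=True rejects non-alphabet characters and bad padding.
        base64.b64decode(url, validate=True)
        return "base64"
    except Exception:
        return "unknown"

print(classify_audio_url("https://freewavesamples.com/files/Ouch-6.wav"))  # remote
print(classify_audio_url("/tmp/sample.wav"))                               # path
print(classify_audio_url(base64.b64encode(b"RIFF....WAVE").decode()))      # base64
```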

