Mistral Common Release Notes

Last updated: Dec 23, 2025

  • Dec 22, 2025
    • Parsed from source:
      Dec 22, 2025
    • Detected by Releasebot:
      Dec 23, 2025

    Mistral Common by Mistral

    v1.8.8: Backward comp

    Full Changelog: v1.8.7...v1.8.8

    • Add new token logic asrstr by @patrickvonplaten in #172

    • [Backward comp] Still need the _control_tokens for vLLM by @patrickvonplaten in #173

  • Dec 22, 2025
    • Parsed from source:
      Dec 22, 2025
    • Detected by Releasebot:
      Dec 23, 2025

    Mistral Common by Mistral

    v1.8.7: Refactoring and bug fixes.

    Version 1.8.7

    • Remove the index field from assistant tool_calls. by @tobrun in #165
    • Rename get control -> to get special & add is_special by @patrickvonplaten in #164
    • Add TextChunk support to ToolMessage by @juliendenize in #170 (see the sketch after this list)
    • Version 1.8.7 by @juliendenize in #171
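
    PR #170 means a ToolMessage can carry content chunks rather than only a plain string. Below is a minimal sketch of what that enables, assuming ToolMessage accepts a list containing TextChunk; the tool_call_id value is a placeholder.

    # Hedged sketch for #170: ToolMessage content given as chunks instead of a bare string.
    # Assumes ToolMessage accepts a list of TextChunk; "call_1" is a placeholder id.
    from mistral_common.protocol.instruct.messages import TextChunk, ToolMessage

    tool_msg = ToolMessage(
        content=[TextChunk(text='{"temperature_c": 21}')],
        tool_call_id="call_1",
    )
    print(tool_msg.content)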

    New Contributors

    • @tobrun made their first contribution in #165

    Full Changelog

    v1.8.6...v1.8.7

  • Nov 30, 2025
    • Parsed from source:
      Nov 30, 2025
    • Detected by Releasebot:
      Dec 1, 2025

    Mistral Common by Mistral

    v1.8.6: rm Python 3.9, bug fixes.

    Version 1.8.6 drops Python 3.9 support and ships cleanup and enhancements: new normalizer and validator utilities, improvements to tokenizer file handling and decoding, usage restrictions regarding third-party rights, and updates to tests and logging.

    Release notes

    • Remove deprecated imports in docs. by @juliendenize in #138
    • Add normalizer and validator utils by @juliendenize in #140
    • Refactor private aggregate messages for InstructRequestNormalizer by @juliendenize in #141
    • test: improve unit test for is_opencv_installed by @PrasanaaV in #143
    • Optimize spm decode function by @juliendenize in #144
    • Add get_one_valid_tokenizer_file by @juliendenize in #142
    • Remove Python 3.9 support by @juliendenize in #145
    • Correctly pass revision and token to hf_api by @juliendenize in #149 (see the sketch after this list)
    • Fix assertion in test_convert_text_chunk and tool_call by @patrickvonplaten in #152
    • Pins GH actions by @arcanis in #160
    • Add usage restrictions regarding third-party rights. by @juliendenize in #161
    • Improve tekken logging message for vocabulary by @juliendenize in #162
    • Set version 1.8.6 by @juliendenize in #151
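
    PR #149 concerns forwarding authentication and revision information when pulling tokenizer files from the Hugging Face Hub. A minimal sketch of the intended usage, assuming from_hf_hub accepts revision and token keyword arguments based on the PR title; both values below are placeholders.

    # Hedged sketch for #149: load a tokenizer from a specific Hub revision with an
    # access token. The keyword arguments are assumptions based on the PR title;
    # "main" and the token string are placeholders.
    from mistral_common.tokens.tokenizers.mistral import MistralTokenizer

    tokenizer = MistralTokenizer.from_hf_hub(
        "mistralai/Voxtral-Mini-3B-2507",
        revision="main",
        token="hf_xxx",
    )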

    New Contributors

    • @PrasanaaV made their first contribution in #143
    • @arcanis made their first contribution in #160

    Full Changelog

    v1.8.5...v1.8.6

  • Sep 11, 2025
    • Parsed from source:
      Sep 11, 2025
    • Detected by Releasebot:
      Oct 26, 2025
    • Modified by Releasebot:
      Dec 23, 2025

    Mistral Common by Mistral

    v1.8.5: Patch Release

    New Contributors

    • @mingfang made their first contribution in #135

    Full Changelog: v1.8.4...v1.8.5

    • Make model field optional in TranscriptionRequest by @juliendenize in #128
    • Remove all responses and embedding requests. Add transcription docs. by @juliendenize in #133
    • Add chunk file by @juliendenize in #129
    • allow message content to be empty string by @mingfang in #135 (see the sketch after this list)
    • Add test empty content for AssistantMessage v7 by @juliendenize in #136
    • v1.8.5 by @juliendenize in #137
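
    Two of the changes above are easy to illustrate: #135 and #136 allow an assistant message whose content is an empty string, and #128 makes the model field optional on TranscriptionRequest. A minimal sketch of the first, assuming the relaxed validation applies at construction time.

    # Hedged sketch for #135/#136: an AssistantMessage may now carry an empty string
    # as content (previously rejected).
    from mistral_common.protocol.instruct.messages import AssistantMessage

    msg = AssistantMessage(content="")
    print(msg.content == "")
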
  • Aug 20, 2025
    • Parsed from source:
      Aug 20, 2025
    • Detected by Releasebot:
      Oct 26, 2025
    • Modified by Releasebot:
      Nov 9, 2025

    Mistral Common by Mistral

    v1.8.4: optional dependencies and allow random padding on ChatCompletionResponseStreamResponse

    Changelog

    • Update experimental.md by @juliendenize in #124
    • Make sentencepiece optional and refactor optional imports by @juliendenize in #126 (see the sketch after this list)
    • Improve UX for contributing by @juliendenize in #127
    • feat: allow random padding on ChatCompletionResponseStreamResponse by @aac228 in #131
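
    PR #126 makes sentencepiece an optional dependency and refactors optional imports. The snippet below is not mistral_common's actual code, just an illustrative sketch of the usual lazy-import guard such a refactor relies on: the package is only imported, and only fails, when a SentencePiece-backed tokenizer is actually used.

    # Illustrative optional-import guard (not mistral_common's actual code).
    def _require_sentencepiece():
        # Import lazily and fail with an actionable message only when needed.
        try:
            import sentencepiece
        except ImportError as exc:
            raise ImportError(
                "sentencepiece is required for SentencePiece-based tokenizers; "
                "install it with `pip install sentencepiece`."
            ) from exc
        return sentencepiece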

    New Contributors

    • @aac228 made their first contribution in #131

    Full Changelog: v1.8.3...v1.8.4

  • Jul 16, 2025
    • Parsed from source:
      Jul 16, 2025
    • Detected by Releasebot:
      Oct 26, 2025
    • Modified by Releasebot:
      Jan 1, 2026

    Mistral Common by Mistral

    v1.8.1: Add AudioURLChunk

    The new AudioURLChunk lets content chunks reference audio via http(s) URLs, file paths, or base64 strings, broadening how audio can be fed into the system. The notes include a tokenizer workflow and an end-to-end chat example for the upgrade from v1.8.0 to v1.8.1.

    Add AudioURLChunk by @juliendenize in #120

    Now you can use http(s) URLs, file paths, and base64 strings (without specifying the format) in your content chunks thanks to AudioURLChunk!

    from mistral_common.protocol.instruct.messages import AudioURL, AudioURLChunk, TextChunk, UserMessage
    from mistral_common.protocol.instruct.request import ChatCompletionRequest
    from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
    
    repo_id = "mistralai/Voxtral-Mini-3B-2507"
    tokenizer = MistralTokenizer.from_hf_hub(repo_id)
    
    text_chunk = TextChunk(text="What do you think about this audio?")
    user_msg = UserMessage(content=[AudioURLChunk(audio_url=AudioURL(url="https://freewavesamples.com/files/Ouch-6.wav")), text_chunk])
    request = ChatCompletionRequest(messages=[user_msg])
    tokenized = tokenizer.encode_chat_completion(request)
    # pass tokenized.tokens to your favorite audio model
    print(tokenized.tokens)
    print(tokenized.audios)
    # print text to visually see tokens
    print(tokenized.text)
    

    Full Changelog: v1.8.0...v1.8.1

  • Jul 15, 2025
    • Parsed from source:
      Jul 15, 2025
    • Detected by Releasebot:
      Nov 14, 2025
    • Modified by Releasebot:
      Jan 1, 2026

    Mistral Common by Mistral

    v1.8.0 - Mistral welcomes 📢

    Release notes highlight audio chat and transcription demos for Voxtral Mini, covering the upgrade from v1.7.0 to v1.8.0. The snippets show audio input, tokenization, and transcription workflows using mistral_common and sample audio files from @patrickvonplaten's dataset.

    Release notes

    [Audio] Add audio by @patrickvonplaten in #119

    Full Changelog: v1.7.0...v1.8.0

    Audio chat example

    from mistral_common.protocol.instruct.messages import TextChunk, AudioChunk, UserMessage, AssistantMessage, RawAudio
    from mistral_common.protocol.instruct.request import ChatCompletionRequest
    from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
    from mistral_common.audio import Audio
    from huggingface_hub import hf_hub_download

    repo_id = "mistralai/Voxtral-Mini-3B-2507"
    tokenizer = MistralTokenizer.from_hf_hub(repo_id)

    obama_file = hf_hub_download("patrickvonplaten/audio_samples", "obama.mp3", repo_type="dataset")
    bcn_file = hf_hub_download("patrickvonplaten/audio_samples", "bcn_weather.mp3", repo_type="dataset")

    def file_to_chunk(file: str) -> AudioChunk:
        # Load the audio file and wrap it in an AudioChunk for the chat request.
        audio = Audio.from_file(file, strict=False)
        return AudioChunk.from_audio(audio)

    text_chunk = TextChunk(text="Which speaker do you prefer between the two? Why? How are they different from each other?")
    user_msg = UserMessage(content=[file_to_chunk(obama_file), file_to_chunk(bcn_file), text_chunk])
    request = ChatCompletionRequest(messages=[user_msg])
    tokenized = tokenizer.encode_chat_completion(request)
    # pass tokenized.tokens to your favorite audio model
    print(tokenized.tokens)
    print(tokenized.audios)
    # print text to visually see tokens
    print(tokenized.text)
    

    Audio transcription example

    from mistral_common.protocol.transcription.request import TranscriptionRequest
    from mistral_common.protocol.instruct.messages import RawAudio

    # Reuses repo_id, tokenizer, Audio, and hf_hub_download from the chat example above.
    obama_file = hf_hub_download("patrickvonplaten/audio_samples", "obama.mp3", repo_type="dataset")

    audio = Audio.from_file(obama_file, strict=False)
    audio = RawAudio.from_audio(audio)

    request = TranscriptionRequest(model=repo_id, audio=audio, language="en")
    tokenized = tokenizer.encode_transcription(request)
    # pass tokenized.tokens to your favorite audio model
    print(tokenized.tokens)
    print(tokenized.audios)
    # print text to visually see tokens
    print(tokenized.text)
    

  • Jul 10, 2025
    • Parsed from source:
      Jul 10, 2025
    • Detected by Releasebot:
      Oct 26, 2025
    • Modified by Releasebot:
      Nov 9, 2025

    Mistral Common by Mistral

    v1.6.3 - Improved from_hf_hub, support multiprocessing, ...

    The release brings improvements to Hugging Face Hub integration, build system reliability, and decoding, with UTF-8 handling, several string and error formatting fixes, and a fix for a multiprocessing pickle error in the tokenizer. It also welcomes new contributors; the full changelog spans v1.6.0 to v1.6.3.

    Release notes

    • Improve hf hub support by @juliendenize in #95
    • Fix the Python badge by @juliendenize in #96
    • [Build system] Ensure UV reads more than just py files by @patrickvonplaten in #97
    • Update images.md by @juliendenize in #98
    • Improve decode and deprecate to_string by @juliendenize in #99
    • Fix string formatting for ConnectionError by @gaby in #101
    • Fix string formatting for NotImplementedError() by @gaby in #103
    • Fix error message instructions in transform_image() by @gaby in #102
    • Fix spelling issues across repo by @gaby in #107
    • Improve integration with HF by @juliendenize in #104
    • Opening tekkenizer file with utf-8 and remove deprecation warning by @juliendenize in #110
    • fix: multiprocessing pickle error with tokenizer by @NanoCode012 in #111 (see the sketch after this list)
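
    The multiprocessing fix in #111 comes down to the tokenizer surviving pickling, which is what multiprocessing uses to ship objects to worker processes. A minimal round-trip sketch, assuming the bundled v3 tokenizer constructor; the prompt is arbitrary.

    # Hedged sketch for #111: a MistralTokenizer should survive a pickle round-trip,
    # which is what multiprocessing relies on to send it to worker processes.
    import pickle

    from mistral_common.protocol.instruct.messages import UserMessage
    from mistral_common.protocol.instruct.request import ChatCompletionRequest
    from mistral_common.tokens.tokenizers.mistral import MistralTokenizer

    tokenizer = MistralTokenizer.v3()  # assumption: the bundled v3 tokenizer
    restored = pickle.loads(pickle.dumps(tokenizer))

    request = ChatCompletionRequest(messages=[UserMessage(content="Hello!")])
    assert (
        restored.encode_chat_completion(request).tokens
        == tokenizer.encode_chat_completion(request).tokens
    )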

    New Contributors

    • @gaby made their first contribution in #101
    • @NanoCode012 made their first contribution in #111

    Full Changelog: v1.6.0...v1.6.3

  • Jul 10, 2025
    • Parsed from source:
      Jul 10, 2025
    • Detected by Releasebot:
      Oct 26, 2025
    • Modified by Releasebot:
      Dec 23, 2025

    Mistral Common by Mistral

    v1.7.0 - v13 instruct tokenizer, rename multi-modal to image

    Release notes

    • [Naming] Rename multi-modal to image by @patrickvonplaten in #114 (see the sketch after this list)
    • Add v13 Tokenizer by @juliendenize in #116
    • 1.7.0 Release by @patrickvonplaten in #118
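
    The rename in #114 standardizes on "image" naming for vision inputs. A minimal sketch of an image chat request under that naming, assuming ImageURLChunk accepts a plain URL string; the URL below is a placeholder.

    # Hedged sketch for #114: vision input under the "image" naming.
    # Assumes ImageURLChunk accepts a URL string; the URL is a placeholder.
    from mistral_common.protocol.instruct.messages import ImageURLChunk, TextChunk, UserMessage
    from mistral_common.protocol.instruct.request import ChatCompletionRequest

    user_msg = UserMessage(
        content=[
            ImageURLChunk(image_url="https://example.com/cat.png"),
            TextChunk(text="What is in this image?"),
        ]
    )
    request = ChatCompletionRequest(messages=[user_msg])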

    Full Changelog: v1.6.3...v1.7.0

