Mistral Common Release Notes

Last updated: Oct 26, 2025

  • Sep 12, 2025
    • Parsed from source:
      Sep 12, 2025
    • Detected by Releasebot:
      Oct 26, 2025
    • Modified by Releasebot:
      Nov 9, 2025

    Mistral Common by Mistral

    v1.8.5: Patch Release

    • Make model field optional in TranscriptionRequest by @juliendenize in #128
    • Remove all responses and embedding requests. Add transcription docs. by @juliendenize in #133
    • Add chunk file by @juliendenize in #129
    • allow message content to be empty string by @mingfang in #135
    • Add test empty content for AssistantMessage v7 by @juliendenize in #136
    • v1.8.5 by @juliendenize in #137
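    The TranscriptionRequest change in #128 follows the familiar optional-field pattern. A stdlib-only sketch of that pattern (the class below is an illustrative stand-in, not the real mistral_common.protocol.transcription.request.TranscriptionRequest):

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of the change in #128: the request's `model` field becomes optional,
# so callers may omit it. Illustrative stand-in, not mistral_common's class.
@dataclass
class TranscriptionRequestSketch:
    audio: bytes
    model: Optional[str] = None  # may now be omitted by the caller

req = TranscriptionRequestSketch(audio=b"\x00\x01")
print(req.model)  # None when not supplied
```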

    New Contributors

    • @mingfang made their first contribution in #135

    Full Changelog: v1.8.4...v1.8.5

  • Aug 20, 2025
    • Parsed from source:
      Aug 20, 2025
    • Detected by Releasebot:
      Oct 26, 2025
    • Modified by Releasebot:
      Nov 9, 2025

    Mistral Common by Mistral

    v1.8.4: optional dependencies and allow random padding on ChatCompletionResponseStreamResponse

    Changelog

    • Update experimental.md by @juliendenize in #124
    • Make sentencepiece optional and refactor optional imports by @juliendenize in #126
    • Improve UX for contributing by @juliendenize in #127
    • feat: allow random padding on ChatCompletionResponseStreamResponse by @aac228 in #131
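    Making sentencepiece optional (#126) is typically done with a lazy optional-import pattern: import only when the feature is used, and fail with an actionable message. A hedged stdlib sketch; the helper name and the extra name in the error message are illustrative, not mistral_common's API:

```python
# Optional-import pattern: the optional dependency is imported lazily, so the
# package installs and imports fine without it, and a helpful error is raised
# only when the feature is actually used. Names here are illustrative.
def require_sentencepiece():
    try:
        import sentencepiece  # optional dependency
    except ImportError as e:
        raise ImportError(
            "sentencepiece is required for this tokenizer; install the "
            "optional extra (e.g. `pip install mistral-common[sentencepiece]`)"
        ) from e
    return sentencepiece
```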

    New Contributors

    • @aac228 made their first contribution in #131

    Full Changelog: v1.8.3...v1.8.4

  • Jul 16, 2025
    • Parsed from source:
      Jul 16, 2025
    • Detected by Releasebot:
      Oct 26, 2025
    • Modified by Releasebot:
      Nov 11, 2025

    Mistral Common by Mistral

    v1.8.1: Add AudioURLChunk

    Mistral Voxtral Mini gains AudioURLChunk support, so you can embed audio via URLs, file paths, or base64 strings directly in content chunks. The update includes example usage and a tokenizer workflow, covered in the v1.8.0...v1.8.1 changelog.

    Add AudioURLChunk by @juliendenize in #120

    Now you can use http(s) URLs, file paths, and base64 strings (without specifying the format) in your content chunks, thanks to AudioURLChunk!

    from mistral_common.protocol.instruct.messages import AudioURL, AudioURLChunk, TextChunk, UserMessage
    from mistral_common.protocol.instruct.request import ChatCompletionRequest
    from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
    
    repo_id = "mistralai/Voxtral-Mini-3B-2507"
    tokenizer = MistralTokenizer.from_hf_hub(repo_id)
    
    text_chunk = TextChunk(text="What do you think about this audio?")
    user_msg = UserMessage(content=[AudioURLChunk(audio_url=AudioURL(url="https://freewavesamples.com/files/Ouch-6.wav")), text_chunk])

    request = ChatCompletionRequest(messages=[user_msg])
    tokenized = tokenizer.encode_chat_completion(request)
    
    # pass tokenized.tokens to your favorite audio model
    print(tokenized.tokens)
    print(tokenized.audios)
    # print text to visually see tokens
    print(tokenized.text)
    
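    The three accepted string forms (http(s) URL, file path, base64) could be told apart roughly as follows. This is a stdlib-only sketch of the dispatch idea, not the library's actual validation logic:

```python
import base64
import binascii
from urllib.parse import urlparse

# Illustrative classifier in the spirit of AudioURLChunk: decide whether a
# string is an http(s) URL, a file path, or a raw base64 payload.
# Not mistral_common's code -- heuristics here are deliberately simple.
def classify_audio_source(value: str) -> str:
    scheme = urlparse(value).scheme
    if scheme in ("http", "https"):
        return "url"
    if scheme == "file" or "/" in value or value.endswith(".wav"):
        return "path"
    try:
        base64.b64decode(value, validate=True)
        return "base64"
    except (binascii.Error, ValueError):
        return "path"

print(classify_audio_source("https://freewavesamples.com/files/Ouch-6.wav"))  # url
```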

    Full Changelog: v1.8.0...v1.8.1

  • Jul 15, 2025
    • Parsed from source:
      Jul 15, 2025
    • Detected by Releasebot:
      Nov 14, 2025

    Mistral Common by Mistral

    v1.8.0 - Mistral welcomes 📢

    New audio chat and transcription workflows are demonstrated for Voxtral Mini, with a full changelog covering updates from v1.7.0 to v1.8.0. The release highlights audio sample usage and tokenization flows for end-to-end audio processing.

    [Audio] Add audio by @patrickvonplaten in #119

    Full Changelog: v1.7.0...v1.8.0

    Audio chat example:

    from mistral_common.protocol.instruct.messages import TextChunk, AudioChunk, UserMessage
    from mistral_common.protocol.instruct.request import ChatCompletionRequest
    from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
    from mistral_common.audio import Audio
    from huggingface_hub import hf_hub_download

    repo_id = "mistralai/Voxtral-Mini-3B-2507"
    tokenizer = MistralTokenizer.from_hf_hub(repo_id)
    obama_file = hf_hub_download("patrickvonplaten/audio_samples", "obama.mp3", repo_type="dataset")
    bcn_file = hf_hub_download("patrickvonplaten/audio_samples", "bcn_weather.mp3", repo_type="dataset")

    def file_to_chunk(file: str) -> AudioChunk:
        audio = Audio.from_file(file, strict=False)
        return AudioChunk.from_audio(audio)

    text_chunk = TextChunk(text="Which speaker do you prefer between the two? Why? How are they different from each other?")
    user_msg = UserMessage(content=[file_to_chunk(obama_file), file_to_chunk(bcn_file), text_chunk])
    request = ChatCompletionRequest(messages=[user_msg])
    tokenized = tokenizer.encode_chat_completion(request)
    # pass tokenized.tokens to your favorite audio model
    print(tokenized.tokens)
    print(tokenized.audios)
    # print text to visually see tokens
    print(tokenized.text)
    

    Audio transcription example:

    from mistral_common.protocol.transcription.request import TranscriptionRequest
    from mistral_common.protocol.instruct.messages import RawAudio
    from mistral_common.audio import Audio
    from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
    from huggingface_hub import hf_hub_download

    repo_id = "mistralai/Voxtral-Mini-3B-2507"
    tokenizer = MistralTokenizer.from_hf_hub(repo_id)
    obama_file = hf_hub_download("patrickvonplaten/audio_samples", "obama.mp3", repo_type="dataset")

    audio = Audio.from_file(obama_file, strict=False)
    audio = RawAudio.from_audio(audio)
    request = TranscriptionRequest(model=repo_id, audio=audio, language="en")
    tokenized = tokenizer.encode_transcription(request)
    # pass tokenized.tokens to your favorite audio model
    print(tokenized.tokens)
    print(tokenized.audios)
    # print text to visually see tokens
    print(tokenized.text)
    

  • Jul 10, 2025
    • Parsed from source:
      Jul 10, 2025
    • Detected by Releasebot:
      Oct 26, 2025
    • Modified by Releasebot:
      Nov 9, 2025

    Mistral Common by Mistral

    v1.6.3 - Improved from_hf_hub, support multiprocessing, ...

    This release brings major improvements to HF Hub integration, build-system reliability, and decoding, along with UTF-8 handling and several string- and error-formatting fixes. It also welcomes new contributors and covers the full changelog from v1.6.0 to v1.6.3.

    Release notes

    • Improve hf hub support by @juliendenize in #95
    • Fix the Python badge by @juliendenize in #96
    • [Build system] Ensure UV reads more than just py files by @patrickvonplaten in #97
    • Update images.md by @juliendenize in #98
    • Improve decode and deprecate to_string by @juliendenize in #99
    • Fix string formatting for ConnectionError by @gaby in #101
    • Fix string formatting for NotImplementedError() by @gaby in #103
    • Fix error message instructions in transform_image() by @gaby in #102
    • Fix spelling issues across repo by @gaby in #107
    • Improve integration with HF by @juliendenize in #104
    • Opening tekkenizer file with utf-8 and remove deprecation warning by @juliendenize in #110
    • fix: multiprocessing pickle error with tokenizer by @NanoCode012 in #111

    New Contributors

    • @gaby made their first contribution in #101
    • @NanoCode012 made their first contribution in #111

    Full Changelog: v1.6.0...v1.6.3

  • Jun 12, 2025
    • Parsed from source:
      Jun 12, 2025
    • Detected by Releasebot:
      Oct 26, 2025
    • Modified by Releasebot:
      Nov 7, 2025

    Mistral Common by Mistral

    Patch release: v1.6.2

    Ensures that the PyPI package includes the tokenizer data files.

    See: [BUG: data directory not installed for 1.6.0]

  • Jun 9, 2025
    • Parsed from source:
      Jun 9, 2025
    • Detected by Releasebot:
      Oct 26, 2025
    • Modified by Releasebot:
      Nov 9, 2025

    Mistral Common by Mistral

    v1.6.0 - v11 instruct tokenizer, new docs, from_hf_hub, ...

    This release adds a constant model-to-tokenizer mapping, support for the OpenAI chat format, and file:// URI handling. Documentation polish and HF Hub API download tweaks streamline usage, the from_hf_hub method is added to MistralTokenizer, and new contributors are highlighted.

    Release notes

    • Constantize model to tokenizer mapping (for external import) by @djsaunde in #86
    • Improve the documentation by @juliendenize in #87
    • Fix the documentation URL by @juliendenize in #88
    • Add support for file:// protocol URIs. by @sjuxax in #85
    • Add check for uv lock by @juliendenize in #89
    • Add the from_hf_hub method to MistralTokenizer by @juliendenize in #90
    • Add support to the openai format for Chat completions by @juliendenize in #92
    • V11 Instruct by @patrickvonplaten in #91
    • Use HF hub API for download by @juliendenize in #93
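    The file:// support from #85 boils down to converting a URI into a local filesystem path. A stdlib sketch of that conversion (not mistral_common's exact code):

```python
from urllib.parse import urlparse
from urllib.request import url2pathname

def file_uri_to_path(uri: str) -> str:
    """Convert a file:// URI into a local filesystem path (POSIX)."""
    parsed = urlparse(uri)
    if parsed.scheme != "file":
        raise ValueError(f"expected a file:// URI, got: {uri}")
    # url2pathname handles percent-decoding, e.g. %20 -> space
    return url2pathname(parsed.path)

print(file_uri_to_path("file:///tmp/audio%20sample.wav"))  # /tmp/audio sample.wav
```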

    New Contributors

    • @djsaunde made their first contribution in #86
    • @sjuxax made their first contribution in #85

    Full Changelog: v1.5.6...v1.6.0

  • Jul 25, 2025
    • Parsed from source:
      Jul 25, 2025
    • Detected by Releasebot:
      Oct 26, 2025

    Mistral Common by Mistral

    v1.8.3: Add an experimental REST API

    A new experimental REST API powered by FastAPI enables tokenization through generation to detokenization, with setup instructions and Swagger docs. Install mistral-common[server], run the server, and access the API via the Swagger UI.

    Add a FastAPI app by @juliendenize in #113

    We released an experimental REST API leveraging FastAPI to handle requests from tokenization, through generation via calls to an engine, to detokenization.

    For detailed documentation, see https://mistralai.github.io/mistral-common/usage/experimental/.

    Here is how to launch the server:

    pip install "mistral-common[server]"
    
    mistral_common serve mistralai/Magistral-Small-2507 \
    --host 127.0.0.1 --port 8000 \
    --engine-url http://127.0.0.1:8080 --engine-backend llama_cpp \
    --timeout 60
    

    Then you can browse the Swagger UI at:
    http://localhost:8000.
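    Once the server is up, requests are plain JSON over HTTP. Here is a stdlib sketch of building such a request; the /v1/tokenize path and the payload shape are assumptions, so check the Swagger UI for the routes the server actually exposes:

```python
import json
from urllib.request import Request

BASE_URL = "http://127.0.0.1:8000"

def build_tokenize_request(messages: list) -> Request:
    # Builds (but does not send) a JSON POST. The endpoint path below is
    # hypothetical -- consult the server's Swagger UI for the real routes.
    payload = json.dumps({"messages": messages}).encode("utf-8")
    return Request(
        f"{BASE_URL}/v1/tokenize",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_tokenize_request([{"role": "user", "content": "Hello!"}])
print(req.full_url, req.get_method())
```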

    Full Changelog: v1.8.2...v1.8.3

  • Jul 24, 2025
    • Parsed from source:
      Jul 24, 2025
    • Detected by Releasebot:
      Oct 26, 2025

    Mistral Common by Mistral

    v1.8.2: Add ThinkChunk

    New in this release: ThinkChunk and TextChunk can now be used inside SystemMessage and AssistantMessage, enabling explicit reasoning chunks in prompts. Includes example code and a changelog from v1.8.1 to v1.8.2.

    Add think chunk by @juliendenize in #122

    Now you can use TextChunk and ThinkChunk in your SystemMessage or AssistantMessage:

    from mistral_common.protocol.instruct.messages import SystemMessage, TextChunk, ThinkChunk
    system_message = SystemMessage(
        content=[
            TextChunk(
                text=(
                    "First draft your thinking process (inner monologue) until you arrive at a response. "
                    "Format your response using Markdown, and use LaTeX for any mathematical equations. "
                    "Write both your thoughts and the response in the same language as the input.\n\n"
                    "Your thinking process must follow the template below:"
                )
            ),
            ThinkChunk(
                thinking=(
                    "Your thoughts or/and draft, like working through an exercise on scratch paper. "
                    "Be as casual and as long as you want until you are confident to generate the response. "
                    "Use the same language as the input."
                ),
                closed=True,
            ),
            TextChunk(text="Here, provide a self-contained response."),
        ]
    )
    

    Full Changelog: v1.8.1...v1.8.2

