Z.AI Release Notes
Last updated: Feb 20, 2026
- Feb 12, 2026
- Date parsed from source: Feb 12, 2026
- First seen by Releasebot: Feb 20, 2026
GLM-5
Designed for complex system engineering and long-range agent tasks, GLM-5 shifts the paradigm from coding to engineering, demonstrating strong deep-reasoning performance in backend architecture, complex algorithms, and stubborn bug fixing.
It directly benchmarks against Claude Opus 4.5 in code-logic density and systems-engineering capability, and integrates DeepSeek Sparse Attention for higher token efficiency while preserving long-context quality. Learn more in our documentation.
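As an illustration, here is a minimal sketch of querying the model through an OpenAI-compatible client. The base URL follows Z.AI's published API pattern, and the glm-5 model identifier is an assumption inferred from this release's naming; verify both in the documentation.

```python
# Minimal sketch: querying GLM-5 through an OpenAI-compatible client.
# Assumptions: the Z.AI endpoint below and the "glm-5" model id.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_ZAI_API_KEY",                # replace with your key
    base_url="https://api.z.ai/api/paas/v4/",  # Z.AI OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="glm-5",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a senior backend engineer."},
        {"role": "user", "content": "Design a retry strategy for a flaky payment webhook."},
    ],
)
print(response.choices[0].message.content)
```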
- Feb 3, 2026
- Date parsed from source: Feb 3, 2026
- First seen by Releasebot: Feb 3, 2026
GLM-OCR
We've launched GLM-OCR, a compact, high-performance optical character recognition model built on a self-developed encoder-decoder architecture that pairs the CogViT vision encoder with GLM-0.5B, using a dedicated connection layer for efficient cross-modal alignment.
The update leverages CLIP pre-training on billions of image-text pairs to deliver robust visual semantic understanding and key token extraction capabilities, while maintaining a lightweight design for fast inference. Learn more in our documentation.
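For illustration, a minimal sketch of sending an image for text extraction, assuming a glm-ocr model identifier and OpenAI-style image message parts; the actual request format may differ, so check the documentation.

```python
# Minimal sketch: sending an image to GLM-OCR for text extraction.
# Assumptions: the "glm-ocr" model id and OpenAI-style image inputs.
import base64
from openai import OpenAI

client = OpenAI(api_key="YOUR_ZAI_API_KEY", base_url="https://api.z.ai/api/paas/v4/")

with open("invoice.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="glm-ocr",  # assumed model identifier
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            {"type": "text", "text": "Extract all text from this image."},
        ],
    }],
)
print(response.choices[0].message.content)
```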
- Dec 22, 2025
- Date parsed from source: Dec 22, 2025
- First seen by Releasebot: Dec 23, 2025
GLM-4.7
We’ve released GLM-4.7, our latest flagship foundation model with significant improvements in coding, reasoning, and agentic capabilities. It delivers more reliable code generation, stronger long-context understanding, and improved end-to-end task execution across real-world development workflows.
The update brings open-source SOTA performance on major coding and reasoning benchmarks, enhanced agentic coding for goal-driven, multi-step tasks, and improved front-end and document generation quality. Learn more in our documentation.
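To illustrate the agentic coding workflow, here is a hedged sketch using OpenAI-style tool definitions. The glm-4.7 model identifier, the run_tests tool, and support for the standard tools parameter are all assumptions for illustration.

```python
# Sketch: a goal-driven request with an OpenAI-style tool definition.
# Assumptions: the "glm-4.7" model id and standard `tools` support.
from openai import OpenAI

client = OpenAI(api_key="YOUR_ZAI_API_KEY", base_url="https://api.z.ai/api/paas/v4/")

tools = [{
    "type": "function",
    "function": {
        "name": "run_tests",  # hypothetical tool, for illustration only
        "description": "Run the project's test suite and return the results.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string", "description": "Test directory"}},
            "required": ["path"],
        },
    },
}]

response = client.chat.completions.create(
    model="glm-4.7",  # assumed model identifier
    messages=[{"role": "user", "content": "Fix the failing tests in ./tests and explain the root cause."}],
    tools=tools,
)
# The model may answer directly or request a tool call to gather context.
message = response.choices[0].message
print(message.tool_calls or message.content)
```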
- Dec 11, 2025
- Date parsed from source: Dec 11, 2025
- First seen by Releasebot: Dec 19, 2025
AutoGLM-Phone-Multilingual
We’ve launched AutoGLM-Phone-Multilingual, our latest multimodal mobile automation framework that understands screen content and executes real actions through ADB. It enables natural-language task execution across 50+ mainstream apps, delivering true end-to-end mobile control.
The update introduces multilingual support (English & Chinese), enhanced workflow planning capabilities, and improved task execution reliability. Learn more in our documentation.
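The sketch below illustrates the general execution pattern such a framework implies: a planned UI action is mapped onto standard adb shell input commands. The action schema is hypothetical and is not AutoGLM's actual interface; only the ADB commands themselves are stock Android tooling.

```python
# Illustrative sketch of the ADB execution pattern described above.
# The action dictionary schema is hypothetical, not AutoGLM's interface.
import subprocess

def execute_action(action: dict) -> None:
    """Map a planned UI action onto a real device via ADB."""
    if action["type"] == "tap":
        cmd = ["adb", "shell", "input", "tap", str(action["x"]), str(action["y"])]
    elif action["type"] == "text":
        # `input text` requires spaces to be escaped as %s
        cmd = ["adb", "shell", "input", "text", action["value"].replace(" ", "%s")]
    elif action["type"] == "swipe":
        cmd = ["adb", "shell", "input", "swipe",
               str(action["x1"]), str(action["y1"]), str(action["x2"]), str(action["y2"])]
    else:
        raise ValueError(f"Unsupported action: {action['type']}")
    subprocess.run(cmd, check=True)

# e.g. one step the model might plan for "open Settings and enable Wi-Fi"
execute_action({"type": "tap", "x": 540, "y": 1200})
```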
- Dec 10, 2025
- Date parsed from source: Dec 10, 2025
- First seen by Releasebot: Dec 19, 2025
GLM-ASR-2512
We’ve launched GLM-ASR-2512, our latest automatic speech recognition (ASR) model, delivering industry-leading accuracy with a character error rate (CER) of just 0.0717, and significantly improved performance across real-world multilingual and accent-rich scenarios.
The update introduces enhanced custom dictionary support and expanded specialized terminology recognition. Learn more in our documentation.
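For illustration only, a sketch of what a transcription call could look like through an OpenAI-style audio endpoint; whether Z.AI exposes this endpoint, and the glm-asr-2512 model identifier, are assumptions to confirm in the documentation.

```python
# Sketch: transcribing an audio file. Assumptions: an OpenAI-style
# transcription endpoint at Z.AI and the "glm-asr-2512" model id.
from openai import OpenAI

client = OpenAI(api_key="YOUR_ZAI_API_KEY", base_url="https://api.z.ai/api/paas/v4/")

with open("meeting.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="glm-asr-2512",  # assumed model identifier
        file=audio_file,
    )
print(transcript.text)
```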
- Dec 8, 2025
- Date parsed from source: Dec 8, 2025
- First seen by Releasebot: Dec 19, 2025
GLM-4.6V
We’re excited to introduce GLM-4.6V, the latest iteration of Z.ai’s multimodal large language models. This version enhances vision understanding, achieving state-of-the-art performance on tasks involving images and text.
The update also expands the context window to 128K tokens, enabling more efficient processing of long inputs and complex multimodal tasks. Learn more in our documentation.
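As a sketch, here is a vision request combining an image with a text question, assuming a glm-4.6v model identifier and OpenAI-style multimodal message parts; the 128K window leaves room for long documents or many images alongside the prompt.

```python
# Sketch: a multimodal request to GLM-4.6V. Assumptions: the
# "glm-4.6v" model id and OpenAI-style image/text message parts.
from openai import OpenAI

client = OpenAI(api_key="YOUR_ZAI_API_KEY", base_url="https://api.z.ai/api/paas/v4/")

response = client.chat.completions.create(
    model="glm-4.6v",  # assumed model identifier
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/diagram.png"}},
            {"type": "text", "text": "Summarize what this architecture diagram shows."},
        ],
    }],
)
print(response.choices[0].message.content)
```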
- Sep 30, 2025
- Date parsed from source: Sep 30, 2025
- First seen by Releasebot: Dec 19, 2025
GLM-4.6
We’ve launched GLM-4.6, our flagship coding model, with enhanced performance on both public benchmarks and real-world programming tasks, making it the leading coding model in China.
The update also expands the context window to 200K tokens, improving its ability to handle longer code and complex agent tasks. Learn more in our documentation.
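For illustration, a minimal sketch that leans on the 200K window to review a whole module at once; the glm-4.6 model identifier and endpoint are assumptions to verify, and the file paths are purely illustrative.

```python
# Sketch: feeding a large codebase slice into the 200K-token window.
# Assumptions: the "glm-4.6" model id and the Z.AI endpoint below.
from pathlib import Path
from openai import OpenAI

client = OpenAI(api_key="YOUR_ZAI_API_KEY", base_url="https://api.z.ai/api/paas/v4/")

# Concatenate several source files; 200K tokens comfortably fits a
# mid-sized module plus its tests.
sources = "\n\n".join(
    f"# file: {p}\n{p.read_text()}" for p in Path("src").rglob("*.py")
)

response = client.chat.completions.create(
    model="glm-4.6",  # assumed model identifier
    messages=[{"role": "user", "content": f"Review this module for concurrency bugs:\n\n{sources}"}],
)
print(response.choices[0].message.content)
```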
- Aug 11, 2025
- Date parsed from source: Aug 11, 2025
- First seen by Releasebot: Dec 19, 2025
GLM-4.5V
We’ve launched GLM-4.5V, a 100B-scale open-source vision reasoning model supporting a broad range of visual tasks, including video understanding, visual grounding, and GUI agents. The update also adds a new thinking mode. Learn more in our documentation.
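As a sketch of the new thinking mode, the example below passes a thinking parameter via the OpenAI SDK's extra_body; the glm-4.5v identifier and the exact parameter shape are assumptions to confirm in the documentation.

```python
# Sketch: enabling the new thinking mode on a vision request.
# Assumptions: the "glm-4.5v" model id and the `thinking` parameter shape.
from openai import OpenAI

client = OpenAI(api_key="YOUR_ZAI_API_KEY", base_url="https://api.z.ai/api/paas/v4/")

response = client.chat.completions.create(
    model="glm-4.5v",  # assumed model identifier
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            {"type": "text", "text": "Which series grows fastest, and why?"},
        ],
    }],
    extra_body={"thinking": {"type": "enabled"}},  # assumed parameter shape
)
print(response.choices[0].message.content)
```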
- Aug 8, 2025
- Date parsed from source: Aug 8, 2025
- First seen by Releasebot: Dec 19, 2025
GLM Slide/Poster Agent (beta)
We’ve launched GLM Slide/Poster Agent, an AI-powered creation agent that combines information retrieval, content structuring, and visual layout design to generate professional-grade slides and posters from natural-language instructions. It also seamlessly integrates content generation with design conventions. Learn more in our documentation.
- Jul 28, 2025
- Date parsed from source: Jul 28, 2025
- First seen by Releasebot: Dec 19, 2025
GLM-4.5 Series
We’ve launched GLM-4.5, our latest native agentic LLM, delivering doubled parameter efficiency and strong reasoning, coding, and agentic capabilities. It also offers seamless one-click compatibility with the Claude Code framework. Learn more in our documentation.
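To illustrate the Claude Code integration, here is a sketch that points Claude Code's standard environment variables at Z.AI before launching the CLI; the Anthropic-compatible URL shown is an assumption to verify in the documentation.

```python
# Sketch of the Claude Code integration: route the CLI's requests to
# GLM-4.5 by pointing Claude Code's standard env vars at Z.AI.
# The Anthropic-compatible URL below is an assumption to verify.
import os
import subprocess

env = os.environ.copy()
env["ANTHROPIC_BASE_URL"] = "https://api.z.ai/api/anthropic"  # assumed endpoint
env["ANTHROPIC_AUTH_TOKEN"] = "YOUR_ZAI_API_KEY"

# -p runs Claude Code in non-interactive print mode.
subprocess.run(["claude", "-p", "Explain the build pipeline in this repo."], env=env)
```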