- Sep 25, 2025
- Parsed from source: Sep 25, 2025
- Detected by Releasebot: Oct 3, 2025
Expanding xAI for Government with GSA OneGov
xAI expands its For Government program, making frontier AI broadly available to all federal agencies at $0.42 for 18 months through GSA OneGov. New Grok 4 models arrive with a dedicated Grok Engineers team to accelerate responsible government deployment.
Expanding ‘xAI For Government’ with more accessible AI tools for the Federal Government
Bringing accessible Frontier AI to the entire US Government
We’re excited to announce a significant expansion to our ‘xAI For Government’ offering – which we launched earlier this year in partnership with the General Services Administration (GSA). As a part of GSA’s OneGov Initiative, starting today, for the next 18 months, xAI For Government will be available to every federal government department, agency, or bureau, for a cost of just $0.42 for the period.
This new initiative brings our latest reasoning models, like Grok 4 and Grok 4 Fast, into the hands of the government at an extremely affordable price. xAI is committed to bringing the best tools available to those working hard for our country, and this initiative lets government employees engage with, experiment with, and harness Frontier AI.
xAI is a strong supporter of President Trump’s AI Action Plan and this administration’s initiatives to bring the latest AI tools into the hands of federal government workers. We’re the only lab that has firmly committed to maintaining parity between the models that are available in the commercial space and tools available in regulated government environments.
"xAI has the most powerful AI compute and most capable AI models in the world. Thanks to President Trump and his administration, xAI’s frontier AI is now unlocked for every federal agency empowering the U.S. Government to innovate faster and accomplish its mission more effectively than ever before” said xAI cofounder and CEO Elon Musk. “We look forward to continuing to work with President Trump and his team to rapidly deploy AI throughout the government for the benefit of the country.”
xAI is committed to bringing Frontier AI models to the government, but we won’t stop there. For AI to be maximally effective in government, it needs to be implemented correctly, and the personnel using it need to be trained on the new AI-powered tools. To support this, xAI is committing a dedicated team of ‘Grok Engineers’ to assist departments and agencies in implementing AI tools and to accelerate their rollouts. This is a unique offering: xAI is investing heavily not just in making the models accessible, but also in ensuring mission success.
“Widespread access to advanced AI models is essential to building the efficient, accountable government that taxpayers deserve — and to fulfilling President Trump’s promise that America will win the global AI race,” said Federal Acquisition Service Commissioner Josh Gruenbaum. “We value xAI for partnering with GSA — and dedicating engineers — to accelerate the adoption of Grok to transform government operations.”
America is the world leader in AI, and this is in no small part due to a tradition of innovation and strong investments in engineering and science. We’re excited to contribute back to the country that made xAI uniquely possible here.
Learn more here or reach out to us directly.
We’re also seeking talented, mission-driven engineers who want to join the cause. If you’re excited by solving hard problems to empower our nation’s hardest workers, reach out to us over email or apply online directly here. We’d love to hear from you.
- Sep 19, 2025
- Parsed from source: Sep 19, 2025
- Detected by Releasebot: Oct 3, 2025
Grok 4 Fast
Grok 4 Fast launches as a cost-efficient, unified reasoning model with a 2M-token context window, real-time web browsing, and native tool use. It’s available now across the grok.com apps and the API in two variants, offering 40% greater token efficiency and major price reductions.
Pushing the Frontier of Cost-Efficient Intelligence
We're thrilled to present Grok 4 Fast, our latest advancement in cost-efficient reasoning models. Built on xAI’s learnings from Grok 4, Grok 4 Fast delivers frontier-level performance across Enterprise and Consumer domains—with exceptional token efficiency. This model pushes the boundaries for smaller and faster AI, making high-quality reasoning accessible to more users and developers. Grok 4 Fast features state-of-the-art (SOTA) cost-efficiency, cutting-edge web and X search capabilities, a 2M token context window, and a unified architecture that blends reasoning and non-reasoning modes in one model.
Advancing Cost-Efficient Intelligence
Grok 4 Fast sets a new frontier in cost-efficient intelligence, outperforming Grok 3 Mini across reasoning benchmarks while slashing token costs.
Benchmark pass@1 comparison table with Grok 4 Fast, Grok 4, Grok 3 Mini (High), GPT-5 (High), GPT-5 Mini (High) on various benchmarks like GPQA Diamond, AIME 2025 (no tools), HMMT 2025 (no tools), HLE (no tools), LiveCodeBench (Jan-May).
We used large-scale reinforcement learning to maximize the intelligence density of Grok 4 Fast. In our evaluations, Grok 4 Fast achieves comparable performance to Grok 4 on benchmarks while using 40% fewer thinking tokens on average.
Intelligence Density
Maximum performance at minimum cost
This 40% increase in Grok 4 Fast's token efficiency, combined with a significantly lower price per token, results in a 98% reduction in price to achieve the same performance on frontier benchmarks as Grok 4. As verified by an independent review from Artificial Analysis, Grok 4 Fast exhibits a state-of-the-art (SOTA) price-to-intelligence ratio compared to other publicly available models on the Artificial Analysis Intelligence Index.
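As a rough sanity check of that 98% figure, the arithmetic can be sketched as follows; the Grok 4 output rate of $15 per 1M tokens is an assumption not stated in this post, while the $0.50 per 1M Grok 4 Fast rate is listed later in this announcement:

```python
# Back-of-the-envelope check of the 98% price reduction.
# Assumed: Grok 4 output tokens at $15 / 1M (not stated in this post);
# Grok 4 Fast output tokens at $0.50 / 1M (<128k tier, listed later).
grok4_price = 15.00       # $ per 1M output tokens (assumption)
grok4_fast_price = 0.50   # $ per 1M output tokens
token_ratio = 0.60        # Grok 4 Fast uses ~40% fewer thinking tokens

cost_ratio = token_ratio * grok4_fast_price / grok4_price
reduction = 1 - cost_ratio
print(f"cost ratio: {cost_ratio:.3f}, reduction: {reduction:.0%}")
# → cost ratio: 0.020, reduction: 98%
```

In other words, 60% of the tokens at roughly 1/30th the price per token yields about 2% of the original cost to reach the same benchmark scores.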
Native Tool Use with SOTA Search
Grok 4 Fast was trained end-to-end with tool-use reinforcement learning (RL). It excels at deciding when to invoke tools like code execution or web browsing.
For instance, Grok 4 Fast exhibits frontier agentic search capabilities, seamlessly browsing the web and X to augment queries with real-time data. It hops through links, ingests media (including images and videos on X), and synthesizes findings at light speed.
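The hop-and-synthesize behavior described above can be sketched as a simple agentic loop; the tool names and message shapes below are hypothetical illustrations, not xAI's actual interface:

```python
# Minimal sketch of an agentic search loop (hypothetical tool names and
# message format; the real tools behind Grok 4 Fast are not published here).
def agentic_search(query, model, tools, max_hops=5):
    """Let the model decide when to call a tool and when to answer."""
    context = [{"role": "user", "content": query}]
    for _ in range(max_hops):
        step = model(context)  # returns either an answer or a tool request
        if step["type"] == "answer":
            return step["content"]
        # e.g. tools = {"web_search": ..., "open_link": ..., "x_search": ...}
        result = tools[step["tool"]](step["arguments"])
        context.append({"role": "tool", "name": step["tool"], "content": result})
    # Out of hops: ask the model to answer with what it has gathered.
    return model(context + [{"role": "user", "content": "Answer now."}])["content"]
```

Each iteration lets the model inspect prior tool results before deciding whether to hop to another link or synthesize a final answer.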
Benchmark pass@1 comparison table with Grok 4 Fast, Grok 4, Grok 3 (No Reasoning) on BrowseComp, SimpleQA, Reka Research Eval, BrowseComp (zh), X Bench Deepsearch (zh), X Browse*.
Frontier of General Post-training
Grok 4 Fast also establishes a new cost-effective frontier in the general domain. We are excited to share Grok 4 Fast’s results on LMArena, where it has been privately battle-tested in the Search and Text Arenas.
In LMArena's Search Arena, grok-4-fast-search (code name: menlo) claims #1 with 1163 Elo — a commanding margin of 17 over o3-search. Its superior reasoning efficiency and intelligence density enable it to surpass much larger models on real-world, search-related tasks.
In LMArena's Text Arena, grok-4-fast (code name: tahoe) ranks #8, performing on par with grok-4-0709 and highlighting its remarkable intelligence density. Notably, it significantly outperforms peers in its weight class, where all comparably sized models rank 18th or below.
Examples of Grok 4 Fast in action include detailed search and reasoning traces for queries like the maximum number of experience points possible in Path of Exile 2, showing its ability to browse, synthesize, and reason with real-time data.
Unified Model: Reasoning and Non-Reasoning
Previously, separate reasoning modes required distinct models. Grok 4 Fast introduces a unified architecture where reasoning (long chain-of-thought) and non-reasoning (quick responses) are handled by the same model weights, steered via system prompts. This unification reduces end-to-end latency as well as token costs, making Grok 4 Fast ideal for real-time applications.
On grok.com, this results in smooth transitions: Grok responds instantly to simple queries and engages in extended reasoning for complex ones. In the xAI API, developers can fine-tune this behavior, optimizing for speed or depth.
Grok 4 Fast on grok.com and in the iOS and Android apps
Grok 4 Fast is available now for all users. In Fast and Auto modes, you will see a significant improvement in search and information seeking queries. Additionally, difficult queries in Auto mode will use Grok 4 Fast, which will provide a much faster experience without loss of quality. For the first time, all users, including free users, will have access to our latest model without restrictions, marking a step toward democratizing advanced AI.
Grok 4 Fast on OpenRouter, Vercel AI Gateway, and the xAI API
For a limited time, Grok 4 Fast will be available for free on OpenRouter and Vercel AI Gateway.
We're also rolling out Grok 4 Fast as two models: grok-4-fast-reasoning and grok-4-fast-non-reasoning, each with a 2M token context window. This allows developers to tune the amount of test-time compute applied to their use cases.
grok-4-fast-reasoning and grok-4-fast-non-reasoning are generally available via the xAI API with the following pricing:

Token Type | <128k tokens | ≥128k tokens
Input tokens | $0.20 / 1M | $0.40 / 1M
Output tokens | $0.50 / 1M | $1.00 / 1M
Cached input tokens | $0.05 / 1M |
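A minimal way to target the two variants from code, assuming an OpenAI-compatible chat-completions payload shape (the field names follow xAI's public API conventions, not this post; verify against the API reference before use):

```python
import json

# Sketch: pick the reasoning variant for hard queries, the non-reasoning
# variant for low-latency ones. Payload shape assumes an OpenAI-compatible
# chat-completions API (an assumption, not stated in this post).
def build_request(prompt: str, deep_reasoning: bool = False) -> dict:
    model = "grok-4-fast-reasoning" if deep_reasoning else "grok-4-fast-non-reasoning"
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Summarize today's AI news.", deep_reasoning=False)
print(json.dumps(payload, indent=2))
# POST this to the chat-completions endpoint with your API key in the
# Authorization header.
```

Routing cheap queries to the non-reasoning variant keeps latency low while reserving test-time compute for the queries that need it.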
What's Next
We will continuously ship model improvements to Grok 4 Fast based on your feedback on x.com. Stay tuned for further integrations, including enhanced multimodal capabilities and agentic features.
Read the Grok 4 Fast model card here.
That's all for now - so long, and thanks for all the fish!
- Sep 15, 2025
- Parsed from source: Sep 15, 2025
- Detected by Releasebot: Sep 17, 2025
- Modified by Releasebot: Oct 1, 2025
September 2025
- Aug 28, 2025
- Parsed from source: Aug 28, 2025
- Detected by Releasebot: Oct 3, 2025
Grok Code Fast 1
Introducing grok-code-fast-1, a fast, economical coding AI built for agentic workflows, with tool integration and strong support for TypeScript, Python, Java, and more. It’s launching with limited-time free access through select partners, transparent pricing, and ongoing updates, including multimodal inputs and longer context.
grok-code-fast-1
We're thrilled to introduce grok-code-fast-1, a speedy and economical reasoning model that excels at agentic coding.
A speedy daily driver
While today's models are undeniably powerful, they often don't feel purpose-built for agentic coding workflows, where loops of reasoning and tool calls can feel frustratingly slow. As heavy users of agentic coding tools, our engineers saw room for a more nimble, responsive solution optimized for our day-to-day tasks.

We built grok-code-fast-1 from scratch, starting with a brand-new model architecture. To lay a robust foundation, we carefully assembled a pre-training corpus rich with programming-related content. For post-training, we curated high-quality datasets that reflect real-world pull requests and coding tasks. Throughout the training process, we collaborated closely with our launch partners to refine and sharpen the model’s behavior inside their agentic platforms.

grok-code-fast-1 has mastered the use of common tools like grep, terminal, and file editing, and thus should feel right at home in your favorite IDE. We've teamed up with select launch partners to offer grok-code-fast-1 for free for a limited time, including GitHub Copilot, Cursor, Cline, Roo Code, Kilo Code, opencode, and Windsurf.
Blazing fast
Our inference and supercomputing teams developed several innovative techniques to dramatically accelerate our serving speed, creating a uniquely responsive experience where the model will have already called dozens of tools before you even finish reading the first paragraph of the thinking trace. We've also invested in prompt caching optimizations, regularly achieving cache hit rates above 90% when used with our launch partners.
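To see why the cache hit rate matters, here is the effective input-token price at a 90% hit rate, using the grok-code-fast-1 rates listed below in this post ($0.20 per 1M input tokens, $0.02 per 1M cached):

```python
# Effective input price under prompt caching, using the rates listed
# below in this post. Keeping the system prompt and conversation prefix
# byte-stable across calls is what makes high hit rates achievable.
input_price = 0.20    # $ per 1M uncached input tokens
cached_price = 0.02   # $ per 1M cached input tokens
hit_rate = 0.90       # hit rates above 90% reported with launch partners

effective = hit_rate * cached_price + (1 - hit_rate) * input_price
savings = 1 - effective / input_price
print(f"effective: ${effective:.3f} / 1M input tokens, savings: {savings:.0%}")
# → effective: $0.038 / 1M input tokens, savings: 81%
```

In long agentic sessions, where each turn resends the accumulated context, this caching discount compounds quickly.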
A versatile programmer
grok-code-fast-1 is exceptionally versatile across the full software development stack and is particularly adept at TypeScript, Python, Java, Rust, C++, and Go. It can complete common programming tasks with minimal oversight, ranging from building zero-to-one projects and providing insightful answers to codebase questions to performing surgical bug fixes.
An economical choice
We designed grok-code-fast-1 to be widely accessible, priced at:
- $0.20 per million input tokens
- $1.50 per million output tokens
- $0.02 per million cached input tokens

grok-code-fast-1 was crafted to shine in the tasks developers face every day, striking a compelling balance between performance and cost. Its strength lies in delivering strong performance in an economical, compact form factor, making it a versatile choice for tackling common coding tasks quickly and cost-effectively.
Model Performance
Chart: tokens per second (TPS) versus output price per 1M output tokens across models; grok-code-fast-1 measures roughly 190 TPS.
Methodology
TPS metrics were calculated by directly measuring response generation speed via each model provider's API, considering only the final response tokens.
- Gemini 2.5 Pro, GPT-5, and Claude Sonnet 4: Measured using their respective public APIs.
- Grok Code Fast 1 and Grok 4: Measured using the xAI API.
- Qwen3-Coder: Hosted on DeepInfra at low precision (fp4), which reduces response quality.
We took a holistic approach to evaluating model performance, blending public benchmarks with real-world testing. On the full subset of SWE-Bench-Verified, grok-code-fast-1 scored 70.8% using our own internal harness.

While benchmarks like SWE-Bench provide valuable insights, we've found they don't fully reflect the nuances of real-world software engineering, particularly the end-user experience in agentic coding workflows. To guide our model training, we pair these benchmarks with routine human assessments, where experienced developers rate the model's end-to-end performance on everyday tasks. We've also built automated evaluations to track key aspects of behavior, helping us balance trade-offs in design.

When developing grok-code-fast-1, we focused on usability and user satisfaction, guided by real-world human evaluations. The result is a model rated by programmers as fast and reliable for everyday coding tasks.
Grok Code for everyone
For a limited time, we’re excited to offer grok-code-fast-1 for free through select launch partners. Here’s what our launch partners had to say about the model, which was recently released in stealth under the codename sonic.
Free for a limited time
We’re excited to offer Grok Code Fast 1 for free through select launch partners.
"In early testing, Grok Code Fast has shown both its speed and quality in agentic coding tasks. Empowering developers with powerful tools is a core part of our mission at GitHub Copilot, and this is a compelling new option for our developers." Mario Rodriguez (@mariorod1) Chief Product Officer, GitHub
Prompt Engineering Guide
Our team has crafted a Prompt Engineering Guide with tips on how to get the best from grok-code-fast-1. The model is generally available via the xAI API, priced at $0.20 / 1M input tokens, $1.50 / 1M output tokens, and $0.02 / 1M cached input tokens.
What to expect in the next few weeks
Last week, we quietly released grok-code-fast-1 under the codename sonic. During this stealth phase, our team carefully monitored community channels and deployed multiple new model checkpoints to address feedback.

As we advance this new model family, we're excited to iterate rapidly on your input. We highly value the developer community's support and encourage you to freely share all feedback, positive and negative. We'll focus on delivering consistent updates to grok-code-fast-1, with improvements arriving in days rather than weeks. A new variant that supports multimodal inputs, parallel tool calling, and extended context length is already in training.

Read the grok-code-fast-1 model card here. We’re excited to see what you build!
- Aug 26, 2025
- Parsed from source: Aug 26, 2025
- Detected by Releasebot: Sep 17, 2025
- Modified by Releasebot: Oct 1, 2025
August 2025
We have released our first Code Model to be used with code editors.
- Aug 26, 2025
- Parsed from source: Aug 26, 2025
- Detected by Releasebot: Sep 2, 2025
Grok Code Fast 1 is released
Grok Code Fast 1, our first code model, is now available for use in code editors.
Grok Code Fast 1 is released
- We have released our first Code Model to be used with code editors.
- Aug 15, 2025
- Parsed from source: Aug 15, 2025
- Detected by Releasebot: Oct 1, 2025
Collections API is released
You can upload files, create embeddings, and use them for inference with our Collections API.
- Jul 14, 2025
- Parsed from source: Jul 14, 2025
- Detected by Releasebot: Oct 3, 2025
Announcing xAI for Government
xAI unveils xAI For Government, a frontier AI suite for US federal, state, and security customers featuring Grok 4 and tools like Deep Search. It includes custom gov models, USG‑cleared engineers, and DoD funding with GSA availability, signaling a major public rollout.
xAI For Government
Bringing frontier AI to the frontlines
Today, we’re proud to announce xAI For Government – a suite of frontier AI products available to United States Government customers.
xAI’s mission is to create and propagate AI tools to assist humanity in our quest for understanding and knowledge. Supporting the critical missions of the United States Government is a key part of this mission – bringing the best tools and technologies available in the commercial world to our hard-working public servants.
Americans have led the world through all of society’s great technological innovations, and AI will be no exception. xAI is proud to continue this legacy – which is why we build here in the US, turning shovels into tokens.
Under the umbrella of xAI For Government, we will be bringing all of our world-class AI tools to federal, local, state, and national security customers. These customers will be able to use the Grok family of products to accelerate America – from making everyday government services faster and more efficient to using AI to address unsolved problems in fundamental science and technology.
This includes frontier AI like Grok 4, our latest and most advanced model so far, which combines strong reasoning capabilities with extensive pretraining. Our government partnerships will also bring to bear tools like Deep Search, Tool Use, and more integrations – all of which are industry-leading commercial products.
We’ve been engaging closely with innovators and leaders in the government to make sure that our offerings deliver the capabilities they need. In addition to our commercial offerings, we will be making some unique capabilities available to our government customers, including:
- Custom models for national security and critical science applications, available to specific customers
- Forward-deployed engineering and implementation support, with USG-cleared engineers
- Custom AI-powered applications to accelerate use cases in healthcare, fundamental science, and national security, to name a few examples
- Models soon available in classified and other restricted environments
- Partnerships with xAI to build custom versions for specific mission sets
We are especially excited to announce two important milestones for our US Government business – a new $200M ceiling contract with the US Department of Defense, alongside our products being available to purchase via the General Services Administration (GSA) schedule. This allows every federal government department, agency, or office to access xAI's frontier AI products.
America is the world leader in AI, and this is in no small part due to a tradition of innovation and strong investments in engineering and science. We’re excited to contribute back to the country that made xAI uniquely possible here.
Learn more here or reach out to us directly.
We’re also seeking talented, mission-driven engineers who want to join the cause. If you’re excited by solving hard problems to empower our nation’s hardest workers, reach out to us over email or apply online directly here. We’d love to hear from you.
- Jul 9, 2025
- Parsed from source: Jul 9, 2025
- Detected by Releasebot: Oct 3, 2025
Grok 4
Grok 4 arrives with native tool use and real-time search, available to SuperGrok and Premium+ subscribers as well as through the xAI API. A new Grok 4 Heavy tier, a powerful API, Voice Mode with live video, and ongoing reinforcement learning scaling redefine frontier AI capabilities.
Grok 4 is the most intelligent model in the world. It includes native tool use and real-time search integration, and is available now to SuperGrok and Premium+ subscribers, as well as through the xAI API. We are also introducing a new SuperGrok Heavy tier with access to Grok 4 Heavy - the most powerful version of Grok 4.
Scaling Up Reinforcement Learning
With Grok 3, we scaled next-token prediction pretraining to unprecedented levels, resulting in a model with unparalleled world knowledge and performance. We also introduced Grok 3 Reasoning, which was trained using reinforcement learning to think longer about problems and solve them with increased accuracy. During our work on Grok 3 Reasoning, we noticed scaling trends that suggested it would be possible to scale up our reinforcement learning training significantly.
For Grok 4, we utilized Colossus, our 200,000 GPU cluster, to run reinforcement learning training that refines Grok's reasoning abilities at pretraining scale. This was made possible with innovations throughout the stack, including new infrastructure and algorithmic work that increased the compute efficiency of our training by 6x, as well as a massive data collection effort, where we significantly expanded our verifiable training data from primarily math and coding data to many more domains. The resulting training run saw smooth performance gains while training on over an order of magnitude more compute than had been used previously.
Humanity's Last Exam
A deep, expert-level benchmark at the frontier of human knowledge. Grok 4 achieves state-of-the-art results on the full set (April 3, 2025) with Python and internet tools.
Native Tool Use
Grok 4 was trained with reinforcement learning to use tools. This allows Grok to augment its thinking with tools like a code interpreter and web browsing in situations that are usually challenging for large language models. When searching for real-time information or answering difficult research questions, Grok 4 chooses its own search queries, finding knowledge from across the web and diving as deeply as it needs to craft a high-quality response.
We also trained Grok to use powerful tools to find information from deep within X. Grok can use advanced keyword and semantic search tools and even view media to improve the quality of its answers.
Grok 4 Heavy
We have made further progress on parallel test-time compute, which allows Grok to consider multiple hypotheses at once. We call this model Grok 4 Heavy, and it sets a new standard for performance and reliability. Grok 4 Heavy saturates most academic benchmarks and is the first model to score 50% on Humanity's Last Exam, a benchmark "designed to be the final closed-ended academic benchmark of its kind."
Frontier Intelligence
Grok 4 represents a leap in frontier intelligence, setting a new state-of-the-art for closed models on ARC-AGI V2 with 15.9% (nearly double Opus's ~8.6%, +8pp over previous high). On the agentic Vending-Bench, it dominates with $4694.15 net worth and 4569 units sold (averages across 5 runs), vastly outpacing Claude Opus 4 ($2077.41, 1412 units), humans ($844.05, 344 units), and others. Grok 4 Heavy leads USAMO'25 with 61.9%, and is the first to score 50.7% on Humanity's Last Exam (text-only subset), demonstrating unparalleled capabilities in complex reasoning through scaled reinforcement learning and native tool use.
Grok 4 API
The Grok 4 API empowers developers with frontier-level multimodal understanding, a 256,000-token context window, and advanced reasoning capabilities to tackle complex tasks across text and vision. It integrates real-time data search across X, the web, and various news sources via our newly launched live search API, enabling up-to-date, accurate responses powered by native tool use. With enterprise-grade security and compliance—including SOC 2 Type 2, GDPR, and CCPA certifications—the API ensures robust protection for sensitive applications. Grok 4 is coming soon to our hyperscaler partners, making it easier for enterprises to deploy at scale for innovative AI solutions.
Grok 4 Voice Mode
Speak with Grok in our upgraded Voice Mode, which features enhanced realism, responsiveness, and intelligence. We’ve introduced a serene, brand-new voice and redesigned conversations to feel even more natural.
And now, Grok can see what you see! Point your camera, speak right away, and Grok pulls live insights, analyzing your scene and responding to you in real-time from within the voice chat experience. We are proud to present this model trained in-house, with our state-of-the-art reinforcement learning framework and speech compression techniques.
Enable video during your voice chat and Grok will respond to what it sees while talking with you.
What’s Next
xAI will continue scaling reinforcement learning to unprecedented levels, building on Grok 4's advancements to push the boundaries of artificial intelligence. We plan to expand the scope from verifiable rewards in controlled domains to tackling complex real-world problems, where models can learn and adapt in dynamic environments. Multimodal capabilities will see ongoing improvements, integrating vision, audio, and beyond for more intuitive interactions. Overall, our focus remains on making models smarter, faster, and more efficient, as we drive toward systems that truly understand and assist humanity in profound ways.
- Jul 9, 2025
- Parsed from source: Jul 9, 2025
- Detected by Releasebot: Sep 9, 2025
- Modified by Releasebot: Oct 1, 2025
July 2025