Netlify Release Notes
Last updated: Mar 8, 2026
- Mar 6, 2026
- Date parsed from source: Mar 6, 2026
- First seen by Releasebot: Mar 8, 2026
Limit AI usage on your team
Netlify unveils team-level AI inference limits that cap credit spending and automatically curb agent runs and AI Gateway usage when the defined budget is hit. The cap is enforced across the team, and per-task cost visibility on the agent runs page makes monthly spending predictable.
Team Owners can now set a credit limit on AI inference usage to keep Agent Runners and AI Gateway costs within budget.
When your team’s usage hits the credit cap you define, active agent runs stop, new agent runs are blocked, and continued AI Gateway usage is paused to help you keep more of your credit balance.
This is especially useful for teams actively using AI features who want predictable monthly costs without manually watching the meter. Set it once, and Netlify enforces it automatically across your entire team.
Learn more in our docs on limiting AI features.
Agent run credits tracking
You can also track how much each agent run task costs on your agent runs page, shown next to how long the agent took to run the task.
Learn more about AI inference usage and how credits work.
- Mar 5, 2026
- Date parsed from source: Mar 5, 2026
- First seen by Releasebot: Mar 6, 2026
OpenAI GPT-5.4 and GPT-5.4 Pro Now Available in AI Gateway and Agent Runners
Netlify announces that OpenAI GPT-5.4 and GPT-5.4 Pro are now available through the AI Gateway and Agent Runners with zero configuration. Developers can use the OpenAI SDK directly in Netlify Functions without managing API keys, while the AI Gateway handles authentication, caching, and rate limiting.
OpenAI’s GPT-5.4 and GPT-5.4 Pro models
OpenAI’s GPT-5.4 and GPT-5.4 Pro models are now available through Netlify’s AI Gateway and Agent Runners with zero configuration required.
Use the OpenAI SDK directly in your Netlify Functions without managing API keys or authentication. The AI Gateway handles everything automatically. Here’s an example using the GPT-5.4 model:
```js
import OpenAI from 'openai';

export default async () => {
  const openai = new OpenAI();
  const response = await openai.responses.create({
    model: 'gpt-5.4',
    input: 'Give a concise explanation of how AI works.',
  });
  return Response.json(response);
};
```

GPT-5.4 and GPT-5.4 Pro are available for all Function types and Agent Runners. You get automatic access to Netlify’s caching, rate limiting, and authentication infrastructure.
Learn more in the AI Gateway documentation and Agent Runners documentation.
- Mar 3, 2026
- Date parsed from source: Mar 3, 2026
- First seen by Releasebot: Mar 4, 2026
GPT-5.3 Instant now available in AI Gateway
Netlify launches GPT-5.3 Instant via AI Gateway with zero configuration, letting you use the OpenAI SDK directly in Netlify Functions without managing API keys. The gateway handles authentication, caching, and rate limiting for you.
OpenAI’s GPT-5.3 Instant model is now available through Netlify’s AI Gateway with zero configuration required.
Use the OpenAI SDK directly in your Netlify Functions without managing API keys or authentication. The AI Gateway handles everything automatically. Here’s an example using the GPT-5.3 Instant model:
```js
import OpenAI from 'openai';

export default async () => {
  const openai = new OpenAI();
  const response = await openai.responses.create({
    model: 'gpt-5.3-chat-latest',
    input: 'How does AI work?'
  });
  return Response.json(response);
};
```

Note: The model API name is gpt-5.3-chat-latest.
GPT-5.3 Instant is available for all Function types. You get automatic access to Netlify’s caching, rate limiting, and authentication infrastructure.
Learn more in the AI Gateway documentation.
- Mar 3, 2026
- Date parsed from source: Mar 3, 2026
- First seen by Releasebot: Mar 4, 2026
Deploy Preview screenshots in agent runs
New: completed agent runs now show a Deploy Preview screenshot so you can see results at a glance. Track agent run sessions from your Netlify dashboard without opening the full preview. Try it today by going to Agent runs in your project and starting a run.
Starting today, all completed agent runs show a screenshot of your Deploy Preview. This makes it easier to quickly see the result of an agent run and keep track of agent run sessions without opening the full preview.
Test it out today:
- Go to your Netlify project dashboard.
- On the left, select Agent runs, then choose an existing agent run or start a new one by entering a prompt and selecting Run task.
- At the bottom of your agent run sessions, you’ll find a screenshot of your Deploy Preview. The screenshot is taken from the main page of your project.
Note:
If you’ve set up private deploys or password protection, the screenshot will show a sign-in page instead. Learn more about Password protection.
Learn more about getting started with Agent Runners.
- Mar 3, 2026
- Date parsed from source: Mar 3, 2026
- First seen by Releasebot: Mar 3, 2026
Gemini 3.1 Flash Lite Preview now available in AI Gateway
Google Gemini 3.1 Flash Lite Preview lands via AI Gateway, enabling easy use from Netlify Functions without API key setup. It works across function types and pairs with Netlify features for configurable caching and rate limiting.
Google’s Gemini 3.1 Flash Lite Preview is now available through AI Gateway
You can call this model from Netlify Functions without configuring API keys; the AI Gateway provides the connection to Google for you.
Example usage in a Function:
```js
import { GoogleGenAI } from '@google/genai';

export default async () => {
  const ai = new GoogleGenAI();
  const response = await ai.models.generateContent({
    model: 'gemini-3.1-flash-lite-preview',
    contents: 'How can AI improve my coding?'
  });
  return Response.json(response);
};
```

This model works across any function type and is compatible with other Netlify primitives such as caching and rate limiting, giving you control over request behavior across your site.
Learn more in the AI Gateway documentation.
- Feb 27, 2026
- Date parsed from source: Feb 27, 2026
- First seen by Releasebot: Feb 28, 2026
Use Netlify Agent Runners from Linear
Netlify Agent Runners now integrate with Linear, letting users launch agent runs directly from any Linear issue and include synced Slack context. Pick Claude Code, Gemini, or OpenAI Codex, edit prompts, and run tasks from Linear for faster, clearer collaboration.
Netlify Agent Runners for Linear
Linear users can now launch Netlify Agent Runners directly from any Linear issue, allowing you to seamlessly share context with your AI agent of choice. If you have your Linear issue synced with related Slack messages, this context will also be included in your agent run prompt.
Before starting your agent run, you can review and edit your prompt. Next, you can choose which AI agent to use — Claude Code, Google Gemini, or OpenAI Codex. Netlify Agent Runners doesn’t lock you into using a single AI agent so you can pick the agent that fits the task best.
To start an agent run from Linear:
- Go to a Linear issue where you want to trigger an agent run.
- In the top right corner, select Configure coding tools….
- Toggle Netlify Agent Runners on.
- Go back to the issue and in the top right corner, select Open in Netlify Agent Runners.
- Review the prompt and choose your AI agent.
- To start the agent run, select Run task.
Once you’ve enabled this integration from your personal Linear preference settings, any Linear issue you open in your workspace will give you the option to open with Netlify Agent Runners.
Now your entire team can save time and seamlessly share context between Linear and Netlify Agent Runners while keeping this work clearly tracked across Linear and Netlify. Learn more about Agent Runners.
- Feb 27, 2026
- Date parsed from source: Feb 27, 2026
- First seen by Releasebot: Feb 28, 2026
Support for stale-while-revalidate in Cache API
Netlify Cache API gains full stale-while-revalidate support with automatic background revalidation for fetchWithCache. A new needsRevalidation API lets you check cache staleness and trigger background revalidation when using cache.match and cache.put.
SWR support in the Netlify Cache API
The Netlify Cache API now fully supports stale-while-revalidate (SWR). This lifts a previous limitation of the Cache API, in response to a customer request.
When using fetchWithCache with the swr option, background revalidation is handled automatically. If a response is stale but still within the SWR window, it’s served immediately while a fresh response is fetched and cached in the background.
```ts
import { fetchWithCache, DAY, HOUR } from "@netlify/cache";
import type { Config, Context } from "@netlify/functions";

export default async (req: Request, context: Context) => {
  const response = await fetchWithCache("https://example.com/expensive-api", {
    ttl: 2 * DAY,
    swr: HOUR,
    tags: ["product"],
  });
  return response;
};

export const config: Config = {
  path: "/api/products",
};
```

For users who interact directly with cache.match and cache.put, a new needsRevalidation method lets you check whether a cached response is stale and trigger background revalidation manually:
```ts
import { needsRevalidation, cacheHeaders, MINUTE, HOUR } from "@netlify/cache";
import type { Config, Context } from "@netlify/functions";

const cache = await caches.open("my-cache");

export default async (req: Request, context: Context) => {
  const request = new Request("https://example.com/expensive-api");
  const cached = await cache.match(request);

  if (cached) {
    if (needsRevalidation(cached)) {
      context.waitUntil(
        fetch(request).then((fresh) => {
          const response = new Response(fresh.body, {
            headers: {
              ...Object.fromEntries(fresh.headers),
              ...cacheHeaders({ ttl: MINUTE, swr: HOUR }),
            },
          });
          return cache.put(request, response);
        })
      );
    }
    return cached;
  }

  const fresh = await fetch(request);
  const response = new Response(fresh.body, {
    headers: {
      ...Object.fromEntries(fresh.headers),
      ...cacheHeaders({ ttl: MINUTE, swr: HOUR }),
    },
  });
  context.waitUntil(cache.put(request, response.clone()));
  return response;
};

export const config: Config = {
  path: "/api/data",
};
```

Learn more in the Cache API documentation and the caching overview.
- Feb 27, 2026
- Date parsed from source: Feb 27, 2026
- First seen by Releasebot: Feb 28, 2026
Automatic PHP bot scan blocking now live on all plans
Edge Blocking Update
Netlify now automatically blocks bot scans targeting PHP paths across all plans — no configuration required.
Previously, these bots, which typically send no User-Agent header, generated noise in Observability logs and metrics. Netlify now blocks them at the edge.
Since rolling out edge-level blocking on December 28, 2025, Netlify has blocked 2.9 billion of these requests.
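The policy itself is simple. As a rough sketch only (hypothetical code, not Netlify's actual implementation), an edge handler enforcing the same rule might look like this:

```typescript
// Hypothetical sketch: reject requests for PHP paths that arrive
// without a User-Agent header, the signature of these bot scans.
const handler = async (request: Request): Promise<Response | undefined> => {
  const url = new URL(request.url);
  const isPhpPath = url.pathname.endsWith(".php");
  const missingUserAgent = !request.headers.has("user-agent");

  if (isPhpPath && missingUserAgent) {
    // Block the scan before it reaches the site or shows up in logs.
    return new Response("Forbidden", { status: 403 });
  }

  // Returning undefined lets the request continue to the site as normal.
  return undefined;
};

export default handler;
```

With Netlify's built-in blocking, none of this is needed; the sketch only illustrates the shape of the rule now applied automatically.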