Vercel Release Notes

Last updated: Apr 3, 2026

  • Apr 2, 2026
    • Date parsed from source:
      Apr 2, 2026
    • First seen by Releasebot:
      Apr 3, 2026
    Vercel

    Custom Class Serialization in Workflow SDK

    Vercel adds custom class serialization to Workflow SDK, making it easier to pass class instances between workflow and step functions with automatic handling through the new @workflow/serde package.

    Workflow SDK now supports custom class serialization, letting you pass your own class instances between workflow and step functions.

    Workflow SDK serializes standard JavaScript types like primitives, objects, arrays, Date, Map, Set, and more. Custom class instances were previously not supported because the serialization system didn't know how to reconstruct them. With the new @workflow/serde package, you can define how your classes are serialized and deserialized by implementing two static methods using WORKFLOW_SERIALIZE and WORKFLOW_DESERIALIZE. Here's an example of how we used custom serialization in @vercel/sandbox to greatly improve DX:

    Example of how @vercel/sandbox implements workflow custom class serialization

    Once implemented, instances of your class can be passed as arguments and return values between workflow and step functions, with the serialization system handling conversion automatically.

    Example usage of the serialized Sandbox class within Workflow DevKit
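    As a generic, self-contained sketch of the shape described above (the Symbol constants below are local stand-ins for the keys `@workflow/serde` actually exports, and the exact method signatures are assumptions rather than the package's documented API):

```typescript
// Stand-ins for the symbol keys exported by @workflow/serde; a real app
// would `import { WORKFLOW_SERIALIZE, WORKFLOW_DESERIALIZE } from '@workflow/serde'`.
const WORKFLOW_SERIALIZE = Symbol('WORKFLOW_SERIALIZE');
const WORKFLOW_DESERIALIZE = Symbol('WORKFLOW_DESERIALIZE');

// A class whose instances should cross the workflow/step boundary.
class Counter {
  constructor(public count: number) {}

  increment(): number {
    return ++this.count;
  }

  // Flatten the instance to a JSON-safe shape (signature is an assumption;
  // the note only says two static methods are required).
  static [WORKFLOW_SERIALIZE](instance: Counter): { count: number } {
    return { count: instance.count };
  }

  // Rebuild a live instance from that shape on the other side.
  static [WORKFLOW_DESERIALIZE](data: { count: number }): Counter {
    return new Counter(data.count);
  }
}

// Simulate the round trip the serialization system would perform:
const wire = Counter[WORKFLOW_SERIALIZE](new Counter(2));
const restored = Counter[WORKFLOW_DESERIALIZE](wire);
console.log(restored.increment()); // methods work again after the round trip
```

    In the real SDK, the serialization system invokes these methods for you whenever an instance crosses the workflow/step boundary.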

    • See the full example application utilizing @vercel/sandbox and Workflow SDK.
    • Read the serialization documentation to learn more.
  • Apr 2, 2026
    • Date parsed from source:
      Apr 2, 2026
    • First seen by Releasebot:
      Apr 3, 2026
    Vercel

    Qwen 3.6 Plus on AI Gateway

    Vercel adds Qwen 3.6 Plus to AI Gateway, bringing stronger agentic coding, improved multimodal reasoning, a 1M context window, and better tool-calling, planning, and multilingual performance for developers using the unified model API.

    Qwen 3.6 Plus from Alibaba is now available on Vercel AI Gateway.

    Compared to Qwen 3.5 Plus, this model adds stronger agentic coding capabilities, from frontend development to repository-level problem solving, along with improved multimodal perception and reasoning. It features a 1M context window and improved performance on tool-calling, long-horizon planning, and multilingual tasks.

    To use Qwen 3.6 Plus, set model to qwen/qwen3.6-plus in the AI SDK.
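    In practice that looks something like the sketch below (only the model ID comes from this note; the prompt is made up, and a gateway credential, e.g. the AI_GATEWAY_API_KEY environment variable, is assumed to be configured at runtime):

```typescript
import { generateText } from 'ai';

// Model ID from this release note; AI Gateway resolves the plain
// 'creator/model' string when a gateway credential is available.
const { text } = await generateText({
  model: 'qwen/qwen3.6-plus',
  prompt: 'Plan the steps to migrate this Express app to Fastify.',
});

console.log(text);
```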

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in custom reporting, observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard, or try it in our model playground.


  • Apr 2, 2026
    • Date parsed from source:
      Apr 2, 2026
    • First seen by Releasebot:
      Apr 3, 2026
    Vercel

    Gemma 4 on AI Gateway

    Vercel adds Google Gemma 4 26B and 31B to AI Gateway, bringing open models with function calling, structured JSON, system instructions, native vision, and 256K context for agentic workflows across 140+ languages.

    Gemma 4 26B (MoE) and 31B (Dense) from Google are now available on Vercel AI Gateway.

    Built on the same architecture as Gemini 3, both open models support function-calling, agentic workflows, structured JSON output, and system instructions. Both support up to 256K context, 140+ languages, and native vision.

    • 26B (MoE): Activates only 3.8B of its 26B total parameters during inference, optimized for lower latency and faster tokens-per-second.
    • 31B (Dense): All parameters are active during inference, targeting higher output quality. Better suited as a foundation for fine-tuning.

    To use Gemma 4, set model to google/gemma-4-31b-it or google/gemma-4-26b-a4b-it in the AI SDK.

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in custom reporting, observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard, or try it in our model playground.

  • Apr 2, 2026
    • Date parsed from source:
      Apr 2, 2026
    • First seen by Releasebot:
      Apr 3, 2026
    Vercel

    Zero-configuration Go backend support

    Vercel adds zero-configuration Go API backend deployments with automatic scaling and Active CPU pricing.

    Go API backends can now be deployed on Vercel with zero configuration.

    Vercel now recognizes Go servers as first-class backends, automatically provisioning the right resources and configuring your application without redirects in vercel.json or the /api folder convention.

    Backends on Vercel use Fluid compute with Active CPU pricing by default. Your Go API scales automatically with traffic, and you pay only for active CPU time rather than idle capacity.

  • Apr 1, 2026
    • Date parsed from source:
      Apr 1, 2026
    • First seen by Releasebot:
      Apr 3, 2026
    Vercel

    Chat SDK adds Zernio support

    Chat SDK adds a Zernio adapter for unified social bots across major messaging and social platforms.

    Chat SDK now supports Zernio, a unified social media API, with the new Zernio adapter. This is an official vendor adapter built and maintained by the Zernio team.

    Teams can build bots that work across Instagram, Facebook, Telegram, WhatsApp, X/Twitter, Bluesky, and Reddit through a single integration.

    Feature support varies by platform; rich cards work on Facebook, Instagram, Telegram, and WhatsApp, while editing and streaming are currently limited to Telegram.

    Read the documentation to get started, browse the directory, or build your own adapter.

  • Apr 1, 2026
    • Date parsed from source:
      Apr 1, 2026
    • First seen by Releasebot:
      Apr 2, 2026
    Vercel

    GLM 5V Turbo on AI Gateway

    Vercel adds GLM 5V Turbo from Z.ai to AI Gateway, bringing a multimodal coding model for screenshot-to-code generation, visual debugging, and autonomous GUI use through a unified API with usage tracking and routing tools.

    GLM 5V Turbo from Z.ai is now available on Vercel AI Gateway.

    GLM 5V Turbo is a multimodal coding model that turns screenshots and designs into code, debugs visually, and operates GUIs autonomously. It's strong at design-to-code generation, visual code generation, and navigating real GUI environments, at a smaller parameter size than comparable models.

    To use GLM 5V Turbo, set model to zai/glm-5v-turbo in the AI SDK.
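    Since the model is multimodal, a screenshot-to-code call might look like the following AI SDK sketch (only the model ID comes from this note; the file path and prompt are hypothetical, and a gateway credential is assumed to be configured):

```typescript
import { readFile } from 'node:fs/promises';
import { generateText } from 'ai';

// Hypothetical screenshot-to-code call via AI Gateway.
const screenshot = await readFile('./dashboard.png');

const { text } = await generateText({
  model: 'zai/glm-5v-turbo',
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Recreate this UI as a React component.' },
        { type: 'image', image: screenshot },
      ],
    },
  ],
});

console.log(text);
```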

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in custom reporting, observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard, or try it in our model playground.

  • Mar 31, 2026
    • Date parsed from source:
      Mar 31, 2026
    • First seen by Releasebot:
      Apr 1, 2026
    Vercel

    Transfer Marketplace resources between teams

    Vercel adds dashboard-based Marketplace resource transfers between teams, making it easier to move supported databases like Prisma, Neon, and Supabase without the API. The new workflow supports owner and member roles and is built for smoother team and project changes.

    You can now transfer Marketplace resources between teams directly from the Vercel dashboard without relying on the API. This simplifies resource management during team or project changes. Both owner and member roles on the source and destination teams can initiate transfers.

    The destination team must have the corresponding integration installed before receiving a resource. Transfers currently support databases from Prisma, Neon, and Supabase, with support for additional providers and products coming soon.

    Start from your database settings in the dashboard, or learn more in the documentation.

  • Mar 26, 2026
    • Date parsed from source:
      Mar 26, 2026
    • First seen by Releasebot:
      Mar 27, 2026
    Vercel

    Vercel plugin now supported on OpenAI Codex and Codex CLI

    Vercel adds Codex plugin support for OpenAI Codex and the Codex CLI to boost development workflows.

    The Vercel plugin now supports OpenAI Codex and the Codex CLI.

    With the plugin, teams can access over 39 platform skills, three specialist agents, and real-time code validation to assist with their development workflow.

    Install it in the Codex app or from the Codex CLI:

    codex/plugins
    

    Learn more about the plugin in the documentation.

  • Mar 26, 2026
    • Date parsed from source:
      Mar 26, 2026
    • First seen by Releasebot:
      Mar 27, 2026
    Vercel

    Automatic persistence now in beta on Vercel Sandbox

    Vercel adds beta persistent sandboxes that automatically save filesystem state when stopped and restore it on resume, making long-running sessions easier to manage. The beta SDK and CLI also add lifetime management, session inspection, and config controls.

    Vercel Sandboxes can now automatically save their filesystem state when stopped and restore it when resumed. This removes the need for manual snapshots, making it easier to run long-running, durable sandboxes that continue where you left off.

    How it works

    A sandbox is the durable identity, now referenced by a name, together with its filesystem state and configuration options. A session is the compute tied to that state, started as needed.

    Automatic persistence introduces orchestration that separates storage from compute, reducing the need for manual snapshotting, so:

    • When you stop a sandbox, the session shuts down and the filesystem is automatically snapshotted.
    • When you resume, a new session boots from that snapshot. Stored state is not charged, so you pay only while a session is active.

    Persistence is enabled by default in the beta SDK and can be disabled with persistent: false. When disabled, the sandbox still exists after being stopped and can be resumed by its name, but each session starts with a clean filesystem.
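    A throwaway sandbox that keeps a stable name but opts out of persistence might look like this (a sketch; the sandbox name and commands are made up, only the persistent: false option comes from this note):

```typescript
import { Sandbox } from '@vercel/sandbox';

// Hypothetical scratch sandbox: with persistent: false, each session
// boots with a clean filesystem, but the name remains resumable.
const scratch = await Sandbox.create({
  name: 'ci-scratch',
  persistent: false,
});
await scratch.runCommand('npm', ['test']);
await scratch.stop(); // no filesystem snapshot is taken on stop
```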

    If a sandbox is stopped and you run a command, the SDK transparently creates a new session, so you don't need to check state or manually restart.

    The beta SDK adds methods for managing sandboxes over their lifetime:

    // Create a named sandbox
    const sandbox = await Sandbox.create({ name: 'user-a-workspace' });
    await sandbox.runCommand('npm', ['install']);
    await sandbox.stop();

    // Later, resume where you left off
    const resumed = await Sandbox.get({ name: 'user-a-workspace' });
    await resumed.runCommand('npm', ['run', 'dev']);
    

    The beta CLI adds configuration management and session inspection:

    # Spin up a sandbox for a user
    sandbox create --name user-alice
    # User runs commands — if the sandbox timed out, it resumes automatically
    sandbox run --name user-alice -- npm test
    # Check what happened across sessions
    sandbox sessions list user-alice
    # Tune resources without recreating
    sandbox config vcpus user-alice 4
    sandbox config timeout user-alice 5h
    

    This feature is in beta and requires upgrading to the beta SDK and CLI packages.

    Install the beta packages to try persistent sandboxes today:

    • pnpm install @vercel/sandbox@beta for the SDK
    • pnpm install -g sandbox@beta for the CLI

    Persistent sandboxes are available in beta on all plans.

    Learn more in the documentation.

  • Mar 26, 2026
    • Date parsed from source:
      Mar 26, 2026
    • First seen by Releasebot:
      Mar 27, 2026
    Vercel

    Vercel Sandboxes now allow unique, customizable names

    Vercel adds named sandboxes to the latest beta SDK and CLI, making sessions easier to find, resume, and manage with customizable names, configuration tools, and session inspection.

    Vercel Sandboxes created with the latest beta package will now have a unique, customizable name within your project, replacing the previous ID-based identification. Names make sandboxes easy to find and reference:

    // Create a named sandbox
    const sandbox = await Sandbox.create({ name: 'user-a-workspace' });
    await sandbox.runCommand('npm', ['install']);
    await sandbox.stop();

    // Later, resume where you left off
    const resumed = await Sandbox.get({ name: 'user-a-workspace' });
    await resumed.runCommand('npm', ['run', 'dev']);
    

    The beta CLI adds configuration management and session inspection:

    # Spin up a sandbox for a user
    sandbox create --name user-alice
    # User runs commands — if the sandbox timed out, it resumes automatically
    sandbox run --name user-alice -- npm test
    # Check what happened across sessions
    sandbox sessions list user-alice
    

    Named sandboxes are the identification mechanism behind automatic persistence, making a sandbox easy to reference both when it is created and when it is resumed.

    Install the beta packages to try named sandboxes today:

    • pnpm install @vercel/sandbox@beta for the SDK
    • pnpm install -g sandbox@beta for the CLI

    Learn more in the documentation.

