DigitalOcean Release Notes

Last updated: Apr 4, 2026

DigitalOcean Products

All DigitalOcean Release Notes (80)

  • Apr 3, 2026
    • Date parsed from source:
      Apr 3, 2026
    • First seen by Releasebot:
      Apr 4, 2026

    DigitalOcean

    3 April

    DigitalOcean deprecates Meta Llama 3.1 8B-Instruct and Mistral NeMo in Gradient AI Platform and Model Catalog.

    The following models are deprecated from DigitalOcean Gradient™ AI Platform:

    • Meta Llama 3.1 8B-Instruct
    • Mistral NeMo

    Migrate to a supported active model to avoid service disruption. For information on our model deprecation policy and recommended replacement models, see Model Support Policy.

    The following models are deprecated from the Model Catalog:

    • Meta Llama 3.1 8B-Instruct
    • Mistral NeMo

    Migrate to the Llama 3.3 70B-Instruct (llama3.3-70b-instruct) and gpt-oss-20b (openai-gpt-oss-20b) models, respectively, to avoid service disruption.
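    As a sketch of this migration, the only client-side change is the model slug in the request body. The replacement slugs (llama3.3-70b-instruct, openai-gpt-oss-20b) come from this note; the old slugs below and the OpenAI-style request shape are illustrative assumptions, not confirmed identifiers.

```python
# Sketch: swap a deprecated model slug for its recommended replacement
# in a serverless-inference request body. The old slugs used as keys here
# are hypothetical placeholders; the replacement slugs are from this note.

REPLACEMENTS = {
    "llama3.1-8b-instruct": "llama3.3-70b-instruct",      # hypothetical old slug
    "mistral-nemo-instruct-2407": "openai-gpt-oss-20b",   # hypothetical old slug
}

def migrate_request(body: dict) -> dict:
    """Return a copy of the request body with any deprecated model slug replaced."""
    migrated = dict(body)
    migrated["model"] = REPLACEMENTS.get(body["model"], body["model"])
    return migrated

old_request = {
    "model": "llama3.1-8b-instruct",
    "messages": [{"role": "user", "content": "Hello"}],
}
print(migrate_request(old_request)["model"])
```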

    Original source Report a problem
  • Apr 1, 2026
    • Date parsed from source:
      Apr 1, 2026
    • First seen by Releasebot:
      Apr 1, 2026

    DigitalOcean

    Now Available: Cloud Security Posture Management (CSPM)

    DigitalOcean adds agentless in-dashboard CSPM visibility to find misconfigurations and guide faster remediation.

    Get agentless, in-dashboard visibility into your DigitalOcean infrastructure to quickly find and fix misconfigurations. CSPM helps you prioritize risks, evaluate and implement guided remediation, and stay on top of your security posture without adding new tools.

    Start your first scan now ->

    Original source Report a problem

  • Apr 1, 2026
    • Date parsed from source:
      Apr 1, 2026
    • First seen by Releasebot:
      Apr 1, 2026

    DigitalOcean

    1 April

    DigitalOcean adds Trinity Large in public preview for serverless inference and agent development on Gradient AI Platform and Inference Hub.

    The following Arcee model is now available on DigitalOcean Gradient™ AI Platform for serverless inference and Agent Development Kit:

    • Trinity Large (Public Preview)

    For more information, see the Available Models page.

    The following Arcee model is now available on DigitalOcean Gradient™ AI Inference Hub for serverless inference:

    • Trinity Large (Public Preview)

    For more information, see the Available Models page.

    Original source Report a problem
  • Mar 31, 2026
    • Date parsed from source:
      Mar 31, 2026
    • First seen by Releasebot:
      Apr 1, 2026

    DigitalOcean

    31 March

    DigitalOcean adds broader security and GPU capabilities across Kubernetes and Droplets, including general availability of control plane firewalls, CSPM, and NVIDIA B300 GPUs, plus single-node B300 GPU worker nodes for DOKS, available by contract only.

    • NVIDIA B300 GPUs are now available as single-node GPU worker nodes in DigitalOcean Kubernetes (DOKS), by contract only. To add B300 GPU nodes to your cluster, contact sales. Learn more about GPU worker nodes.

    • Control plane firewalls for DigitalOcean Kubernetes are now in general availability. Control plane firewalls restrict access to your cluster’s API server to a set of allowed IP addresses. Worker node IPs are automatically kept in sync as nodes scale up or down.

      You can enable control plane firewalls using the DigitalOcean API, doctl, or Terraform.

    • NVIDIA B300 GPUs are now generally available in RIC1, by contract only. B300 GPUs are available in 1- and 8-GPU configurations for GPU Droplets via the control panel and via the API using slugs gpu-b300x1-288gb (1 GPU) and gpu-b300x8-2304gb (8 GPUs). Learn more about GPU Droplet plans.

    • Cloud Security Posture Management (CSPM) is now generally available. CSPM evaluates your DigitalOcean resources for misconfigurations and security risks, surfaces findings by severity, and provides guided remediation to help you resolve them. For more information, see the CSPM documentation.
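    The control plane firewall setting above can be sketched as an API request body. This is a minimal sketch, assuming the cluster-update endpoint accepts a control_plane_firewall object with enabled and allowed_addresses fields; verify the exact shape against the API reference before use.

```python
import json

def firewall_payload(allowed_cidrs):
    """Build a cluster-update body that restricts API server access
    to the given list of allowed IP addresses or CIDR ranges."""
    return {
        "control_plane_firewall": {
            "enabled": True,
            "allowed_addresses": list(allowed_cidrs),
        }
    }

# Example: allow only one office CIDR to reach the cluster's API server.
body = firewall_payload(["203.0.113.0/24"])
print(json.dumps(body))
```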

    Original source Report a problem
  • Mar 30, 2026
    • Date parsed from source:
      Mar 30, 2026
    • First seen by Releasebot:
      Mar 31, 2026

    DigitalOcean

    30 March

    DigitalOcean releases Private Droplets in public preview with VPC-only networking and no direct public connectivity by default.

    Private Droplets are now in public preview.

    Private Droplets have no direct public connectivity by default. They use VPC-only networking and integrate automatically with the VPC NAT gateway, VPC peering, and VPC private DNS.

    All customers can opt in from the Feature Preview page. Create Private Droplets by setting public_networking: false in the Create Droplet API.
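    A minimal sketch of such a Create Droplet request body, with the public_networking field taken from this note; the name, region, size, image, and VPC UUID are placeholders.

```python
import json

# Sketch of a Create Droplet body for a Private Droplet. Only
# "public_networking": false comes from this note; all other
# values are placeholders to adapt to your account.
private_droplet = {
    "name": "private-app-01",           # placeholder name
    "region": "nyc3",                   # placeholder region
    "size": "s-1vcpu-1gb",              # placeholder plan
    "image": "ubuntu-24-04-x64",        # placeholder image
    "vpc_uuid": "<your-vpc-uuid>",      # placeholder: the VPC the Droplet joins
    "public_networking": False,         # VPC-only networking, no public interface
}
print(json.dumps(private_droplet))
```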

    Original source Report a problem
  • Mar 27, 2026
    • Date parsed from source:
      Mar 27, 2026
    • First seen by Releasebot:
      Mar 27, 2026
    • Modified by Releasebot:
      Mar 28, 2026

    DigitalOcean

    27 March

    DigitalOcean adds new OpenAI models to Gradient AI Platform and AI Inference Hub, including GPT-5.4 mini, nano, pro, and GPT Image 1.5. DigitalOcean also brings automatic garbage collection to Container Registry in public preview to help free storage in the background.

    • The following OpenAI models are now available on DigitalOcean Gradient™ AI Platform for serverless inference and Agent Development Kit:
      • GPT-5.4 mini
      • GPT-5.4 nano
      • GPT-5.4 pro
      • GPT Image 1.5

    For more information, see the Available Models page.

    • The following OpenAI models are now available on DigitalOcean Gradient™ AI Inference Hub for serverless inference:

      • GPT-5.4 mini
      • GPT-5.4 nano
      • GPT-5.4 pro
      • GPT Image 1.5
    • Automatic garbage collection for DigitalOcean Container Registry (DOCR) is now available in public preview. When enabled, DOCR automatically cleans up unreferenced image layers and untagged manifests in the background, freeing storage without requiring manual garbage collection runs or read-only downtime. For more details, see Automatic Garbage Collection.

    Original source Report a problem
  • Mar 25, 2026
    • Date parsed from source:
      Mar 25, 2026
    • First seen by Releasebot:
      Mar 26, 2026

    DigitalOcean

    25 March

    DigitalOcean adds App Platform Scale to Zero in private preview to sleep idle web services and cut costs.

    • Now in private preview, App Platform’s Scale to Zero feature automatically puts unused web service components to sleep after a configurable period of inactivity and wakes them when they receive external traffic. This helps reduce costs for web services with periods of low or no traffic.
    Original source Report a problem
  • Mar 17, 2026
    • Date parsed from source:
      Mar 17, 2026
    • First seen by Releasebot:
      Mar 18, 2026

    DigitalOcean

    17 March

    DigitalOcean releases Nemotron-3-Super-120B on Gradient AI Platform for serverless inference and agents (Public Preview).

    The following NVIDIA model is now available on DigitalOcean Gradient™ AI Platform for serverless inference, Agent Development Kit, and agents:

    • Nemotron-3-Super-120B (Public Preview)

    For more information, see the Available Models page.

    Original source Report a problem
  • Mar 16, 2026
    • Date parsed from source:
      Mar 16, 2026
    • First seen by Releasebot:
      Mar 16, 2026
    • Modified by Releasebot:
      Mar 17, 2026

    DigitalOcean

    16 March

    DigitalOcean releases a new Richmond, VA datacenter (ric1) with GPU Droplets and Kubernetes, plus related regional availability. It also announces Gradient AI Dedicated Inference Service and Gradient AI Inference Hub in public preview with a model catalog and testing tools. NVIDIA B300 GPUs join ric1 in private preview; AMD Instinct MI350X GPUs are available by contract.

    Release notes

    • We have launched the Richmond, Virginia, USA (ric1) datacenter, which supports GPU Droplets, Kubernetes, and many other products. Learn more in the regional availability matrix.

    • Gradient AI Dedicated Inference Service is a managed LLM hosting service for optimized inference on dedicated GPUs, now available in public preview and enabled for all users. For more information, see Use Dedicated Inference.

    • DigitalOcean Gradient™ AI Inference Hub is now available in public preview and is enabled for all users. Inference Hub provides access to a catalog of foundation models with support for serverless inference and dedicated inference, along with a Model Playground for testing models before deployment. During the public preview period, features and model availability may change.

    • NVIDIA B300 GPUs are now available in RIC1 as a private preview, by contract only. B300 GPUs are available in 1- and 8-GPU configurations for GPU Droplets via the control panel and via the API using slugs gpu-b300x1-288gb (1 GPU) and gpu-b300x8-2304gb (8 GPUs). Learn more about GPU Droplet plans.

    • AMD Instinct MI350X GPUs are now available in RIC1 by contract only in 1- and 8-GPU configurations for single- and multi-node GPU Droplets. To create GPU Droplets with MI350X GPUs, contact sales. Learn more about GPU Droplet plans.
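    As a sketch, a Create Droplet request body for a 1-GPU B300 Droplet uses the slug and region from the notes above; the name, image, and SSH key values are placeholders.

```python
import json

# Sketch of a Create Droplet body for a 1x B300 GPU Droplet. The size slug
# (gpu-b300x1-288gb) and region (ric1) come from these notes; the name,
# image, and SSH key ID are placeholders.
droplet = {
    "name": "b300-worker-01",        # placeholder name
    "region": "ric1",                # Richmond, VA datacenter from this note
    "size": "gpu-b300x1-288gb",      # 1-GPU B300 configuration from this note
    "image": "ubuntu-24-04-x64",     # placeholder: pick a GPU-ready image
    "ssh_keys": [123456],            # placeholder SSH key ID
}
print(json.dumps(droplet))
```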

    Original source Report a problem
  • Mar 13, 2026
    • Date parsed from source:
      Mar 13, 2026
    • First seen by Releasebot:
      Mar 14, 2026

    DigitalOcean

    13 March

    DigitalOcean announces a new Control Panel view of resource usage and limits for team owners and resource modifiers, helping track capacity and request limit increases. It also adds Namespace Access Keys for Functions with per-user credentials and automatic revocation for removed members; legacy tokens are deprecated and will be removed on June 3, 2026.

    Resource usage and limits

    • Team owners and resource modifiers can now view resource usage and limits in the DigitalOcean Control Panel. You can use this interface to understand resource capacity, manage resource growth, and initiate support requests to increase limits when needed. For more information, see View Resource Limits.

    Namespace access keys (Functions)

    • Namespace access keys are now available for Functions. They provide user-specific credentials per namespace, so you can create a key for each user or application and revoke access individually. Keys linked to removed team members are revoked automatically. The legacy shared namespace token is deprecated and will be removed on June 3, 2026. During the migration period, both methods work; after June 3, 2026, legacy tokens will no longer authenticate.
    • For more information, see How to Manage Namespace Access Keys.
    Original source Report a problem
