LaunchDarkly Release Notes

Last updated: Nov 20, 2025

  • Nov 13, 2025
    • Parsed from source:
      Nov 13, 2025
    • Detected by Releasebot:
      Nov 20, 2025

    LaunchDarkly AI SDK Vercel Provider for Server-Side JavaScript

    LaunchDarkly rolls out AI SDK Vercel Provider for Server-Side JavaScript in alpha, enabling AI model integration with LaunchDarkly flags. Quick setup and provider packages get you started, but note this is not production ready.

    LaunchDarkly AI SDK Vercel Provider for Server-Side JavaScript

    Caution
    This library is an alpha version and should not be considered ready for production use while this message is visible.

    LaunchDarkly overview

    LaunchDarkly is a feature management platform that serves over 100 billion feature flags daily to help teams build better software, faster. Get started using LaunchDarkly today!

    Quick Setup

    This package provides Vercel AI SDK integration for the LaunchDarkly AI SDK. The simplest way to use it is with the LaunchDarkly AI SDK's initChat method:

    • Install the required packages:
      npm install @launchdarkly/server-sdk-ai @launchdarkly/server-sdk-ai-vercel --save
      # or
      # yarn add @launchdarkly/server-sdk-ai @launchdarkly/server-sdk-ai-vercel
      
    • Create a chat session and use it:
      import { init } from '@launchdarkly/node-server-sdk';
      import { initAi } from '@launchdarkly/server-sdk-ai';
      // Initialize LaunchDarkly client
      const ldClient = init(sdkKey);
      const aiClient = initAi(ldClient);
      // Create a chat session
      const defaultConfig = { enabled: true, model: { name: 'gpt-4' }, provider: { name: 'openai' } };
      const chat = await aiClient.initChat('my-chat-config', context, defaultConfig);
      if (chat) {
        const response = await chat.invoke('What is the capital of France?');
        console.log(response.message.content);
      }
      
    • For more information about using the LaunchDarkly AI SDK, see the LaunchDarkly AI SDK documentation.

    Vercel AI Provider Installation

    Important: You will need to install additional provider packages for the specific AI models you want to use. The Vercel AI SDK requires separate packages for each provider.
    When creating a new Vercel AI model, LaunchDarkly uses an AI Config and the Vercel AI SDK's provider system to create a model instance. Install the Vercel AI provider package for every provider you plan to use in your AI Configs so that the corresponding models can be properly instantiated.

    Installing a Vercel AI Provider

    To use specific AI models, install the corresponding provider package:

    • For OpenAI models
      npm install @ai-sdk/openai --save
      # or
      yarn add @ai-sdk/openai
      

    For a complete list of available providers and installation instructions, see the Vercel AI SDK Providers documentation.
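
    For example, an AI Config that can serve both OpenAI and Anthropic models needs both provider packages installed (@ai-sdk/anthropic is Anthropic's package in the Vercel AI SDK's provider registry):

    ```shell
    npm install @ai-sdk/openai @ai-sdk/anthropic --save
    ```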

    Advanced Usage

    For more control, you can use the Vercel AI provider package directly with LaunchDarkly configurations:

    import { VercelProvider } from '@launchdarkly/server-sdk-ai-vercel';
    import { generateText } from 'ai';
    
    // Create a Vercel AI model from LaunchDarkly configuration
    const model = await VercelProvider.createVercelModel(aiConfig);
    // Convert LaunchDarkly messages and add user message
    const configMessages = aiConfig.messages || [];
    const userMessage = { role: 'user', content: 'What is the capital of France?' };
    const allMessages = [...configMessages, userMessage];
    // Track the model call with LaunchDarkly tracking
    const response = await aiConfig.tracker.trackMetricsOf(
      VercelProvider.getAIMetricsFromResponse,
      () => generateText({ model, messages: allMessages })
    );
    console.log('AI Response:', response.text);
    

    Contributing

    We encourage pull requests and other contributions from the community. Check out our contributing guidelines for instructions on how to contribute to this SDK.

    About LaunchDarkly

    • LaunchDarkly is a continuous delivery platform that provides feature flags as a service and allows developers to iterate quickly and safely. We allow you to easily flag your features and manage them from the LaunchDarkly dashboard. With LaunchDarkly, you can:
      • Roll out a new feature to a subset of your users (like a group of users who opt-in to a beta tester group), gathering feedback and bug reports from real-world use cases.
      • Gradually roll out a feature to an increasing percentage of users, and track the effect that the feature has on key metrics (for instance, how likely is a user to complete a purchase if they have feature A versus feature B?).
      • Turn off a feature that you realize is causing performance problems in production, without needing to re-deploy, or even restart the application with a changed configuration file.
      • Grant access to certain features based on user attributes, like payment plan (eg: users on the 'gold' plan get access to more features than users in the 'silver' plan).
      • Disable parts of your application to facilitate maintenance, without taking everything offline.
    • LaunchDarkly provides feature flag SDKs for a wide variety of languages and technologies. Check out our documentation for a complete list.
    • Explore LaunchDarkly
      • launchdarkly.com for more information
      • docs.launchdarkly.com for our documentation and SDK reference guides
      • apidocs.launchdarkly.com for our API documentation
      • blog.launchdarkly.com for the latest product updates
  • November 2025
    • No date parsed from source.
    • Detected by Releasebot:
      Nov 20, 2025

    LaunchDarkly Android SDK observability plugin Early Access

    LaunchDarkly unveils Android observability in early access with plugins for errors, logs, tracing and session replay. Get started with configurable options, privacy masking and step by step setup to instrument your Android app with the observability SDKs.

    Overview

    The LaunchDarkly observability features in the LaunchDarkly UI are publicly available in early access.
    The observability SDKs, implemented as plugins for LaunchDarkly server-side and client-side SDKs, are designed for use with the in-app observability features. They are currently available in Early Access, and APIs are subject to change until a 1.x version is released.
    If you are interested in participating in the Early Access Program for upcoming observability SDKs, sign up here.

    SDK quick links

    LaunchDarkly’s SDKs are open source. In addition to this reference guide, we provide source, API reference documentation, and a sample application:

    • SDK API documentation: Observability plugin API docs
    • GitHub repository: @launchdarkly/observability-android
    • Published module: Maven

    Prerequisites and dependencies

    This reference guide assumes that you are somewhat familiar with the LaunchDarkly Android SDK.
    The observability plugin is compatible with the Android SDK, version 5.9.0 and later.
    The LaunchDarkly Android SDK is compatible with Android SDK versions 21 and higher (Android 5.0, Lollipop).

    Get started

    Follow these steps to get started:

    • Install the plugin
    • Initialize the Android SDK client
    • Configure the plugin options
    • Configure additional instrumentations
    • Configure session replay
    • Explore supported features
    • Review observability data in LaunchDarkly

    Install the plugin

    LaunchDarkly uses a plugin to the Android SDK to provide observability.
    The first step is to make both the SDK and the observability plugin available as dependencies.
    Here’s how:

    implementation 'com.launchdarkly:launchdarkly-android-client-sdk:5.+'
    implementation 'com.launchdarkly:launchdarkly-observability-android:0.5.0'
    

    Then, import the plugin into your code:

    import com.launchdarkly.sdk.*;
    import com.launchdarkly.sdk.android.*;
    import com.launchdarkly.observability.plugin.Observability;
    import com.launchdarkly.sdk.android.integrations.Plugin;
    

    Initialize the client
    Next, initialize the SDK and the plugin.
    To initialize, you need your LaunchDarkly environment’s mobile key and the context for which you want to evaluate flags. This authorizes your application to connect to a particular environment within LaunchDarkly. To learn more, read Initialize the client in the Android SDK reference guide.

    Android observability SDK credentials
    The Android observability SDK uses a mobile key. Keys are specific to each project and environment. They are available from Project settings, on the Environments list. To learn more about key types, read Keys.
    Mobile keys are not secret and you can expose them in your client-side code without risk. However, never embed a server-side SDK key into a client-side application.

    Here’s how to initialize the SDK and plugin:

    LDConfig ldConfig = new LDConfig.Builder(AutoEnvAttributes.Enabled)
        .mobileKey("mobile-key-123abc")
        .plugins(Components.plugins().setPlugins(
            Collections.<Plugin>singletonList(new Observability(this.getApplication()))
        ))
        // other options
        .build();

    // You'll need this context later, but you can ignore it for now.
    LDContext context = LDContext.create("context-key-123abc");
    LDClient client = LDClient.init(this.getApplication(), ldConfig, context, 0);
    

    Configure the plugin options

    You can configure options for the observability plugin when you initialize the SDK. The plugin constructor takes an optional object with the configuration details.
    Here is an example:

    val ldConfig = LDConfig.Builder(AutoEnvAttributes.Enabled)
        .mobileKey("mobile-key-123abc")
        .plugins(
            Components.plugins().setPlugins(
                Collections.singletonList<Plugin>(
                    Observability(
                        this@BaseApplication,
                        Options(
                            resourceAttributes = Attributes.of(
                                AttributeKey.stringKey("serviceName"), "example-service"
                            )
                        )
                    )
                )
            )
        )
        .build()
    

    For more information on plugin options, read Configuration for client-side observability.

    Configure additional instrumentations

    To enable HTTP request instrumentation and user interaction instrumentation, add the following plugin and dependencies to your top level application’s Gradle file.

    plugins {
        id 'net.bytebuddy.byte-buddy-gradle-plugin' version '1.+'
    }

    dependencies {
        // Android HTTP Url instrumentation
        implementation 'io.opentelemetry.android.instrumentation:httpurlconnection-library:0.11.0-alpha'
        byteBuddy 'io.opentelemetry.android.instrumentation:httpurlconnection-agent:0.11.0-alpha'

        // OkHTTP instrumentation
        implementation 'io.opentelemetry.android.instrumentation:okhttp3-library:0.11.0-alpha'
        byteBuddy 'io.opentelemetry.android.instrumentation:okhttp3-agent:0.11.0-alpha'
    }
    

    Configure session replay

    The Android SDK supports session replay, which captures snapshots of your app’s UI at regular intervals. This allows you to visually review user sessions in LaunchDarkly to better understand user behavior and diagnose issues.
    To enable session replay, add the ReplayInstrumentation to the instrumentations list when configuring the observability plugin.
    Here’s how:

    import com.launchdarkly.observability.replay.ReplayInstrumentation
    import com.launchdarkly.observability.replay.ReplayOptions
    import com.launchdarkly.observability.replay.PrivacyProfile
    
    val ldConfig = LDConfig.Builder(AutoEnvAttributes.Enabled)
        .mobileKey("mobile-key-123abc")
        .plugins(
            Components.plugins().setPlugins(
                listOf(
                    Observability(
                        this@BaseApplication,
                        Options(
                            resourceAttributes = Attributes.of(
                                AttributeKey.stringKey("serviceName"), "example-service"
                            ),
                            instrumentations = listOf(
                                ReplayInstrumentation()
                            )
                        )
                    )
                )
            )
        )
        .build()
    

    Session replay configuration options
    You can customize session replay behavior by passing a ReplayOptions object to the ReplayInstrumentation constructor:

    val ldConfig = LDConfig.Builder(AutoEnvAttributes.Enabled)
        .mobileKey("mobile-key-123abc")
        .plugins(
            Components.plugins().setPlugins(
                listOf(
                    Observability(
                        this@BaseApplication,
                        Options(
                            instrumentations = listOf(
                                ReplayInstrumentation(
                                    options = ReplayOptions(
                                        privacyProfile = PrivacyProfile(
                                            maskTextInputs = true,
                                            maskText = true,
                                            maskSensitive = true
                                        ),
                                        serviceName = "example-service",
                                        serviceVersion = "1.0.0",
                                        debug = false
                                    )
                                )
                            )
                        )
                    )
                )
            )
        )
        .build()
    

    The available configuration options are:

    • privacyProfile: Controls how UI elements are masked in the replay. To learn more, read Privacy options.
    • serviceName: A name for your service. Defaults to “observability-android”.
    • serviceVersion: Version of your service. Defaults to the SDK version.
    • backendUrl: The backend URL for sending replay data. Defaults to LaunchDarkly’s backend.
    • debug: Enables verbose logging when set to true. Defaults to false.

    Privacy options
    The PrivacyProfile class controls how UI elements are masked during session replay. Session replay for Android uses Jetpack Compose semantics to identify and mask UI elements. By default, all masking options are enabled to protect user privacy.
    Here’s how to configure privacy settings:

    import com.launchdarkly.observability.replay.PrivacyProfile
    import com.launchdarkly.observability.replay.MaskMatcher
    
    val ldConfig = LDConfig.Builder(AutoEnvAttributes.Enabled)
        .mobileKey("mobile-key-123abc")
        .plugins(
            Components.plugins().setPlugins(
                listOf(
                    Observability(
                        this@BaseApplication,
                        Options(
                            instrumentations = listOf(
                                ReplayInstrumentation(
                                    options = ReplayOptions(
                                        privacyProfile = PrivacyProfile(
                                            maskTextInputs = true,
                                            maskText = false,
                                            maskSensitive = true
                                        )
                                    )
                                )
                            )
                        )
                    )
                )
            )
        )
        .build()
    

    The available privacy options are:

    • maskTextInputs: When true, masks all text input fields including editable text and paste operations. Defaults to true.
    • maskText: When true, masks all text elements in the UI. Defaults to true.
    • maskSensitive: When true, masks sensitive views that contain password fields or text matching sensitive keywords. Defaults to true.

    Sensitive keywords
    When maskSensitive is enabled, the SDK automatically masks any Compose UI text or content descriptions containing predetermined keywords. Keyword matching is not case sensitive. For the current set of keywords, read PrivacyProfile.

    Common privacy configurations
    For maximum privacy (recommended for production):

    privacyProfile = PrivacyProfile(
        maskTextInputs = true,
        maskText = true,
        maskSensitive = true
    )
    

    For debugging or development, you can turn masking off:

    privacyProfile = PrivacyProfile(
        maskTextInputs = false,
        maskText = false,
        maskSensitive = false
    )
    

    For selective masking, which masks inputs and sensitive data but shows regular text:

    privacyProfile = PrivacyProfile(
        maskTextInputs = true,
        maskText = false,
        maskSensitive = true
    )
    

    Custom masking with MaskMatcher
    You can implement custom masking logic using the MaskMatcher interface. This allows you to define your own rules for which UI elements should be masked.
    Here’s how:

    import androidx.compose.ui.semantics.SemanticsNode
    import androidx.compose.ui.semantics.SemanticsProperties
    import androidx.compose.ui.semantics.getOrNull
    import com.launchdarkly.observability.replay.MaskMatcher
    import com.launchdarkly.observability.replay.PrivacyProfile
    
    // Create a custom matcher that masks elements with specific test tags
    class CustomTestTagMatcher : MaskMatcher {
        override fun isMatch(node: SemanticsNode): Boolean {
            val testTag = node.config.getOrNull(SemanticsProperties.TestTag)
            return testTag == "sensitive-data" || testTag == "pii"
        }
    }
    
    // Use the custom matcher in your privacy profile
    val ldConfig = LDConfig.Builder(AutoEnvAttributes.Enabled)
        .mobileKey("mobile-key-123abc")
        .plugins(
            Components.plugins().setPlugins(
                listOf(
                    Observability(
                        this@BaseApplication,
                        Options(
                            instrumentations = listOf(
                                ReplayInstrumentation(
                                    options = ReplayOptions(
                                        privacyProfile = PrivacyProfile(
                                            maskTextInputs = true,
                                            maskText = false,
                                            maskSensitive = true,
                                            maskAdditionalMatchers = listOf(CustomTestTagMatcher())
                                        )
                                    )
                                )
                            )
                        )
                    )
                )
            )
        )
        .build()
    

    The MaskMatcher interface requires implementing a single method:

    • isMatch(node: SemanticsNode): Boolean - Returns true if the node should be masked, false otherwise.
      Custom matchers should execute synchronously and avoid heavy operations to prevent performance issues during screen captures.

    For more information on session replay configuration, read Configuration for session replay.

    Explore supported features
    The observability plugin supports the following features. After the SDK and plugin are initialized, you can access these from within your application:

    • Configuration for client-side observability
    • Configuration for session replay
    • Errors
    • Logs
    • Metrics
    • Tracing

    Review observability data in LaunchDarkly
    After you initialize the SDK and observability plugin, your application automatically starts sending observability data back to LaunchDarkly, including errors and logs. You can review this information in the LaunchDarkly user interface. To learn how, read Observability.

  • November 2025
    • No date parsed from source.
    • Detected by Releasebot:
      Nov 20, 2025

    LaunchDarkly AI SDK for Server-Side JavaScript

    LaunchDarkly unveils the AI SDK for server‑side JavaScript in alpha, bringing AI config, TrackedChat, and provider integrations to the platform. Ready for quick setup with defaults, configuration retrieval, and built‑in metrics while using LangChain or custom providers.

    LaunchDarkly AI SDK for Server-Side JavaScript

    Caution
    This library is an alpha version and should not be considered ready for production use while this message is visible.

    LaunchDarkly overview
    LaunchDarkly is a feature management platform that serves over 100 billion feature flags daily to help teams build better software, faster. Get started using LaunchDarkly today!

    Quick Setup

    • This assumes that you have already installed the LaunchDarkly Node.js (server-side) SDK, or a compatible edge SDK.
    • Install this package with npm or yarn:
      npm install @launchdarkly/server-sdk-ai --save
      # or yarn add @launchdarkly/server-sdk-ai
      
    • Create an AI SDK instance:
      // The ldClient instance should be created based on the instructions in the relevant SDK.
      const aiClient = initAi(ldClient);
      

    Setting Default AI Configurations

    When retrieving AI configurations, you need to provide default values that will be used if the configuration is not available from LaunchDarkly:

    • Fully Configured Default

      const defaultConfig = {
        enabled: true,
        model: {
          name: 'gpt-4',
          parameters: {
            temperature: 0.7,
            maxTokens: 1000
          }
        },
        messages: [
          {
            role: 'system',
            content: 'You are a helpful assistant.'
          }
        ]
      };
      
    • Disabled Default

      const defaultConfig = {
        enabled: false
      };
      

    Retrieving AI Configurations

    The config method retrieves AI configurations from LaunchDarkly with support for dynamic variables and fallback values:

    // Variables for template interpolation
    const aiConfig = await aiClient.config(aiConfigKey, context, defaultConfig, {
      myVariable: 'My User Defined Variable'
    });
    
    // Ensure configuration is enabled
    if (aiConfig.enabled) {
      const { messages, model, tracker } = aiConfig;
      // Use with your AI provider
    }
    
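    The variables you pass are substituted into the placeholders in the AI Config's message templates. As a standalone illustration of that substitution (a simplified sketch assuming Mustache-style `{{ name }}` placeholders, not the SDK's actual implementation):

    ```javascript
    // Simplified sketch of variable interpolation into a message template.
    // For illustration only; the SDK performs this for you.
    function interpolate(template, variables) {
      return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, name) =>
        name in variables ? String(variables[name]) : match
      );
    }

    const template = 'Hello, my variable is: {{ myVariable }}';
    const rendered = interpolate(template, { myVariable: 'My User Defined Variable' });
    console.log(rendered); // Hello, my variable is: My User Defined Variable
    ```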

    TrackedChat for Conversational AI

    TrackedChat provides a high-level interface for conversational AI with automatic conversation management and metrics tracking:

    • Automatically configures models based on AI configuration
    • Maintains conversation history across multiple interactions
    • Automatically tracks token usage, latency, and success rates
    • Works with any supported AI provider (see AI Providers for available packages)
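
    The conversation-management behavior above can be sketched as a minimal history-accumulating wrapper (plain JavaScript, with a hypothetical `generate` callback standing in for a real provider call; this is an illustration, not the SDK's code):

    ```javascript
    // Minimal sketch of a history-accumulating chat wrapper.
    // `generate` is a hypothetical stand-in for a real provider call.
    class SimpleChat {
      constructor(systemMessages = []) {
        this.messages = [...systemMessages];
      }

      async invoke(content, generate) {
        this.messages.push({ role: 'user', content });
        const reply = await generate(this.messages); // provider call sees full history
        this.messages.push(reply);
        return { message: reply };
      }

      getMessages() {
        return this.messages;
      }
    }

    // Usage with a canned responder standing in for the model:
    const chat = new SimpleChat([{ role: 'system', content: 'You are a helpful assistant.' }]);
    const echo = async (msgs) => ({ role: 'assistant', content: `Saw ${msgs.length} messages` });
    chat.invoke('I need help with my order', echo).then((res) => {
      console.log(res.message.content); // Saw 2 messages
      console.log(chat.getMessages().length); // 3
    });
    ```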

    Using TrackedChat

    • Use the same defaultConfig from the retrieval section above:
      const chat = await aiClient.createChat('customer-support-chat', context, defaultConfig, {
        customerName: 'John'
      });

      if (chat) {
        // Simple conversation flow - metrics are automatically tracked by invoke()
        const response1 = await chat.invoke('I need help with my order');
        console.log(response1.message.content);
        const response2 = await chat.invoke("What's the status?");
        console.log(response2.message.content);

        // Access conversation history
        const messages = chat.getMessages();
        console.log(`Conversation has ${messages.length} messages`);
      }

    Advanced Usage with Providers

    For more control, you can use the configuration directly with AI providers. We recommend using LaunchDarkly AI Provider packages when available:

    Using AI Provider Packages
    import { LangChainProvider } from '@launchdarkly/server-sdk-ai-langchain';
    const aiConfig = await aiClient.config(aiConfigKey, context, defaultValue);
    // Create LangChain model from configuration
    const llm = await LangChainProvider.createLangChainModel(aiConfig);
    // Use with tracking
    const response = await aiConfig.tracker.trackMetricsOf(
      LangChainProvider.getAIMetricsFromResponse,
      () => llm.invoke(messages)
    );
    console.log('AI Response:', response.content);
    
    Using Custom Providers
    import { LDAIMetrics } from '@launchdarkly/server-sdk-ai';
    const aiConfig = await aiClient.config(aiConfigKey, context, defaultValue);
    // Define custom metrics mapping for your provider
    const mapCustomProviderMetrics = (response: any): LDAIMetrics => ({
      success: true,
      usage: {
        total: response.usage?.total_tokens || 0,
        input: response.usage?.prompt_tokens || 0,
        output: response.usage?.completion_tokens || 0,
      }
    });
    // Use with custom provider and tracking
    const result = await aiConfig.tracker.trackMetricsOf(mapCustomProviderMetrics, () => customProvider.generate({
      messages: aiConfig.messages || [],
      model: aiConfig.model?.name || 'custom-model',
      temperature: aiConfig.model?.parameters?.temperature ?? 0.5,
    }));
    console.log('AI Response:', result.content);
    
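    Applied to an OpenAI-style response, the metrics mapping above behaves as follows (a standalone JavaScript restatement, shown with a hypothetical sample response object):

    ```javascript
    // JavaScript restatement of the custom metrics mapping shown above.
    const mapCustomProviderMetrics = (response) => ({
      success: true,
      usage: {
        total: response.usage?.total_tokens || 0,
        input: response.usage?.prompt_tokens || 0,
        output: response.usage?.completion_tokens || 0,
      },
    });

    // A hypothetical provider response with OpenAI-style usage fields:
    const sampleResponse = {
      content: 'Paris',
      usage: { total_tokens: 30, prompt_tokens: 25, completion_tokens: 5 },
    };

    const metrics = mapCustomProviderMetrics(sampleResponse);
    console.log(metrics); // { success: true, usage: { total: 30, input: 25, output: 5 } }
    ```

    Missing usage fields fall back to 0, so the tracker still records a well-formed metrics object when a provider omits token counts.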

    Contributing

    We encourage pull requests and other contributions from the community. Check out our contributing guidelines for instructions on how to contribute to this SDK.

  • November 2025
    • No date parsed from source.
    • Detected by Releasebot:
      Nov 20, 2025

    iOS SDK observability reference

    LaunchDarkly unveils an iOS observability plugin with early access Session Replay, enabling errors, logs, metrics and traces directly from Swift apps. The guide walks you through setup, config, privacy controls, and OpenTelemetry integration.

    Overview

    The LaunchDarkly observability features in the LaunchDarkly UI are publicly available in early access.
    The observability SDKs, implemented as plugins for LaunchDarkly server-side and client-side SDKs, are designed for use with the in-app observability features. They are currently available in Early Access, and APIs are subject to change until a 1.x version is released.
    If you are interested in participating in the Early Access Program for upcoming observability SDKs, sign up here.

    Overview

    This topic documents how to get started with the LaunchDarkly observability plugin for the iOS SDK.
    The iOS SDK supports the observability plugin for error monitoring, logging, and tracing.

    SDK quick links

    LaunchDarkly’s SDKs are open source. In addition to this reference guide, we provide source, API reference documentation, and a sample application:

    • SDK API documentation: Observability plugin API docs
    • GitHub repository: swift-launchdarkly-observability
    • Published module: Swift Package Manager

    Prerequisites and dependencies

    This reference guide assumes that you are somewhat familiar with the LaunchDarkly iOS SDK.
    The observability plugin is compatible with the iOS SDK, version 9.14.0 and later, and is only available if you are using Swift.

    Get started

    Follow these steps to get started:

    • Install the plugin
    • Initialize the iOS SDK client
    • Configure the plugin options
    • Explore supported features
    • Review observability data in LaunchDarkly

    Install the plugin

    LaunchDarkly uses a plugin to the iOS SDK to provide observability.
    The first step is to make both the SDK and the observability plugin available as dependencies.
    Here’s how:

    Package.swift, using Swift Package Manager

    //...
    dependencies: [
        .package(url: "https://github.com/launchdarkly/ios-client-sdk.git", .upToNextMinor(from: "9.0.0")),
        .package(url: "https://github.com/launchdarkly/swift-launchdarkly-observability.git", .upToNextMinor(from: "1.0.0")),
    ],
    targets: [
        .target(
            name: "YOUR_TARGET",
            dependencies: ["LaunchDarkly"]
        )
    ],
    //...
    

    Then, import the plugin into your code:

    import LaunchDarkly
    import LaunchDarklyObservability
    import OpenTelemetryApi
    

    Initialize the client

    Next, initialize the SDK and the plugin.
    To initialize, you need your LaunchDarkly environment’s mobile key. This authorizes your application to connect to a particular environment within LaunchDarkly. To learn more, read Initialize the client in the iOS SDK reference guide.
    Here’s how to initialize the SDK and plugin:

    iOS SDK v9.14+ (Swift)

    let config = LDConfig(mobileKey: "mobile-key-123abc", autoEnvAttributes: .enabled)
    config.plugins = [Observability()]
    
    let contextBuilder = LDContextBuilder(key: "context-key-123abc")
    guard case .success(let context) = contextBuilder.build() else { return }
    
    LDClient.start(config: config, context: context, startWaitSeconds: 5) { timedOut in
      if timedOut {
        // Client may not have the most recent flags for the configured context
      } else {
        // Client has received flags for the configured context
      }
    }
    

    Configure the plugin options

    You can configure options for the observability plugin when you initialize the SDK. The plugin constructor takes an optional object with the configuration details.
    Here is an example:

    Plugin options, iOS SDK v9.14+
    // Create configuration with custom options

    let configuration = Configuration(
      serviceName: "MyApp",
      otlpEndpoint: "https://otel.observability.app.launchdarkly.com:4318",
      serviceVersion: "1.2.3",
      resourceAttributes: [
        "environment": .string("production"),
        "team": .string("mobile-team"),
        "app.version": .string("1.2.3")
      ],
      customHeaders: [("Custom-Header", "header-value")],
      sessionTimeout: 30 * 60, // 30 minutes in seconds
      isDebug: false,
      isErrorTrackingDisabled: false,
      isLogsDisabled: false,
      isTracesDisabled: false,
      isMetricsDisabled: false
    )
    
    // Create the observability plugin with configuration
    let observabilityPlugin = Observability(configuration: configuration)
    
    let config = LDConfig(mobileKey: "mobile-key-123abc", autoEnvAttributes: .enabled)
    config.plugins = [observabilityPlugin]
    

    Configuration options

    The Configuration struct provides the following parameters:

    • serviceName: The service name for the application. Defaults to “App”.
    • otlpEndpoint: The endpoint URL for the OTLP exporter. Defaults to LaunchDarkly’s endpoint.
    • serviceVersion: The service version for the application. Defaults to “1.0.0”.
    • resourceAttributes: Additional OpenTelemetry resource attributes to include in telemetry data.
    • customHeaders: Custom headers to include with OTLP exports as key-value tuples.
    • sessionTimeout: Session timeout in seconds. Defaults to 30 minutes (1800 seconds).
    • isDebug: Enables additional logging for debugging. Defaults to false.
    • isErrorTrackingDisabled: Disables automatic error tracking if true. Defaults to false.
    • isLogsDisabled: Disables automatic log collection if true. Defaults to false.
    • isTracesDisabled: Disables automatic trace collection if true. Defaults to false.
    • isMetricsDisabled: Disables automatic metric collection if true. Defaults to false.
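
    For instance, assuming the Configuration initializer defaults any parameters you omit (as the defaults listed above suggest), a minimal setup that keeps error tracking and logs but turns off trace and metric export might look like:

    ```swift
    // Sketch: disable traces and metrics, keep everything else at defaults.
    let configuration = Configuration(
        serviceName: "MyApp",
        isTracesDisabled: true,
        isMetricsDisabled: true
    )

    let config = LDConfig(mobileKey: "mobile-key-123abc", autoEnvAttributes: .enabled)
    config.plugins = [Observability(configuration: configuration)]
    ```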

    Manual instrumentation

    After initializing the observability plugin, you can use the LDObserve singleton to manually instrument your iOS application with custom metrics, logs, traces, and error reporting.

    Recording custom metrics

    Record metrics

    // Record a point-in-time metric
    LDObserve.shared.recordMetric(metric: Metric(name: "response_time_ms", value: 250.0))
    
    // Record metrics with attributes
    let attributes: [String: AttributeValue] = [
      "endpoint": .string("/api/users"),
      "method": .string("GET")
    ]
    LDObserve.shared.recordMetric(metric: Metric(
      name: "api_call_duration",
      value: 120.5,
      attributes: attributes
    ))
    
    // Record different metric types
    LDObserve.shared.recordCount(metric: Metric(name: "button_clicks", value: 1.0))
    LDObserve.shared.recordIncr(metric: Metric(name: "page_views", value: 1.0))
    LDObserve.shared.recordHistogram(metric: Metric(name: "request_size_bytes", value: 1024.0))
    LDObserve.shared.recordUpDownCounter(metric: Metric(name: "active_connections", value: 5.0))
    

    Recording custom logs

    Record logs

    // Record a basic log message
    LDObserve.shared.recordLog(
      message: "User login successful",
      severity: .info,
      attributes: [:]
    )
    
    // Record logs with custom attributes
    let logAttributes: [String: AttributeValue] = [
      "user_id": .string("12345"),
      "action": .string("login")
    ]
    LDObserve.shared.recordLog(
      message: "Authentication completed",
      severity: .info,
      attributes: logAttributes
    )
    

    Recording custom errors

    Record errors

    do {
      // Some operation that might fail
      try performNetworkRequest()
    } catch {
      // Record the error with context
      let errorAttributes: [String: AttributeValue] = [
        "operation": .string("network_request"),
        "endpoint": .string("/api/data")
      ]
      LDObserve.shared.recordError(error: error, attributes: errorAttributes)
    }
    

    Recording custom traces

    Record traces

    // Start a span and end it manually
    let attributes: [String: AttributeValue] = [
      "table": .string("users"),
      "operation": .string("select")
    ]
    let span = LDObserve.shared.startSpan(name: "database_query", attributes: attributes)
    // Perform your operation
    performDatabaseQuery()
    // Optionally add more attributes during execution
    span.setAttribute(key: "rows_returned", value: .int(42))
    // Always end the span
    span.end()
    
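    The comment above notes that you must always end the span. One way to guarantee that, even when the traced operation throws, is Swift's defer statement. This sketch reuses the startSpan and end calls shown above; the throwing performDatabaseQuery is a hypothetical stand-in for your own code:

    ```swift
    func runTracedQuery() throws {
      let span = LDObserve.shared.startSpan(
        name: "database_query",
        attributes: ["operation": .string("select")]
      )
      // defer runs on every exit path, so the span is ended
      // whether the query succeeds or throws.
      defer { span.end() }

      try performDatabaseQuery() // hypothetical throwing operation
    }
    ```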

    Using span builder for advanced tracing

    Advanced span usage

    // Get span builder for more control
    let spanBuilder = LDObserve.shared.spanBuilder(spanName: "complex_operation")
      .setSpanKind(spanKind: .client)
    
    spanBuilder.setAttribute(key: "user.id", value: .string("12345"))
    let span = spanBuilder.startSpan()
    // Make the span current for nested operations
    span.makeCurrent()
    // Perform work that might create child spans
    performComplexWork()
    span.end()
    

    Session management

    The observability plugin automatically manages sessions and handles application lifecycle events. Sessions are automatically ended when:

    • The application is backgrounded for longer than the configured sessionTimeout (default: 30 minutes)
    • The application is terminated
    • A new session is explicitly started

    Session timeout configuration

    You can configure how long the plugin waits before ending a session when the app goes to the background:
    Configure session timeout

    let configuration = Configuration(
      sessionTimeout: 45 * 60 // 45 minutes in seconds
    )
    

    Automatic instrumentation

    The observability plugin automatically instruments your iOS application to collect:

    • Application lifecycle events: App start, foreground, background, and termination
    • Session tracking: Automatic session start and end events with timing
    • Network requests: HTTP request/response data when enabled
    • LaunchDarkly SDK events: Feature flag evaluations and SDK operations

    Session Replay

    Session Replay is in Early Access
    Session Replay is available in Early Access. APIs are subject to change until a 1.x version is released.
    Session Replay captures user interactions and screen recordings to help you understand how users interact with your application. Session Replay works as an additional plugin that requires the observability plugin to be configured first.

    Install the Session Replay plugin

    First, add the Session Replay package as a dependency alongside the observability plugin:
    Package.swift, using Swift Package Manager

    //...
    dependencies: [
      .package(url: "https://github.com/launchdarkly/ios-client-sdk.git", .upToNextMinor(from: "9.0.0")),
      .package(url: "https://github.com/launchdarkly/swift-launchdarkly-observability.git", .upToNextMinor(from: "1.0.0")),
    ],
    targets: [
      .target(
        name: "YOUR_TARGET",
        dependencies: [
          "LaunchDarkly",
          .product(name: "LaunchDarklySessionReplay", package: "swift-launchdarkly-observability")
        ]
      )
    ],
    //...
    

    Then, import the Session Replay plugin into your code:

    import LaunchDarkly
    import LaunchDarklyObservability
    import LaunchDarklySessionReplay
    

    Initialize Session Replay

    To enable Session Replay, add the SessionReplay plugin to your SDK configuration alongside the Observability plugin. The Observability plugin must be added before the SessionReplay plugin:
    iOS SDK v9.14+ with Session Replay

    let mobileKey = "mobile-key-123abc"
    var config = LDConfig(
      mobileKey: mobileKey,
      autoEnvAttributes: .enabled
    )
    config.plugins = [
      // Observability plugin must be added before SessionReplay
      Observability(options: .init(
        serviceName: "ios-app",
        sessionBackgroundTimeout: 3)),
      SessionReplay(options: .init(
        isEnabled: true,
        privacy: .init(
          maskTextInputs: true,
          maskWebViews: false,
          maskImages: false,
          maskAccessibilityIdentifiers: ["email-field", "password-field"]
        )
      ))
    ]
    
    let contextBuilder = LDContextBuilder(key: "context-key-123abc")
    guard case .success(let context) = contextBuilder.build() else { return }
    
    LDClient.start(
      config: config,
      context: context,
      startWaitSeconds: 5.0,
      completion: { (timedOut: Bool) -> Void in
        if timedOut {
          // Client may not have the most recent flags for the configured context
        } else {
          // Client has received flags for the configured context
        }
      }
    )
    

    Configure Session Replay privacy options

    The Session Replay plugin provides privacy options to control what data is captured. Configure these options when initializing the plugin:
    Session Replay privacy options

    SessionReplay(options: .init(
      isEnabled: true,
      serviceName: "my-swift-app",
      privacy: .init(
        maskTextInputs: true,
        maskWebViews: false,
        maskLabels: false,
        maskImages: false,
        maskUIViews: [SensitiveView.self],
        ignoreUIViews: [PublicView.self],
        maskAccessibilityIdentifiers: ["email-field", "password-field"],
        ignoreAccessibilityIdentifiers: ["public-label"],
        minimumAlpha: 0.02
      )
    ))
    

    Privacy configuration options

    The PrivacyOptions struct provides the following parameters:

    • maskTextInputs: Mask all text input fields. Defaults to true.
    • maskWebViews: Mask the contents of web views (WKWebView and UIWebView). When this setting is enabled, web views are rendered as blank rectangles in session replays. Defaults to false.
    • maskLabels: Mask all text labels. Defaults to false.
    • maskImages: Mask all images. Defaults to false.
    • maskUIViews: Array of UIView classes to automatically mask in recordings.
    • ignoreUIViews: Array of UIView classes to exclude from masking rules.
    • maskAccessibilityIdentifiers: Array of accessibility identifiers to mask. Use this to mask specific UI elements by their accessibility identifier.
    • ignoreAccessibilityIdentifiers: Array of accessibility identifiers to exclude from masking rules.
    • minimumAlpha: Minimum alpha value for view visibility in recordings. Views with alpha below this threshold are not captured. Defaults to 0.02.

    Fine-grained masking control

    You can override the default privacy settings on individual views using the .ldPrivate() and .ldUnmask() methods. This allows precise control over what is captured in session replays.

    SwiftUI view masking

    Use view modifiers to control masking for SwiftUI views:

    import SwiftUI
    import LaunchDarklySessionReplay
    
    struct ContentView: View {
      @State private var email = ""
      @State private var shouldMaskEmail = true
    
      var body: some View {
        VStack {
          // Mask this specific view
          Text("Sensitive information")
            .ldPrivate()
    
          // Unmask this view (even if it would be masked by default)
          Image("profile-photo")
            .ldUnmask()
    
          // Conditionally mask based on a flag
          TextField("Email", text: $email)
            .ldPrivate(isEnabled: shouldMaskEmail)
        }
      }
    }
    

    UIKit view masking

    Use the .ldPrivate() and .ldUnmask() methods on UIView instances:

    import UIKit
    import LaunchDarklySessionReplay
    
    class CreditCardViewController: UIViewController {
      let cvvField = UITextField()
      let nameField = UITextField()
      let cardNumberField = UITextField()
    
      override func viewDidLoad() {
        super.viewDidLoad()
    
        // Mask the CVV field
        cvvField.ldPrivate()
    
        // Unmask the name field (even if text inputs are masked by default)
        nameField.ldUnmask()
    
        // Conditionally mask based on a flag
        cardNumberField.ldPrivate(isEnabled: true)
      }
    }
    

    Explore supported features

    The observability plugin supports the following features. After the SDK and plugins are initialized, you can access these from within your application:

    • Configuration for client-side observability
    • Errors
    • Logs
    • Metrics
    • Tracing

    Review observability data in LaunchDarkly

    After you initialize the SDK and observability plugin, your application automatically starts sending observability data back to LaunchDarkly in the form of custom events and OpenTelemetry data. You can review this information in the LaunchDarkly user interface. To learn how, read Observability.
    The observability data collected includes:

    • Error monitoring: Unhandled exceptions, crashes, and manually recorded errors with stack traces
    • Logs: Application logs with configurable severity levels and custom attributes
    • Traces: Distributed tracing data including span timing, nested operations, and custom instrumentation
    • Metrics: Performance metrics, custom counters, histograms, and gauge measurements
    • Session data: User session information including lifecycle events and timing

    Specifically, the observability data includes events that LaunchDarkly uses to automatically create the following metrics:

    • User error rate and crash frequency
    • Application performance metrics (launch time, session duration)
    • Feature flag evaluation context and timing
    • Custom business metrics recorded through the SDK

    To learn more about autogenerated metrics, read Metrics autogenerated from observability events.
    Original source Report a problem
  • November 2025
    • No date parsed from source.
    • Detected by Releasebot:
      Nov 20, 2025
    LaunchDarkly logo

    LaunchDarkly

    Guarded rollout errors

    LaunchDarkly rolls out early access observability for guarded rollouts, including regression detection and a built‑in User error rate metric. Enable observability SDKs and view regressions in the flag Targeting or Monitoring tabs to pinpoint front‑end errors.

    This feature is in Early Access

    LaunchDarkly’s observability features are publicly available in early access.
    They require the LaunchDarkly observability SDKs and the JavaScript, React Web, or Vue SDK.
    If you are interested in participating in the Early Access Program for our upcoming observability plugins for server-side SDKs, sign up here.

    Guarded rollouts availability

    All LaunchDarkly accounts include a limited number of guarded rollouts. Use these to evaluate the feature in real-world deployments.

    Overview

    This topic explains how to investigate and debug front-end user errors in your guarded rollouts.
    When you start a guarded rollout, LaunchDarkly detects if the change is having a negative impact on your audience. This negative effect is called a “regression.” You can use LaunchDarkly debugging tools to understand and mitigate the source of certain front-end errors observed during the regression.

    Metric availability

    The debugging feature is available only for autogenerated metrics with the event key $ld:telemetry:error. LaunchDarkly automatically creates this metric, typically named “User error rate (LaunchDarkly),” though the name may differ depending on your account setup.
    This autogenerated metric is available through LaunchDarkly observability SDKs. You can enable these when you initialize the LaunchDarkly client in your app. To learn how, read about LaunchDarkly’s observability SDKs.
    After you have configured the SDK, you must add the “User error rate (LaunchDarkly)” metric to your guarded rollout when you create the rollout to use the debugging feature.

    View regressions

    To view regressions on a guarded rollout, navigate to the flag’s Targeting or Monitoring tabs. If the guarded rollout is tracking the “User error rate (LaunchDarkly)” metric, and if the new variation produces more errors than the old variation, then LaunchDarkly detects the regression and a “User error rate (LaunchDarkly) regressed” message appears.
    Click Show errors to expand the metric errors list:
    A guarded rollout with the error panel expanded.
    Click on the error name to navigate to the error group related to that error. To learn more, read View errors.

  • Oct 30, 2025
    • Parsed from source:
      Oct 30, 2025
    • Detected by Releasebot:
      Oct 31, 2025
    LaunchDarkly logo

    LaunchDarkly

    Changelog: LLM Observability, Experimentation Discussions and Heatmaps!

    LaunchDarkly introduces LLM Observability for production AI apps, offering performance and semantic insights, heatmaps, and experiment discussions for cross‑team collaboration. The release adds Developer Toolbar v1.0, UI tweaks, tracing hooks, logs, guardrails, and bug fixes to boost observability and reliability.

    LLM Observability

    LLM Observability gives teams visibility into how GenAI applications behave in production. It tracks not only performance metrics like latency and error rates, but also semantic details, including prompts, token usage, and responses. With LaunchDarkly’s LLM Observability, you can debug, monitor, and improve both the performance and quality of your AI-driven features.

    Observability Heatmaps

    Heatmaps surface where users engage most (or least), revealing usability issues and validating design decisions. They turn raw behavioral data into actionable insights for product, design, and growth teams—without requiring deep analytics setup.
    LaunchDarkly heatmaps are a visualization layer that aggregates user interactions, like taps, clicks, or scrolls, into color-coded intensity maps over your app or website.
    Get started by installing the web session replay plugin to create your first heatmap (dashboards page).

    Experiment Discussions

    Experiment discussions bring contextual collaboration directly into LaunchDarkly experiments. Product, engineering, and data teams can plan, monitor, and analyze experiments with built-in comments to help connect the dots. All stakeholders can now discuss experiment setup, results, and decisions right alongside the experiment data, alleviating the need to jump between external tools.

    Other improvements

    • Released v1.0 of the Developer Toolbar, which allows for easy adoption regardless of frontend stack
    • New date range picker added to Observability screens
    • New c++ tracing hook for Observability traces
    • New log rows detail and search viewer
    • New event count display for Event Explorer
    • User-defined and warehouse-native categories added to the metric selection menu
    • Judge variations added to AI Config flag variations
    • Health checks added to progressive rollouts
    • New columns added to guarded releases store
    • Support infinite critical environments
    • Updated flag lifecycle flag status calculations

    Bug fixes

    • Empty state added for when session is missing
    • Updated secondary metric for time to completion
    • Fixed flag audience variation picker max width
    • Fixed variation exposures display
    • Fixed AI Configs targeting page caching issue
    • Fixed funnel steps not being added upon selection
  • Oct 16, 2025
    • Parsed from source:
      Oct 16, 2025
    • Detected by Releasebot:
      Oct 17, 2025
    LaunchDarkly logo

    LaunchDarkly

    Changelog: Developer toolbar beta, AI SDK provider packages, and faster experimentation

    LaunchDarkly rolls out Public Beta for the Developer Toolbar, enabling local development with real‑time flag insights and easy local overrides. It also debuts AI SDK packages with provider‑specific support, a TrackedChat example, Experiment cloning across environments, and Metrics and Attributes Anytime for mid‑experiment updates.

    LaunchDarkly Developer Toolbar in Public Beta

    The LaunchDarkly Developer Toolbar is now available in public beta! The toolbar lets you use LaunchDarkly directly in your local development workflow, with no need to switch between your IDE and the LaunchDarkly UI.
    You can view feature flags in use, override flag values locally, and listen for incoming events like flag evaluations in real time. You can also identify missing flags and link directly to your LaunchDarkly project to create them.
    This is a big step toward making it easier for you to develop and test with LaunchDarkly locally.
    Get started with the Developer Toolbar documentation

    LaunchDarkly AI SDK (JavaScript): new provider-specific packages

    Three new AI provider-specific packages are now available for the LaunchDarkly AI SDK in JavaScript:

    • @launchdarkly/server-sdk-ai-openai
    • @launchdarkly/server-sdk-ai-langchain
    • @launchdarkly/server-sdk-ai-vercel

    These packages provide reliable version management and let you work seamlessly within your preferred AI frameworks. A new TrackedChat example also shows how to build conversational AI with automatic provider switching.
    These packages are in early access and require Node.js 16+, provider-specific dependencies, and API keys for each provider.
    AI SDK documentation
    TrackedChat example

    Experiment cloning

    You can now clone experiments across environments in LaunchDarkly Experimentation! Easily copy an existing experiment design from one environment to another without rebuilding it from scratch.
    This highly requested feature makes it faster and easier to run consistent experiments across multiple environments, reducing setup time and effort.
    Learn more in the Experiment cloning documentation.

    Metrics and Attributes Anytime

    Experimentation customers can now add metrics or attributes to an ongoing experiment—no need to restart or recreate it.
    This update gives teams more flexibility and removes the pain of re-randomization when they want to track new measures or slice results by additional attributes mid-experiment.
    Metrics and Attributes Anytime is available for hosted experimentation in all regions (excluding Snowflake-native experiments).
    See our documentation for Metrics and Attributes in Experiments for more details

    Other improvements

    • Improved experiment decision summary in product and exported reports
    • New checkout flow for trials
    • Enhanced guarded rollout metrics
    • New observability store for monitoring

    Bug fixes

    • Resolved analytics tab loading issues on experiment pages
    • Fixed cohorts page dimension filter issue
    • Multiple navigation and UI overflow fixes
  • Oct 2, 2025
    • Parsed from source:
      Oct 2, 2025
    • Detected by Releasebot:
      Oct 3, 2025
    LaunchDarkly logo

    LaunchDarkly

    Changelog: Improved search experience, experiment health checks, PDF export, and more...

    LaunchDarkly unveils a faster in‑product search with cmd/ctrl+K, plus new experiment health checks and PDF export for results. AI/observability tweaks, enhanced experiment filters, and an archive action streamline workflows, with targeted bug fixes for smoother operation.

    New In-Product Search Experience

    LaunchDarkly now has an improved in-product search experience. The familiar cmd+K / ctrl+K shortcut now launches a sleeker, faster, and more reliable search modal. Users will enjoy the same functionality they’re used to—like searching across projects, environments, flags, and docs—plus new additions such as quick access to keyboard shortcuts and a one-click create-flag option. This update lays the foundation for even more powerful search and action capabilities in the future.
    To access this search functionality, use the cmd+K / ctrl+K keyboard shortcut.

    Experiment Health Checks

    We’ve added automated Health Checks directly into Experiments to give you confidence that your experiment is set up and running as expected. With Health Checks, LaunchDarkly automatically reviews your experiment’s configuration and notifies you if anything needs attention. Each alert comes with clear, actionable documentation so you can quickly understand the issue and take the right steps. Detect issues earlier, react easily, and get your experiments running faster!
    Learn more in the Experiment health checks documentation

    Experiment PDF Export

    Easily share the results of your experiment with anyone as a PDF. In LaunchDarkly Experimentation, just click "Download PDF" on the experiment results tab to download a sleek, formatted, data-rich PDF that outlines the details of your experiment - complete with key takeaways and experiment results.

    Other improvements

    • Added LLM trace support to Observability traces
    • Added claude-sonnet-4-5 to the global AI model configurations for AI Configs
    • Added process context attributes as an experiment filter option for experiments and holdouts
    • Implemented an action menu + archive action on the Test Runs previous runs UI for AI Configs

    Bug fixes

    • Added "Duration" column to the ExperimentDetailsIterarionsCard table
    • Fixed Approvals UI for custom approvals
    • Fixed “declined” error status so it appropriately shows the user what failed
    • Updated iteration details to use correct iteration analysis
  • Sep 18, 2025
    • Parsed from source:
      Sep 18, 2025
    • Detected by Releasebot:
      Sep 27, 2025
    LaunchDarkly logo

    LaunchDarkly

    Changelog: Release Policies (Early Access), Observability in self-serve, and Logs and Traces improvements

    LaunchDarkly rolls out Release Policies in Early Access to simplify default releases, adds Observability in self-serve plans, and ships major Logs & Traces improvements (Live Mode, editable graphs) plus UI refreshes and bug fixes.

    Release Policies (Early Access)

    We’ve introduced Release Policies in Early Access to make Guardian easier to use as the default release method, one of the top requests from Guardian customers.
    The first version lets you:

    • Define a scope for a policy (currently environments, with more options coming)
    • Set a preferred release method (for example: “On production, Guarded Releases is the default”)
    • Choose automatic or manual rollbacks
    • Configure minimum sample size requirements (helpful for lower-traffic environments)

    Release Policies will expand over the next few months alongside Guardrail Metrics, Global Metric Thresholds, and Release Templates, working toward our goal of zero-configuration Guardian rollouts.
    Early Access is limited; reach out to your account team if you’d like to participate.

    Observability now available in self-serve plans

    Observability (Session Replays, Errors, Logs, Traces) is now included in Foundation self-serve plans on a usage-based model.
    New customers can now add Observability directly in checkout — no sales conversation required — and existing customers will also have access.
    Highlights:

    • New trial onboarding flow for observability plug-ins and SDK setup
    • Updated pricing page with a usage calculator
    • Checkout support for usage-based or annual commit billing
    With Observability in self-serve plans, teams can quickly debug issues, monitor performance, and get more value from feature flags, all without leaving LaunchDarkly.
    See the documentation to get started or, if you are a self-serve customer, add Observability in checkout.

    Logs and Traces improvements

    We’ve shipped two major updates to make debugging and analysis in Logs and Traces faster and more powerful:

    • Live Mode: stream events in real time, no manual refresh required. Perfect for validating new instrumentation or monitoring fast-moving incidents.
    • Editable graph views: create and edit graphs directly in Logs and Traces without setting up a dashboard first. Switch graph types, run aggregations, and add up to two custom graphs per view.

    Together, these updates help you move from “something looks off” to “here’s what’s happening and where” with fewer clicks, less overhead, and more confidence in what you’re seeing.
    Learn more in the Logs and Traces documentation.

    Other improvements

    • Refreshed the Metric Groups UI with standardized components for a cleaner experience
    • Added a new metric details tab showing configuration and connections upfront
    • Introduced a tabbed connections table that splits connections across Experiments, Guarded Releases, and Metric Groups
    • Multi-armed bandits dashboard refresh
    • Refinements to the holdout builder and results view for Experimentation
    • Improved metric accuracy by including events only from the selected environment

    Bug fixes

    • Updated error notifications for context kind creation
    • Fixed a bug with newly created rules in AI Configs targeting
    • Fixed display of empty stack traces
    • Fixed metric preview edge cases where event data could show incomplete results
  • Sep 4, 2025
    • Parsed from source:
      Sep 4, 2025
    • Detected by Releasebot:
      Sep 27, 2025
    LaunchDarkly logo

    LaunchDarkly

    Changelog: Relative difference charts, custom metric filters, AI Config approval updates, and more

    Guarded Releases adds relative difference charts, custom-metric filtering, smarter OTel filtering, AI Config approvals, and multiple comparison corrections. Also brings a System theme, Scoped Go SDK, an experimentation sandbox, visualization tweaks, and bug fixes.

    Relative difference charts

    We’ve added relative difference charts to Guarded Releases, making it easier to see how metrics shift during a rollout. Instead of abstract probabilities, you’ll now see a direct comparison of your metric across control and treatment variations, answering the real question:
    “Did errors go up compared to before?”
    This view is more intuitive, familiar to analysts and engineers, and makes Guardian rollouts easier to understand or explain to your team.

    Filters for custom metric events

    We’ve made it simple to measure what matters. You can now filter custom metric events by event properties or context attributes directly in LaunchDarkly. For example: instead of instrumenting a separate error event for each page type, you can send one (error_event) with (page_type) as a property and then define distinct metrics per page right in the LaunchDarkly app.
    This update simplifies instrumentation, speeds up metric creation, and adds flexibility across both Release Guardian and Experimentation. Check out the documentation or blog post to get started.

    Smarter data filtering in OTel and Observability SDKs

    Sending all observability data can be expensive and noisy. With inExperiment filtering, you can now configure your OTel collector or Observability SDKs to send data only for flags running a Guarded Release or Experiment. This reduces cost, avoids dual-shipping unnecessary data, and makes monitoring more efficient at scale.
    Recommended collector configuration is available in our documentation.

    Approvals updates for AI Configs

    You can now require reviews and approvals for changes to AI Config variations and targeting rules before they go live. Approvals help keep AI behavior safe, transparent, and aligned with team standards. They make it easier to collaborate with teammates, add accountability to high-stakes updates, and stay informed with notifications on requests and decisions.
    Get started with AI Config Approvals in our documentation

    Multiple comparison corrections

    Experimentation results are now more statistically rigorous. LaunchDarkly automatically applies multiple comparison corrections when analyzing experiments with many metrics or variations. This reduces false positives and ensures your reported wins are reliable even in complex tests. Now teams can move faster with confidence, knowing their decisions are backed by best-practice statistical methods.
    Get started with multiple comparison corrections in our guide.

    Other improvements

    • Added a new “System” theme option that auto-matches your operating system preference
    • Introduced Scoped Clients in the Go SDK for more flexible client management
    • Released Experimentation demo sandbox for safer, hands-on testing
    • Optionally hide child spans in the Traces view
    • Configurable visualization types in Logs and Traces views
    • Display badge for external data sources on metrics list

    Bug fixes

    • Fixed display of roles with nil base permissions in UI
    • Fixed a bug with newly created rules in AI Configs targeting
    • Fixed display of empty stack traces
    • Fixed unintentional text wrapping on clone flag dialog

Related vendors