LaunchDarkly Release Notes

Last updated: Apr 7, 2026

  • April 2026
    • No date parsed from source.
    • First seen by Releasebot:
      Apr 7, 2026

    LaunchDarkly

    Restoring previous flag versions

    LaunchDarkly adds the ability to restore a feature flag to a previous version from change history, with code diffs, previews, and safeguards for identical versions and older or restricted states.

    Overview

    This topic explains how to use the change history tab to restore a feature flag to a previous version.

    Flag versions in change history

    You can restore a feature flag to a previous version from its change history.

    Versions increment based on your actions. For example, if you are on version 5 and restore version 3, version 3 is brought forward and becomes version 6. The restore workflow shows version numbers, code diffs, and a preview of the new version. You cannot restore a version that is identical to the current version. If you try to restore an identical version, the diff is abbreviated to only show the updated timestamps and version numbers, and restoring is disabled.

    To restore a previous version, visit the change history page.

    Limitations

    Flag version restores are limited to flag configuration changes in the current environment; they do not include flag variations or global flag settings. There are additional restrictions if a flag is part of an experiment, rollout, or scheduled change. Here is a list of limitations for restoring previous flag versions:

    • You can only restore states that existed within the last 30 days
    • You can’t restore a version with an expired target date. For example, if the current date is June 1, 2026 and you attempt to restore a version that had a change scheduled to take effect on May 25, 2026, you cannot restore this version because the date of the scheduled change has already passed.
    • You can’t roll back to states controlled by experiments, guarded rollouts, or progressive rollouts

    Restore a previous flag version

    Here’s how to restore a previous flag version:

    1. Navigate to the flag’s change history page and find the version you want to restore. Versions that can be restored are indicated with a button that has “Restore previous version” hover text.
    2. Click the version restore button in the row for the version you wish to restore. A code diff appears showing the changes between the current version and the version you intend to restore.
    3. Verify that the changed version does what you expect and click Stage this version. A preview appears.
    4. Click Review and save. A confirmation dialog appears.
    5. Type the environment’s name to confirm and click Save changes.

    The earlier flag version is now the current version.

  • March 2026
    • No date parsed from source.
    • First seen by Releasebot:
      Mar 17, 2026

    LaunchDarkly

    Datadog Agent ingestion

    LaunchDarkly releases early access observability integration with Datadog, enabling traces, metrics and logs via OpenTelemetry Collector. The guide covers enabling in UI, Datadog Agent configs, dual shipping, and attaching flag context to traces for guarded rollouts.

    This feature is in Early Access
    LaunchDarkly’s observability features are publicly available in early access. Enable observability in the billing page.
    They currently require the LaunchDarkly observability SDKs and the JavaScript, React Web, or Vue SDK.
    If you are interested in participating in the Early Access Program for our upcoming observability plugins for server-side SDKs, sign up here.

    Overview

    This topic explains how to send traces, metrics, and logs from the Datadog Agent to LaunchDarkly’s observability features.
    LaunchDarkly’s OpenTelemetry Collector includes a Datadog-compatible receiver. If you already run the Datadog Agent in your infrastructure, you can configure it to send APM traces, metrics, and logs to LaunchDarkly without changing your application instrumentation. Once received, this telemetry is available in Traces, Logs, and the observability dashboards in the LaunchDarkly UI.

    Prerequisites

    Before you configure Datadog Agent ingestion, you must:

    • Have LaunchDarkly observability enabled for your project
    • Have the Datadog Agent v6.0 or later installed and running in your infrastructure
    • Know your LaunchDarkly client-side ID. To find your client-side ID:
      • In the LaunchDarkly UI, click the project dropdown to open the project menu.
      • Select Project settings.
      • Click Environments.
      • Find the environment you want to use and copy the Client-side ID value.

    Configure the Datadog Agent

    To send telemetry to LaunchDarkly, configure the Datadog Agent to use LaunchDarkly’s Datadog-compatible endpoint. You can replace the standard Datadog intake, or send data to both Datadog and LaunchDarkly simultaneously using dual shipping.

    Endpoint:
    otel.observability.app.launchdarkly.com:8126

    Configure using the agent configuration file

    To configure the Datadog Agent using the datadog.yaml configuration file, include these lines:

    datadog.yaml configuration
    apm_config:
      enabled: true
      apm_dd_url: http://otel.observability.app.launchdarkly.com:8126
    
    logs_config:
      logs_dd_url: otel.observability.app.launchdarkly.com:8126
      use_http: true
    
    dd_url: http://otel.observability.app.launchdarkly.com:8126
    

    Configure using environment variables

    If you run the Datadog Agent in a container, you can use environment variables to provide the configuration:
    Environment variable configuration
    export DD_APM_ENABLED=true
    export DD_APM_DD_URL=http://otel.observability.app.launchdarkly.com:8126
    export DD_DD_URL=http://otel.observability.app.launchdarkly.com:8126

    Configure using Docker Compose

    If you run the Datadog Agent as a Docker container, add the configuration to your
    docker-compose.yml
    file:

    docker-compose.yml configuration
    services:
      datadog-agent:
        image: gcr.io/datadoghq/agent:latest
        environment:
          - DD_APM_ENABLED=true
          - DD_APM_DD_URL=http://otel.observability.app.launchdarkly.com:8126
          - DD_DD_URL=http://otel.observability.app.launchdarkly.com:8126
          - DD_API_KEY=placeholder
    

    DD_API_KEY is required
    The Datadog Agent requires a DD_API_KEY value to start, even when routing to a non-Datadog endpoint. You can use any non-empty placeholder string. LaunchDarkly does not validate or use this value.

    Sending data to both Datadog and LaunchDarkly

    The examples above replace the default Datadog intake with LaunchDarkly’s endpoint. If you want to continue sending data to Datadog while also sending it to LaunchDarkly, you can configure the Datadog Agent to dual ship telemetry to both destinations.
    Dual shipping lets you keep your existing Datadog dashboards, alerts, and workflows while also using LaunchDarkly’s observability features and guarded rollouts.
    To learn how to configure dual shipping, read Dual Shipping in the Datadog documentation.
    When configuring dual shipping, use http://otel.observability.app.launchdarkly.com:8126 as the additional endpoint for APM traces, metrics, and logs.

    Associate telemetry with your LaunchDarkly project

    LaunchDarkly uses the launchdarkly.project_id resource attribute to route telemetry to the correct project. Set this to your LaunchDarkly client-side ID.
    You can set this attribute in your application’s OpenTelemetry SDK configuration, or use the OTEL_RESOURCE_ATTRIBUTES environment variable for services that send data through the agent:
    Environment variable
    export OTEL_RESOURCE_ATTRIBUTES="launchdarkly.project_id=YOUR_CLIENT_SIDE_ID"
    Alternatively, if you use the OpenTelemetry Collector in front of the Datadog Agent, you can inject this attribute using a resource processor. To learn more, read OpenTelemetry in server-side SDKs.
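    If you take the Collector route, the injection can be sketched with the stock resource processor. This is a minimal fragment assuming the standard OpenTelemetry Collector resource processor configuration; YOUR_CLIENT_SIDE_ID is a placeholder for your own value:

```yaml
processors:
  resource:
    attributes:
      # Route all telemetry passing through this pipeline
      # to your LaunchDarkly project.
      - key: launchdarkly.project_id
        value: YOUR_CLIENT_SIDE_ID
        action: upsert
```

    Add the processor to the pipelines that feed LaunchDarkly so every span, metric, and log record carries the routing attribute.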

    Attaching feature flag context to traces

    To use Datadog trace data with LaunchDarkly features like guarded rollouts and autogenerated metrics, your traces must include feature flag evaluation data. LaunchDarkly uses this data to correlate traces with specific flag evaluations and contexts.
    You can attach feature flag context to your Datadog traces by creating a custom SDK hook. The hook instruments each flag evaluation as a child span on your existing Datadog traces, adding attributes that LaunchDarkly uses to connect traces to flag evaluations.
    The hook must set the following span attributes on each flag evaluation:

    • feature_flag.key: The key of the evaluated flag.
    • feature_flag.context.id: The fully-qualified key of the LaunchDarkly context.
    • feature_flag.contextKeys: A JSON object mapping each context kind to its key, for example {"user":"user-123"}.
    • feature_flag.provider.name: Set to LaunchDarkly.
    • feature_flag.result.value: The string representation of the evaluation result.

    Go example

    This example uses a BeforeEvaluation hook to start a Datadog span with flag metadata, and an AfterEvaluation hook to record the result and finish the span.

    Configuring the Datadog tracer

    Configure the Datadog tracer in your application to send traces to LaunchDarkly’s Datadog-compatible endpoint. Set the X-LaunchDarkly-Project global tag to your LaunchDarkly project ID so that traces are routed to the correct project:
    Go tracer configuration

    import "github.com/DataDog/dd-trace-go/v2/ddtrace/tracer"
    ...
    tracer.Start(
      tracer.WithAgentURL("http://otel.observability.app.launchdarkly.com:8126"),
      tracer.WithService("your-service-name"),
      tracer.WithEnv("production"),
      tracer.WithGlobalTag("X-LaunchDarkly-Project", "YOUR_PROJECT_ID"),
    )
    defer tracer.Stop()
    

    Using flag data with guarded rollouts

    After you configure the hook and tracer, LaunchDarkly automatically processes your Datadog trace data and associates it with the feature flags and contexts in your evaluations. This enables you to:

    • Use autogenerated OpenTelemetry metrics, such as HTTP error rates and latencies, with guarded rollouts
    • Monitor how flag changes impact your application performance in Traces
    • Correlate errors and latency regressions with specific flag evaluations and context attributes

    To learn more about setting up guarded rollouts, read Creating guarded rollouts.

    What data is collected

    The Datadog Agent sends the following telemetry types to LaunchDarkly:

    • Traces: APM traces from your instrumented services, visible in Traces
    • Metrics: Infrastructure and application metrics
    • Logs: Application logs collected by the agent log collection feature, visible in Logs

    Verify that data is being received

    After you configure the Datadog Agent, telemetry begins flowing to LaunchDarkly.
    To verify that traces are being received:

    • In the LaunchDarkly UI, expand Observe in the left navigation.
    • Click Traces.
    • Look for traces from your Datadog-instrumented services.

    To verify that logs are being received:

    • In the LaunchDarkly UI, expand Observe in the left navigation.
    • Click Logs.
    • Look for logs from your services.

    It may take a few minutes for data to appear after you first configure the agent.

    Filtering ingested data

    You can configure ingestion filters to control which logs are stored in LaunchDarkly. This is useful for reducing noise or staying within your observability quotas.
    To learn more, read Filters.


  • March 2026
    • No date parsed from source.
    • First seen by Releasebot:
      Mar 17, 2026

    LaunchDarkly

    Guarded rollouts

    LaunchDarkly introduces guarded rollouts with absolute-difference metrics, automatic rollback on regression, and minimum context checks for flag and AI Config changes. It adds monitoring tiles, clearer regression alerts, and guidance on viewing active guarded rollouts and metrics integrations.

    Guarded rollouts availability

    Guarded rollouts are available to customers on a Guardian plan. To learn more, read about our pricing. To upgrade your plan, contact Sales.

    All LaunchDarkly accounts include a limited trial of guarded rollouts. Use this to evaluate the feature in real-world releases.

    Overview

    This topic explains how to monitor metrics on flag and AI Config releases and configure LaunchDarkly to take action on the results.

    An active guarded rollout on a flag change.

    When you begin serving a new flag or AI Config variation, such as when you toggle a flag on or update the default rule variation, you can add a guarded rollout. A guarded rollout progressively increases traffic to the new variation while monitoring selected metrics for regressions until the rollout reaches 100%. If LaunchDarkly detects a regression before the rollout reaches 100%, it can pause the rollout and send a notification.

    In a guarded rollout, each metric appears in its own tile. Each tile includes a difference chart that shows how the new variation compares to the original variation over time. The dark grey line represents the absolute difference, and the shaded grey area represents the absolute difference’s confidence interval.

    LaunchDarkly identifies a regression when sequential testing determines that the absolute difference represents a statistically significant negative impact on a monitored metric.

    On the chart, this occurs when the confidence interval falls entirely on the side of worse performance based on the metric’s success criteria. For lower-is-better metrics, the confidence interval lies above zero. For higher-is-better metrics, the confidence interval lies below zero.

    Legacy relative difference

    Previous versions of guarded rollouts supported relative difference, which measured change as a percentage relative to the original variation. Guarded rollouts now use absolute difference for all analyses. Relative difference is no longer supported.

    When a regression is detected, the metric tile highlights the regression. If automatic rollback is enabled, LaunchDarkly also rolls back the release.

    Minimum context requirement for guarded rollouts

    A new flag or AI Config variation must be evaluated by a minimum number of contexts during each step of a guarded rollout. If this requirement is not met, LaunchDarkly automatically rolls back the change.

    Guarded rollouts are one of several options that LaunchDarkly provides to help you release features safely and gradually. To learn about other release options, read Releasing features with LaunchDarkly.

    You can create a guarded rollout on any targeting rule, as long as no other guarded rollouts, progressive rollouts, or experiments are running on the flag or AI Config, and the flag is not a migration flag.

    View flags with guarded rollouts

    To view flags that currently use or previously used a guarded rollout, click Guarded rollouts in the left navigation.

    Use the Filters menu to filter the list by rollout status. Navigate to the flag’s Monitoring tab to view and manage the rollout.

    AI Configs with guarded rollouts do not appear on the Guarded rollouts list.

    Metrics and guarded rollouts

    Metrics track system health indicators and end-user behavior, such as errors, latencies, clicks, and conversions. When you attach metrics to a flag or AI Config change, you can measure how the new variation affects those metrics during the rollout.

    You can connect metrics to LaunchDarkly in several ways:

    • Use one of our metrics integrations.
    • Call the metric import API.
    • Use a LaunchDarkly SDK to send custom events and connect them to metrics.
    • Enable OpenTelemetry in a LaunchDarkly SDK and send traces to LaunchDarkly to autogenerate metrics.

    To learn more, read Metrics.

    Regressions

    When you attach metrics to a flag or AI Config and start a guarded rollout, LaunchDarkly compares the performance of the new variation to the original variation.

    A regression is a statistically significant negative impact on a monitored metric. Release Guardian determines this by measuring the absolute difference between the new and original variations and applying sequential testing.

    You can configure LaunchDarkly to notify you of a regression, or to notify you and automatically roll back the release when a regression is identified.

    To learn how to investigate regressions in your guarded rollouts, read Guarded rollout errors.

  • Jan 13, 2026
    • Date parsed from source:
      Jan 13, 2026
    • First seen by Releasebot:
      Jan 23, 2026

    LaunchDarkly

    What's new

    LaunchDarkly bundles new guides on the developer toolbar and experiment methodologies with a .NET SDK 8.11 update. It also adds deep linking for sessions, Android privacy tweaks, and improved testing flag previews.

    Release notes

    • January 13, 2026: Publishes a topic on using the developer toolbar. Affected topics: Using the LaunchDarkly developer toolbar

    • January 12, 2026: Publishes a topic about choosing a statistical methodology for experiments. Affected topics: Choosing a statistical methodology

    • January 12, 2026: Updates the data saving mode EAP topic with information about using the .NET (server-side) SDK version 8.11. Affected topics: Data saving mode

    • January 9, 2026: Adds documentation for deep linking to session search queries and linking to specific sessions by ID with timestamps. Affected topics: Session replay

    • January 9, 2026: Updates Android observability SDK privacy settings: renames maskSensitive to maskBySemanticsKeywords, changes maskText default to false, and removes maskAdditionalMatchers option. Affected topics: Android SDK observability reference, Configuration for session replay

    • January 7, 2026: Updates the testing flag changes topic with information about previewing the percentage of contexts that will receive a variation. Affected topics: Testing changes to flag targeting

  • December 2025
    • No date parsed from source.
    • First seen by Releasebot:
      Dec 19, 2025

    LaunchDarkly

    Observability settings

    LaunchDarkly launches early access observability features for sessions, errors, logs and traces. Enable observability from the billing page and tune filtering, rage-click sensitivity, sourcemaps, and auto-resolve with rule-based ingestion controls.

    This feature is in Early Access

    LaunchDarkly’s observability features are publicly available in early access. Enable observability in the billing page.
    They currently require the LaunchDarkly observability SDKs and the JavaScript, React Web, or Vue SDK.
    If you are interested in participating in the Early Access Program for our upcoming observability plugins for server-side SDKs, sign up here.

    Overview

    This topic describes the project-level settings available for sessions, errors, logs, and traces.
    In the left navigation of the LaunchDarkly UI, expand Observe to view them.
    To view or update project-level settings for these features:

    • Click the project dropdown to open the project menu.
    • Select Project settings.
    • Click Observability. The Observability settings page appears.

    The following sections describe the available settings.

    Session settings

    You can configure the following settings for sessions in your project:

    • Excluded users. This setting excludes sessions from particular end users, based on their context key or email address.
    • Rage clicks. These settings adjust the sensitivity for detecting “rage clicks,” or occasions when end users repeatedly click an element in your application, indicating frustration. You can set the Elapsed time, Radius, and Minimum clicks. These settings control whether a search for session replays that uses the has_rage_clicks attribute will return a given session. By default, LaunchDarkly considers end-user activity a rage click when there is a two-second or longer period in which an end user clicks five or more times within a radius of eight pixels.

    Click Save to save your settings.

    Error settings

    You can configure the following settings for errors in your project:

    • Sourcemaps. If you have uploaded sourcemaps, you can view them here.
    • Auto-resolve stale errors. When enabled, this setting automatically sets the status of an error to “Resolved” after the time period you select.

    Click Save to save your settings.

    Filters

    Filters help you manage the ingestion of sessions, errors, logs, or traces that you send to LaunchDarkly. This is useful if you know that certain signals are not relevant to your application or are not actionable. Any excluded signals do not count against your observability quotas.
    To configure ingestion filters:

    • Navigate to the Observability project settings page.
    • From the Filters section, click Edit next to the type of signal you want to configure.
    • (Optional) Configure filter rules to manage ingestion of sessions, errors, logs, or traces.
    • (Optional) Set the Max ingest per minute. This setting rate limits the maximum number of data points ingested in a one-minute window. For example, you may configure a rate limit of 100 per minute. This lets you limit the number of data points recorded in case of a significant spike in use of your application.
    • Click Save.

    Rule evaluation order

    Rules are evaluated in order, from top to bottom. Drag and drop the rules to reorder them to fit your project’s needs. The first enabled rule that matches the criteria applies its filter operation and rate.

    Rules

    To add a filter rule:

    • Click Add rule.
    • Set a rule name.
    • Review the filter rule operation. Exclusion rules are used for sessions, errors, and logs. Inclusion rules are used for traces. You cannot change these settings.
    • Set a query:
      • Click the Filter… placeholder and select an attribute from the dropdown. For example, you can filter sessions based on active_length.
      • Select an operator from the dropdown. For example, you can filter by greater than, >.
      • Enter a value for your expression. For example, you can enter 8s for eight seconds.
    • Set the rule’s rate (%). For each signal that LaunchDarkly receives, it makes a randomized decision according to the rule’s rate whether to apply the include or exclude filter operation.
      • For example, if an exclusion rule has a 20% rate, then 20% of the signals that match the rule’s query are excluded and the remaining 80% are included.
    • Set the rule On or Off to enable or disable the rule.
    • Click Save.

    Records with no matching rules

    If a signal does not match any rule’s query, then LaunchDarkly includes it.

    Here is an example of multiple log filter rules:
    An example of multiple log filter rules.

    Here is how rule order controls rule evaluation:

    • Logs with level=ERROR and service_name=example-service are always included: the first rule matches, so the second and third rules are not reached.
    • Logs with level=DEBUG and service_name=example-service are always excluded: the first rule does not match, the second rule matches, so the third rule is not reached.
    • Logs with level=INFO and service_name=example-service are 80% excluded and 20% included: the first and second rules do not match, and the third rule matches.
    • Logs with level=INFO and service_name=new-service are always included: no rule matches, so the log is ingested.
  • December 2025
    • No date parsed from source.
    • First seen by Releasebot:
      Dec 16, 2025

    LaunchDarkly

    Meet the new navigation in LaunchDarkly

    LaunchDarkly unveils a cleaner, more focused navigation with collapsible sections, simplified visuals, faster shortcuts, a refined Create action, and improved search. The update reduces visual noise and helps teams move faster, now available to all users.

    Here’s what’s new

    A cleaner, more focused navigation reduces noise and helps you move faster.

    It’s been about a year and a half since we introduced a new and improved LaunchDarkly experience, a major redesign that unified environments, improved navigation, and created more clarity across the app.

    Since then, our platform has expanded, and our navigation has expanded with it. Over time, the number of items and icons multiplied, and things started to feel a little overwhelming. Our customers provided consistent commentary:
    “I can’t see what matters most.”
    “The shortcuts are buried.”
    “It’s powerful, but it’s a lot.” We prioritized a refresh based on this feedback.

    This update makes the navigation cleaner, more focused, and better aligned with how you actually work. It helps keep what’s important in view and gives you back control of your screen real estate.

    Here’s what’s new:

    • Collapsible sections so you can keep open the sections you interact with the most and hide what you don’t. Your layout will stay exactly how you leave it, even on page refresh.
    • Simplified visuals with fewer icons and improved spacing, making it easier to scan the navigation and find what you’re looking for.
    • Shortcuts moved up for faster access to your most frequently visited flags. If you haven’t tried this feature before, Shortcuts let you bookmark filtered views of your flags dashboard for quick access to the flags you work with most.
    • A refined Create action that remains easy to find but no longer competes with key actions on the page.
    • An improved search experience that offers quick, keyboard-friendly access to any part of the platform.

    These changes are now available to all LaunchDarkly users.

    This work builds on the progress we’ve made over the past year and a half. It reduces visual noise, adds flexibility, and helps teams move faster and stay focused. We’re excited for you to experience it, and we’d love to hear what you think. Share your reaction with us at [email protected].

  • December 2025
    • No date parsed from source.
    • First seen by Releasebot:
      Dec 12, 2025

    LaunchDarkly

    Flag lifecycle settings

    LaunchDarkly adds configurable flag lifecycle settings to control when flags are archived. Changes apply project-wide and take effect immediately, with Insights showing pre and post-update data. Learn how to set minimum age, prerequisites, variations, and archiving scope.

    Overview

    This topic describes flag lifecycle settings and how to update them. Flag lifecycle settings let you customize the criteria that LaunchDarkly uses to determine when flags are ready to be archived. Archiving flags is a good practice to help clean up flags you no longer need.
    Custom lifecycle settings apply only to critical environments. LaunchDarkly will not evaluate non-critical environments for a flag’s archive-readiness.

    Update lifecycle settings

    When you update the Lifecycle settings for a project, the changes apply to all flags in your project. This means some flags previously considered ready to archive may no longer be ready to archive, and some flags previously considered not yet ready to archive may now be ready. On the Flags list, these changes apply immediately.
    In Launch Insights, data from before you update the settings reflects your older definition, and data from after you update the settings reflects your newer definition.

    To customize flag lifecycle settings:

    • Click the project dropdown. The project menu appears:
      The project menu.
    • Click Project settings.
    • Click Lifecycle settings. The flag lifecycle settings panel appears:
      The flag lifecycle settings panel.
    • Customize the criteria that LaunchDarkly uses to determine when flags are ready to be archived:
      • Select the Minimum flag age. We recommend setting this to at least as long as it takes to release most features in your critical environments.
      • Select Before code removal, there should be no targeting changes for at least to set how long a flag’s targeting rules must be unchanged before the flag is ready to archive.
      • Select Before archiving, there should be no evaluations for at least to set how long a flag should have no evaluations before the flag is ready to archive.
      • Select whether a flag may be a prerequisite. We recommend checking the Must not be a prerequisite for other flags checkbox.
      • Select whether flags may serve one or many variations and still be ready to archive. We recommend checking the Must be serving one variation checkbox.
      • Select whether the archival checks apply to all flags or only temporary flags. We recommend checking the Must be temporary checkbox.
    • Click Save.

    To reset the flag lifecycle settings, navigate to the flag lifecycle settings panel and select Reset to default.

  • Dec 9, 2025
    • Date parsed from source:
      Dec 9, 2025
    • First seen by Releasebot:
      Dec 12, 2025

    LaunchDarkly

    Collecting user feedback in your app with feature flags

    LaunchDarkly unveils a built in user feedback tool that ties sentiment and session data to feature flags. The guide walks you through enabling tracking, adding a feedback UI, and viewing results in the dashboard for faster data driven decisions.

    If you’re a builder, you understand that it’s crucial to observe how users respond to new features, interface changes, and experimental variations.
    Feedback is an essential part of the product development lifecycle, helping teams validate decisions and iterate faster. The new user feedback tool lets you and your team enable metric tracking for specific feature flags within minutes.
    This tutorial guides you through enabling and viewing valuable user feedback metrics directly from the LaunchDarkly dashboard. A sample app is provided for you to clone and follow along on your own machine, but you can skip ahead to “Install the SDK and implement user feedback function” if you prefer to use your own app. This feature is available for the JavaScript and TypeScript client-side SDKs. React components are also provided for convenient integration.

    Why you should think about this today

    Tracking important metrics such as user feedback is often overlooked. It can also be a struggle to connect user sentiment directly to a feature flag or existing experimentation.
    Perhaps you rely on generic feedback tools that lack context about which features users are experiencing. Ideally, you would want to see a list of feedback tied to a feature that has been shipped behind a flag.
    Don’t spend more time guessing where a user dropped off when this new LaunchDarkly feature can provide more contextual feedback and live session replay. The qualitative feedback helps narrow down the decision-making process so that your team can ship features faster, catch issues early, or roll back a variation and adjust a specific prompt. It is also a great way to understand how different users behave, whether you’re working with gamers, beta testers, premium users, or internal users.

    Requirements

    • LaunchDarkly account. If you haven’t done so already, create a free account
    • An app that uses client side flags. A sample app will be provided below
    • A modern Node.js version. I recommend Node Version Manager (nvm) if you don’t have Node.js installed already
    • Bonus: LaunchDarkly feature flags set up and enabled already

    If you would like to follow along with this tutorial with our starter code, go ahead and clone this project:

    $ git clone git@github.com:launchdarkly-labs/ld-feedback-tutorial.git
    

    Create the feature flag

    If you and your team do not already have a feature flag, go to the LaunchDarkly dashboard and create your first one. Give it a name such as “feedback demo”.
    You will be directed to the dashboard to configure and toggle your flag ON. After you’ve done so, head back to your development environment. Create a .env file using the command

    cp .env.example .env
    

    and add the following lines:

    VITE_LAUNCHDARKLY_CLIENT_SIDE_ID=XXXXXXXX
    VITE_OBSERVABILITY_PROJECT_ID="<YOUR_PROJECT_NAME_ON_LAUNCHDARKLY>"
    

    Be sure to replace the values above with the project name on the LaunchDarkly dashboard and the client-side ID from the Test environment, as seen in the screenshot below.

    Install the SDK and implement user feedback function

    Click on the Feedback tab at the top of the page.
    Click on the View setup guide button to find the SDK code to copy and paste into your project.
    To keep your working directory organized, make a new file named sendFeedback.ts in the src subdirectory. Copy and paste the SDK snippet from the feedback setup guide into this new file.
    This module sends user feedback to LaunchDarkly using the client-side SDK. The LaunchDarkly client then tracks these events so that the developer can analyze relevant data such as sentiment, feedback, prompt, and the user’s session data.
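    The exact snippet comes from the setup guide, but to make the idea concrete, here is a minimal TypeScript sketch of what such a module does. The event key "user-feedback", the payload fields, and the FeedbackClient interface are illustrative assumptions; the real module uses the LaunchDarkly client’s track() method to record the event.

```typescript
// Minimal sketch of sendFeedback.ts. The event key and payload fields are
// illustrative assumptions; copy the real snippet from the feedback setup
// guide for production use.
type Sentiment = "positive" | "negative";

// Stand-in for the LaunchDarkly client; the real LDClient from
// launchdarkly-js-client-sdk exposes a compatible track() method.
export interface FeedbackClient {
  track(eventKey: string, data?: unknown): void;
}

export interface FeedbackEvent {
  message: string;
  sentiment: Sentiment;
  timestamp: number;
}

export function sendFeedback(
  client: FeedbackClient,
  message: string,
  sentiment: Sentiment
): FeedbackEvent {
  const event: FeedbackEvent = { message, sentiment, timestamp: Date.now() };
  // The client batches the event and flushes it to LaunchDarkly,
  // where it appears in the Feedback dashboard.
  client.track("user-feedback", event);
  return event;
}
```

    With the real SDK, you would pass the initialized LDClient instead of the stand-in interface; the call shape stays the same.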

    Create the feedback modal file

    Here’s the fun part: implementing the buttons and functions necessary to collect feedback from users.
    Create a new file within the src subdirectory and name it feedbackPopover.tsx.
    Click on the React examples tab of the setup guide to access the code snippet. This tutorial implements a feedback popover that allows users to type in a message and give a thumbs up or thumbs down to describe their experience.
    Let’s alter the FeedbackPopover function a bit. Scroll down to the function and add an export keyword at the beginning so that the function can be imported into the main application.

    Navigate back to the app.tsx file to import the newly created file:

    import { FeedbackPopover } from './feedbackPopover.tsx'
    

    Notice the comment for the FeedbackPopover component that utilizes the ldClient. Copy the line and scroll down to the bottom of the file where the code returns JSX. Replace the placeholder comment {/* Add the feedback here */} with the FeedbackPopover component so it looks like the code snippet below:

    return (
      <>
        ...
        <div>
          <h2>Send Feedback Below</h2>
          <div>
            <FeedbackPopover ldClient={client} />
          </div>
        </div>
      </>
    );
    

    Great! Now the FeedbackPopover button will appear on the website. This feedback widget can be customized later so that you can collect user feedback tied to feature flag variations.
    This React file defines reusable, stateless icon components. The feedback popover’s visibility is controlled by the isOpen prop. The user’s feedback is stored and a sentiment is determined accordingly; no input validation is required.

    Enable the client SDK in production environments

    If you are working in the Production environment, you might be prompted to enable the JavaScript Client SDK to access the flag’s key.
    You can toggle this on and off in the flag’s dashboard in the bottom right hand corner under the Advanced controls section.

    Test out the user feedback function

    Open up a terminal window and install the dependencies in the working directory with npm i. Run the dev server with npm run dev.
    You should get output like this:

    VITE v6.0.11 ready in 413 ms
    ➜ Local: http://localhost:5173/
    ➜ Network: use --host to expose
    ➜ press h + enter to show help
    

    Open up the http://localhost:5173/ page to see the following screen:

    Run flag evaluations

    The starter code is set up so that all you have to do is try it out! If you named your flag “feedback demo”, the flag key should already be set. If not, update the code to reflect your flag’s key.
    Click the Feedback button and add some sample data to make sure the app is working. I’ll make three evaluations to populate the flag with data to examine.
    It may take a minute before your evaluation and feedback results appear in the dashboard.
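    Under the hood, the starter app reads the flag with the client’s variation() call. Here is a minimal sketch; the flag key "feedback-demo" and the helper name are assumptions based on the flag name suggested earlier, while variation() itself is the LaunchDarkly client-side SDK’s standard evaluation method.

```typescript
// Minimal sketch of a client-side flag evaluation. The flag key
// "feedback-demo" is an assumption based on the name suggested earlier;
// the real app calls the LDClient's variation() method the same way.
interface FlagClient {
  variation(flagKey: string, defaultValue: boolean): boolean;
}

export function shouldShowFeedback(client: FlagClient): boolean {
  // Falls back to false (feedback widget hidden) if the flag
  // cannot be evaluated, e.g. before the client has initialized.
  return client.variation("feedback-demo", false);
}
```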

    Assess the user feedback on dashboard

    Confirm that the feedback went through. Go back to the Feedback dashboard and refresh the page. You’ll see the latest comment that was created and how its sentiment was analyzed.
    To see the evaluations, switch over to the Audience tab.
    The nifty part of this new feature is examining the data. LaunchDarkly allows you to filter by sentiment, variation, and a specific timeframe.
    However, filtering by different dates only applies to the variation statistics in the chart above and the table below.
    Data regarding the sentiment distribution and evaluations is only shown in the table.
    Go ahead and play around with the dashboard to filter the data according to your project’s needs.

    Create different sessions and customize user behavior

    Every project has a different use case; perhaps your audience plays video games, or they are subscribing members. You can create different contexts and customize them specifically for your use case.
    LaunchDarkly also allows you to create different sessions to observe user behavior.
    Click on a user key to see the user context.

    Observe the session replay

    If you have observability enabled, then you can see how the person interacts with your website and watch them enter the feedback. Click on the blue play button under the “Session” column and observe how the user interacts with the website before sending in a comment for feedback evaluation.

    What’s next for collecting metrics and user feedback?

    Congratulations on taking the next step in building more informed, user-driven experiences. The built-in qualitative user feedback tool allows you and your team to quickly and conveniently enable and track metrics for specific feature flags without interrupting your development process.
    Now, every feature rollout can capture both measurable impact and meaningful user sentiment, helping your team make faster, smarter decisions backed by real insight. It’s time to formulate better questions to learn from these new features.
    Send me an email at [email protected], connect with me on LinkedIn, or follow @diane.dot.dev on social media to let us know what you’re building.

    Original source Report a problem
  • December 2025
    • No date parsed from source.
    • First seen by Releasebot:
      Dec 5, 2025

    LaunchDarkly

    LLM playground

    LaunchDarkly unveils a secure LLM playground for pre production testing. Create, run, and compare evaluations with automatic scoring in a sandbox, manage API keys, view detailed run results, and prep models before production deployment.

    Overview

    This topic explains how to use the LLM playground to create and run evaluations that measure the quality of model outputs before deployment. The playground provides a secure sandbox where you can experiment with prompts, models, and parameters. You can view model outputs, attach evaluation criteria to assess quality, and use a separate LLM to automatically score or analyze completions according to a rubric you define.

    The playground helps AI and ML teams validate quality before deploying to production. It supports fast, controlled testing so you can refine prompts, models, and parameters early in the development cycle. The playground also establishes the foundation for offline evaluations, creating a clear path from experimentation to production deployment within LaunchDarkly.

    The playground complements online evaluations. Online evaluations measure quality in production using attached judges. The playground focuses on pre-production testing and refinement.

    Who uses the playground

    The playground is designed for AI developers, ML engineers, product engineers, and PMs building and shipping AI-powered products. It provides a unified environment for evaluating models, comparing configurations, and promoting the best-performing variations.

    Use the playground to:

    • Create and run evaluations that test model outputs.
    • Measure model quality using criteria such as factuality, groundedness, or relevance.
    • Use an evaluator LLM to automatically score or analyze completions.
    • Adjust prompts, parameters, or variables to improve performance.
    • Manage and secure provider credentials in the Manage API keys section.

    Each evaluation can generate multiple runs. When you change an evaluation and create a new run, earlier runs remain available with their original data.

    How the playground works

    The playground uses the same evaluation framework as online evaluations but runs evaluations in a controlled sandbox. Each evaluation contains messages, variables, model parameters, and optional evaluation criteria. When you run an evaluation, the playground records the model response, token usage, latency, and scores for each criterion.

    Teams can define reusable evaluations that combine prompts, models, parameters, and variables or context. You can run each evaluation to generate completions and view structured results. You can also attach a secondary LLM to automatically score or analyze each response.
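    To make that structure concrete, an evaluation can be pictured as a record like the one below. The field names, the model choice, and the criteria shown are illustrative assumptions for explanation, not the playground’s actual schema.

```typescript
// Illustrative shape of a playground evaluation. All field names here are
// assumptions used to explain the concept, not LaunchDarkly's real schema.
interface Evaluation {
  name: string;
  model: { provider: string; name: string; temperature?: number };
  messages: Array<{ role: "system" | "user" | "assistant"; content: string }>;
  variables: Record<string, string>;
  criteria: Array<{ name: string; threshold?: number }>;
}

// A hypothetical evaluation combining a prompt, a model, a variable,
// and two evaluation criteria.
export const exampleEvaluation: Evaluation = {
  name: "support-answer-quality",
  model: { provider: "openai", name: "gpt-4o", temperature: 0.2 },
  messages: [
    { role: "system", content: "You are a concise support assistant." },
    { role: "user", content: "How do I reset my {{productName}} password?" },
  ],
  variables: { productName: "Acme" },
  criteria: [{ name: "factuality", threshold: 0.8 }, { name: "relevance" }],
};
```

    Running such an evaluation would produce one run per Save and run, each recording the model output alongside a score for factuality and relevance.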

    Data in the playground is temporary. Test data is deleted after 60 days unless you save the evaluation. LaunchDarkly integrations securely store provider credentials and remove them at the end of each session.

    Each playground session includes:

    • Evaluation setup: messages, parameters, variables, and provider details
    • Run results: model outputs, token counts, latency, and evaluation scores
    • Isolation: evaluations cannot modify production configurations
    • Retention: data expires after 60 days unless you save the evaluation

    When you click Save and run, LaunchDarkly securely sends your configuration to the model provider and returns the model output and evaluation results as a new run.

    Example structured run output

    {
      "accuracy": { "score": 0.9, "reason": "Accurate and complete answer." },
      "groundedness": { "score": 0.85, "reason": "Mostly supported by source context." },
      "latencyMs": 1200,
      "inputTokens": 420,
      "outputTokens": 610
    }
    

    Create and manage evaluations

    You can use the playground to create, edit, and delete evaluations. Each evaluation can include messages, model parameters, criteria, and variables.

    Create an evaluation

    1. Navigate to your project.
    2. In the left navigation, click Playground.
    3. Click New evaluation. The “Input” tab opens.
    4. Click Untitled and enter a name for the evaluation.
    5. Select a model provider and model.
    6. Add or edit messages for the System, User, and Assistant roles. These messages define how the model interacts in a conversation:
      • System provides context or instructions that set the model’s behavior and tone.
      • User represents the input prompt or question from an end user.
      • Assistant represents the model’s response. You can include an example or leave it blank to view generated results.
    7. Attach one or more evaluation criteria. Each criterion defines a measurement, such as factuality or relevance, and includes configurable options such as threshold or control prompt.
    8. (Optional) Add variables to reuse dynamic values, such as {{productName}} or context attributes like {{ldContext.city}}.
    9. (Optional) Attach a scoring LLM to automatically evaluate each output.
    10. Click Save and run. The playground creates a new run and adds an output row with model response and evaluation scores.
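    Step 8 above mentions variables such as {{productName}} and {{ldContext.city}}. A minimal sketch of how double-brace substitution works, assuming the playground replaces each placeholder with its value at run time (the playground’s actual behavior may differ in details such as handling of unknown variables):

```typescript
// Replace {{name}} placeholders in a template with supplied values.
// A simplified model of the playground's variable substitution.
export function interpolate(
  template: string,
  variables: Record<string, string>
): string {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (match, key: string) =>
    // Leave unknown variables untouched rather than dropping them.
    key in variables ? variables[key] : match
  );
}
```

    For example, interpolating "How do I reset my {{productName}} password?" with { productName: "Acme" } yields "How do I reset my Acme password?"; dotted keys like ldContext.city work the same way.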

    Edit an evaluation

    You can edit an evaluation at any time. Changes apply to new runs only. Earlier runs retain their original data.

    To edit an evaluation:

    1. In the Playground list, click the evaluation you want to edit.
    2. Update messages, model, parameters, variables, or criteria.
    3. Click Save and run to generate a new run with updated evaluation data.

    Delete an evaluation

    To delete an evaluation:

    1. In the Playground list, find the evaluation you want to delete.
    2. Click the three-dot overflow menu.
    3. Click Delete evaluation and confirm.

    Deleting an evaluation removes its configuration and associated runs from the playground.

    View evaluation runs

    The Output tab shows all runs for an evaluation.

    Each run includes:

    • Evaluation summary
    • Scores for each criterion
    • Input, output, and total tokens used
    • Latency

    Select a run to view:

    • Raw output: the exact text or JSON object returned by the model
    • Evaluation results: scores and reasoning for each evaluation criterion

    Runs update automatically when new results are available.

    Manage API keys

    The playground uses the provider credentials stored in your LaunchDarkly project to run evaluations. You can add or update these credentials from the Manage API keys section to ensure your evaluations use the correct model access.

    To manage provider API keys:

    1. In the upper-right corner of the playground page, click Manage API keys to open the “Integrations” page with the “AI Config Test Run” integration selected.
    2. Click Add integration.
    3. Enter a name.
    4. Select a model provider.
    5. Enter the API key for your selected provider.
    6. Read the Integration Terms and Conditions and check the box to confirm.
    7. Click Save configuration.

    Only one active credential per provider is supported per project. LaunchDarkly does not retain API keys beyond the session.

    Privacy

    The playground may send prompts and variables to your configured model provider for evaluation. LaunchDarkly does not store or share your inputs, credentials, or outputs outside your project.

    If your organization restricts sharing personal data with external providers, ensure that prompts and variables exclude sensitive information.
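    As a practical precaution, you can scrub obvious personal data from prompts and variables before submitting them. A minimal sketch, assuming email addresses are the sensitive pattern to mask; real PII scrubbing for your organization would typically add further patterns (phone numbers, names, account IDs, and so on).

```typescript
// Mask email addresses in text before it is sent to a model provider.
// The single pattern here is illustrative; extend it to match your
// organization's data-handling policy.
const EMAIL_PATTERN = /[\w.+-]+@[\w-]+\.[\w.-]+/g;

export function redactEmails(text: string): string {
  return text.replace(EMAIL_PATTERN, "[REDACTED]");
}
```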

    To learn more, read AI Configs and information privacy.

    Original source Report a problem
  • December 2025
    • No date parsed from source.
    • First seen by Releasebot:
      Dec 1, 2025

    LaunchDarkly

    Introducing Audiences: See who your flags are really impacting

    LaunchDarkly unveils Flag Audiences and session replay, letting you see who evaluated a flag, filter by variation, and replay sessions from the Observability SDK. This ties evaluations to users and observability data for faster incident resolution. Available to Guardian customers for Guarded Releases and standard rollouts.

    Audience view provides instant visibility into who evaluated your feature and what they experienced.

    TL;DR

    • LaunchDarkly now lets you see who evaluated your flag with the new Flag Audiences view.
    • View all evaluations for a flag, switch between context kinds, and filter by variation.
    • Customers using the Observability SDK can access session replays to see exactly what users experienced, without leaving LaunchDarkly.

    Introducing Flag Audiences and Sessions

    When incidents happen or performance drops after a rollout, one of the first questions teams ask is: “Who was impacted?” Until now, finding that answer meant manually cross-referencing logs, traces, and flag histories across multiple tools, which can be slow and error-prone. That’s why we’re excited to announce Audiences, a new capability that connects feature flag evaluations directly to the users and sessions behind them. This lets you trace impact in real time, link flags to observability data, and resolve issues faster.

    Know exactly who saw your feature

    When a flag changes, LaunchDarkly now shows you who evaluated it, whether that’s a specific user, account, or device. From the new Audience tab in your flag’s dashboard, you can:

    • View all users or contexts that evaluated the flag.
    • Filter by variation (control, treatment, or custom).
    • See when they last evaluated the flag.
    • (If you have the Observability SDK downloaded) Watch their most recent session replay to understand what happened before and after the evaluation.

    This gives you a full, traceable record of flag activity and an actionable view of your rollout audience.

    Example: Investigating a regression

    Imagine you’ve rolled out a new feature behind a flag, and your monitoring system reports an increase in 500 (Internal Server) errors. With Audience, you can open the flag, filter for users served the “treatment” variation, and instantly see which sessions encountered errors. You can even replay those sessions to see what actions led up to the issue, all from within LaunchDarkly.

    Why knowing your flag audience matters

    Benefits of the Audience feature include:

    • Faster incident resolution. Quickly identify which users or sessions were affected by a flag change and why, with no manual log digging or tool switching required.
    • Deeper visibility, fewer silos. See flag evaluations, user sessions, and observability data together in one unified view for instant context during investigations.
    • Smarter collaboration across teams. Give SREs, engineers, and PMs a shared source of truth for debugging, postmortems, and release validation.

    The Audience view is now available to all LaunchDarkly Guardian customers for both Guarded Releases and regular flag rollouts.

    Want to see the modern way of shipping code for yourself? Learn how Guarded Releases works.

    Original source Report a problem
