LaunchDarkly Release Notes

Last updated: Dec 12, 2025

  • Dec 9, 2025
    • Parsed from source:
      Dec 9, 2025
    • Detected by Releasebot:
      Dec 12, 2025

    LaunchDarkly

    Collecting user feedback in your app with feature flags

    LaunchDarkly unveils a built-in user feedback tool that ties sentiment and session data to feature flags. The guide walks you through enabling tracking, adding a feedback UI, and viewing results in the dashboard for faster, data-driven decisions.

    If you’re a builder, you know it’s crucial to observe how users respond to new features, interface changes, and experimental variations.
    Feedback is an essential part of the product development lifecycle, helping teams validate decisions and iterate faster. The new user feedback tool lets you and your team enable metric tracking for specific feature flags within minutes.
    This tutorial will guide you through enabling and viewing valuable user feedback metrics directly from the LaunchDarkly dashboard. A sample app is provided for you to clone and follow along on your own machine, but you can skip ahead to “Install the SDK and implement user feedback function” if you prefer to use your own app. This feature is available for the JavaScript and TypeScript client-side SDKs. React components are also provided for convenient integration.

    Why you should think about this today

    Tracking important metrics such as user feedback is often overlooked. It can also be a struggle to connect user sentiment directly to a feature flag or an existing experiment.
    Perhaps you’re relying on generic feedback tools that lack context about which features users are experiencing. Ideally, you want a list of feedback tied to a feature that has shipped behind a flag.
    Don’t spend more time guessing where a user dropped off: this new LaunchDarkly feature provides contextual feedback and live session replay. The qualitative feedback narrows the decision-making process so that your team can ship features faster, catch issues early, or roll back a variation and adjust a specific prompt. It is also a great way to understand how different users behave - whether you’re working with gamers, beta testers, premium users, or internal users.

    Requirements

    • A LaunchDarkly account. If you haven’t done so already, create a free account
    • An app that uses client-side flags. A sample app is provided below
    • A modern Node.js version. I recommend node version manager (nvm) if you don’t have Node installed already
    • Bonus: LaunchDarkly feature flags already set up and enabled

    If you would like to follow along with this tutorial with our starter code, go ahead and clone this project:

    $ git clone git@github.com:launchdarkly-labs/ld-feedback-tutorial.git
    

    Create the feature flag

    If you and your team do not already have a feature flag, go to the LaunchDarkly dashboard and create your first one. Give it a name such as “feedback demo”.
    You will be directed to the dashboard to configure and toggle your flag ON. After you’ve done so, head back to your development environment. Create an .env file using the command

    cp .env.example .env
    

    and add the following lines:

    VITE_LAUNCHDARKLY_CLIENT_SIDE_ID=XXXXXXXX
    VITE_OBSERVABILITY_PROJECT_ID="<YOUR_PROJECT_NAME_ON_LAUNCHDARKLY>"
    

    Be sure to replace the values above with your project name from the LaunchDarkly dashboard and the client-side ID from the Test environment.

    Install the SDK and implement user feedback function

    Click on the Feedback tab at the top of the page.
    Click the View setup guide button to find the SDK code to copy and paste into your project.
    To keep your working directory organized, make a new file named sendFeedback.ts in the src subdirectory. Copy and paste the code from the feedback setup guide into this new file.
    This module sends the user feedback to LaunchDarkly using the client-side SDK. The LaunchDarkly client then tracks these events so you can analyze data such as sentiment, feedback message, prompt, and the user’s session.
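
    For orientation, the module ends up looking roughly like the sketch below. This is a minimal illustration, not the exact setup-guide snippet; the event key and payload shape here are assumptions:

    // sendFeedback.ts - a minimal sketch; the real code comes from the setup guide
    import type { LDClient } from 'launchdarkly-js-client-sdk';

    export interface FeedbackPayload {
      sentiment: 'positive' | 'negative';
      message: string;
      prompt?: string;
    }

    export function sendFeedback(client: LDClient, feedback: FeedbackPayload): void {
      // track() records a custom event that LaunchDarkly ties to the current
      // context and its flag evaluations.
      client.track('user-feedback', { ...feedback });
    }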

    Create the feedback modal file

    Here’s the fun part - implementing the buttons and functions necessary to collect feedback from users.
    Create a new file within the src subdirectory and name it feedbackPopover.tsx.
    Click on the React examples tab of the setup guide to access the code snippet. This tutorial implements a feedback popover that lets users type a message and give a thumbs up or thumbs down to describe their experience.
    Let’s alter the FeedbackPopover function a bit. Scroll down to the function and add the export keyword at the beginning so that the function can be imported into the main application.

    Navigate back to the app.tsx file to import the newly created file:

    import { FeedbackPopover } from './feedbackPopover.tsx'
    

    Notice the comment for the FeedbackPopover component that utilizes the ldClient. Copy the line and scroll down to the bottom of the file, where the code returns JSX. Replace the placeholder comment {/* Add the feedback here */} with the FeedbackPopover component so it looks like the code snippet below:

    return (
      <>
        ...
        <div>
          <h2>Send Feedback Below</h2>
          <div>
            <FeedbackPopover ldClient={client} />
          </div>
        </div>
      </>
    );
    

    Great! Now the FeedbackPopover button will appear on the website. This feedback widget can be customized later so that you can collect user feedback tied to feature flag variations.
    This React file defines reusable, stateless icon components. The popover’s visibility is controlled by the isOpen prop. The user’s feedback is stored and a sentiment is recorded accordingly; no input validation is required.
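
    Condensed, the component’s shape is roughly the following. This is an illustrative sketch, not the full setup-guide component, and it reuses the hypothetical sendFeedback helper sketched earlier:

    import { useState } from 'react';
    import type { LDClient } from 'launchdarkly-js-client-sdk';
    import { sendFeedback } from './sendFeedback';

    export function FeedbackPopover({ ldClient }: { ldClient: LDClient }) {
      const [isOpen, setIsOpen] = useState(false);
      const [message, setMessage] = useState('');

      // Submitting stores the message and records the chosen sentiment.
      const submit = (sentiment: 'positive' | 'negative') => {
        sendFeedback(ldClient, { sentiment, message });
        setIsOpen(false);
      };

      if (!isOpen) {
        return <button onClick={() => setIsOpen(true)}>Feedback</button>;
      }
      return (
        <div>
          <textarea value={message} onChange={(e) => setMessage(e.target.value)} />
          <button onClick={() => submit('positive')}>Thumbs up</button>
          <button onClick={() => submit('negative')}>Thumbs down</button>
        </div>
      );
    }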

    Enable the client SDK in production environments

    If you are working in the Production environment, you might be prompted to enable the JavaScript client-side SDK’s access to the flag’s key.
    You can toggle this on and off on the flag’s dashboard, in the bottom right-hand corner under the Advanced controls section.

    Test out the user feedback function

    Open a terminal window and install the dependencies in the working directory with npm i. Then run the dev server with npm run dev.
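
    That is, from the project root:

    $ npm i
    $ npm run dev
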
    You should get output like this:

    VITE v6.0.11 ready in 413 ms
    ➜ Local: http://localhost:5173/
    ➜ Network: use --host to expose
    ➜ press h + enter to show help
    

    Open the http://localhost:5173/ page to see the sample app.

    Run flag evaluations

    The starter code is set up so that all you have to do is try it out! If you named your flag “feedback demo”, the flag key should already be set. If not, update the code to reflect your flag key.
    Click the Feedback button and add some sample data to make sure the app is working. I’ll make three evaluations to populate the flag with data to examine.
    Wait a minute or so for your evaluation and feedback results to appear in the dashboard.
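
    For reference, the client-side evaluation behind this step looks roughly like the line below, where client is the SDK client instance and the key feedback-demo is assumed from the flag name suggested earlier:

    // Evaluate the flag for the current context; false is the fallback value.
    const showFeedback = client.variation('feedback-demo', false);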

    Assess the user feedback on the dashboard

    Confirm that the feedback went through. Go back to the Feedback dashboard and refresh the page. You’ll see the latest comment that was created and how its sentiment was analyzed.
    To see the evaluations, switch over to the Audience tab.
    The nifty part of this new feature is examining the data. LaunchDarkly allows you to filter by sentiment, variation, and a specific timeframe.
    However, filtering by different dates only applies to the variation statistics in the chart above and the table below.
    Data regarding the sentiment distribution and evaluations is only shown in the table.
    Go ahead and play around with the dashboard to filter the data according to your project’s needs.

    Create different sessions and customize user behavior

    Every project has a different use case - perhaps your audience plays video games, or they are subscribing members. You can create different contexts and customize them specifically for your use case.
    LaunchDarkly also allows you to create different sessions to observe each user’s behavior.
    Click on a user key to see the user context.
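
    As a sketch, a customized context might look like the following; the attribute names are hypothetical and entirely up to your use case:

    // Identify a context with custom attributes, e.g. for a gaming audience.
    const context = {
      kind: 'user',
      key: 'user-key-123abc',
      name: 'Sandy',
      plan: 'premium',
      favoriteGenre: 'rpg',
    };
    await client.identify(context);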

    Observe the session replay

    If you have observability enabled, then you can see how the person interacts with your website and watch them enter the feedback. Click on the blue play button under the “Session” column and observe how the user interacts with the website before sending in a comment for feedback evaluation.

    What’s next for collecting metrics and user feedback?

    Congratulations on taking the next step in building more informed, user-driven experiences. The built-in qualitative user feedback tool allows you and your team to quickly and conveniently enable and track metrics for specific feature flags without interrupting your development process.
    Now every feature rollout can capture both measurable impact and meaningful user sentiment, helping your team make faster, smarter decisions backed by real insight. It’s time to formulate better questions to learn from these new features.
    Send me an email at [email protected], connect with me on LinkedIn, or follow @diane.dot.dev on social media to let us know what you’re building.

    Original source
  • Dec 1, 2025
    • Parsed from source:
      Dec 1, 2025
    • Detected by Releasebot:
      Dec 7, 2025

    LaunchDarkly

    Playgrounds for AI Configs

    LaunchDarkly launches Playgrounds to test and compare AI Configs without code. Define reusable evaluations with prompts, models, and variables, run them on demand, and auto-score with a separate LLM. Future updates promise bulk evaluations and dataset uploads.

    You can now use Playgrounds in LaunchDarkly to quickly test and compare AI Configs without writing any custom code. Playgrounds let teams define reusable evaluations that bundle prompts, models, parameters, and variables, then run them on demand to generate completions and inspect results in a structured, repeatable way.

    Playgrounds also support automatic scoring: attach a separate LLM to evaluate each completion using your own rubric (for example, correctness, relevance, or toxicity). This shortens the iteration loop and makes it easier to understand which configuration performs best before you roll it out.

    Future updates will include bulk evaluations, dataset uploads, and more advanced comparison tools, all powered by the same evaluation service underlying Playgrounds.

    Original source
  • December 2025
    • No date parsed from source.
    • Detected by Releasebot:
      Dec 19, 2025

    LaunchDarkly

    Observability settings

    LaunchDarkly launches early access observability features for sessions, errors, logs and traces. Enable observability from the billing page and tune filtering, rage-click sensitivity, sourcemaps, and auto-resolve with rule-based ingestion controls.

    This feature is in Early Access

    LaunchDarkly’s observability features are publicly available in early access. You can enable observability from the billing page.
    They currently require the LaunchDarkly observability SDKs and the JavaScript, React Web, or Vue SDK.
    If you are interested in participating in the Early Access Program for our upcoming observability plugins for server-side SDKs, sign up here.

    Overview

    This topic describes the project-level settings available for sessions, errors, logs, and traces.
    In the left navigation of the LaunchDarkly UI, expand Observe to view them.
    To view or update project-level settings for these features:

    • Click the project dropdown to open the project menu.
    • Select Project settings.
    • Click Observability. The Observability settings page appears.

    The following sections describe the available settings.

    Session settings

    You can configure the following settings for sessions in your project:

    • Excluded users. This setting excludes sessions from particular end users, based on their context key or email address.
    • Rage clicks. These settings adjust the sensitivity for detecting “rage clicks,” or occasions when end users repeatedly click an element in your application, indicating frustration. You can set the Elapsed time, Radius, and Minimum clicks. These settings control whether a search for session replays that uses the has_rage_clicks attribute will return a given session. By default, LaunchDarkly considers end-user activity a rage click when there exists a two-second or longer period in which an end user clicks five or more times within a radius of eight pixels. A sketch of this rule follows this list.
      Click Save to save your settings.
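
    Here is that default rule expressed as a small illustrative sketch in TypeScript. It is not LaunchDarkly’s detection code; the types and function are hypothetical, and it assumes the click list is sorted by time:

    interface Click { x: number; y: number; t: number } // t: timestamp in milliseconds

    // Defaults from above: five or more clicks, within an eight-pixel radius,
    // over a period of at least two seconds.
    function isRageClick(
      clicks: Click[],
      minElapsedMs = 2000,
      radiusPx = 8,
      minClicks = 5,
    ): boolean {
      for (const first of clicks) {
        // Later clicks that stay within the radius around `first`.
        const near = clicks.filter(
          (c) => c.t >= first.t && Math.hypot(c.x - first.x, c.y - first.y) <= radiusPx,
        );
        if (near.length >= minClicks && near[near.length - 1].t - near[0].t >= minElapsedMs) {
          return true;
        }
      }
      return false;
    }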

    Error settings

    You can configure the following settings for errors in your project:

    • Sourcemaps. If you have uploaded sourcemaps, you can view them here.
    • Auto-resolve stale errors. When enabled, this setting automatically sets the status of an error to “Resolved” after the time period you select.
      Click Save to save your settings.

    Filters

    Filters help you manage the ingestion of sessions, errors, logs, or traces that you send to LaunchDarkly. This is useful if you know that certain signals are not relevant to your application or are not actionable. Any excluded signals do not count against your observability quotas.
    To configure ingestion filters:

    • Navigate to the Observability project settings page.
    • From the Filters section, click Edit next to the type of signal you want to configure.
    • (Optional) Configure filter rules to manage ingestion of sessions, errors, logs, or traces.
    • (Optional) Set the Max ingest per minute. This setting rate limits the maximum number of data points ingested in a one-minute window. For example, you may configure a rate limit of 100 per minute. This lets you limit the number of data points recorded in case of a significant spike in use of your application.
    • Click Save.

    Rule evaluation order

    Rules are evaluated in order, from top to bottom. Drag and drop the rules to reorder them to fit your project’s needs. The first enabled rule that matches the criteria applies its filter operation and rate.

    Rules

    To add a filter rule:

    • Click Add rule.
    • Set a rule name.
    • Review the filter rule operation. Exclusion rules are used for sessions, errors, and logs. Inclusion rules are used for traces. You cannot change these settings.
    • Set a query:
      • Click the Filter… placeholder and select an attribute from the dropdown. For example, you can filter sessions based on active_length.
      • Select an operator from the dropdown. For example, you can filter by greater than, >.
      • Enter a value for your expression. For example, you can enter 8s for eight seconds.
    • Set the rule’s rate (%). For each signal that LaunchDarkly receives, it makes a randomized decision according to the rule’s rate whether to apply the include or exclude filter operation.
      • For example, if an exclusion rule has a 20% rate, then 20% of the signals that match the rule’s query are excluded and the remaining 80% are included.
    • Set the rule On or Off to enable or disable the rule.
    • Click Save.

    Records with no matching rules

    If a signal does not match any rule’s query, then LaunchDarkly includes it.

    Here is an example of multiple log filter rules:
    An example of multiple log filter rules.

    Here is how rule order controls rule evaluation:

    • Logs with level=ERROR and service_name=example-service are always included: the first rule matches, so the second and third rules are not reached.
    • Logs with level=DEBUG and service_name=example-service are always excluded: the first rule is skipped, the second rule matches, so the third rule is not reached.
    • Logs with level=INFO and service_name=example-service are 80% excluded and 20% included: the first and second rules are skipped, and the third rule matches.
    • Logs with level=INFO and service_name=new-service are always included: all three rules are skipped, so the log is ingested.
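
    The first-match-wins behavior, combined with the rate, can be summarized in a short sketch. This is illustrative TypeScript, not LaunchDarkly’s implementation:

    interface FilterRule {
      enabled: boolean;
      operation: 'include' | 'exclude';
      rate: number; // percentage, 0-100
      matches(signal: Record<string, string>): boolean;
    }

    // Rules are ordered top to bottom; the first enabled matching rule wins.
    function shouldIngest(signal: Record<string, string>, rules: FilterRule[]): boolean {
      for (const rule of rules) {
        if (!rule.enabled || !rule.matches(signal)) continue;
        // Randomized decision according to the rule's rate.
        const applies = Math.random() * 100 < rule.rate;
        return rule.operation === 'exclude' ? !applies : applies;
      }
      return true; // no rule matched: the signal is ingested
    }
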
    Original source
  • December 2025
    • No date parsed from source.
    • Detected by Releasebot:
      Dec 16, 2025

    LaunchDarkly

    Meet the new navigation in LaunchDarkly

    LaunchDarkly unveils a cleaner, more focused navigation with collapsible sections, simplified visuals, faster shortcuts, a refined Create action, and improved search. The update reduces visual noise and helps teams move faster, now available to all users.

    Here’s what’s new

    A cleaner, more focused navigation reduces noise and helps you move faster.

    It’s been about a year and a half since we introduced a new and improved LaunchDarkly experience, a major redesign that unified environments, improved navigation, and created more clarity across the app.

    Since then, our platform has expanded, and our navigation has expanded with it. Over time, the number of items and icons multiplied, and things started to feel a little overwhelming. Our customers provided consistent commentary:
    “I can’t see what matters most.”
    “The shortcuts are buried.”
    “It’s powerful, but it’s a lot.”
    We prioritized a refresh based on this feedback.

    This update makes the navigation cleaner, more focused, and better aligned with how you actually work. It helps keep what’s important in view and gives you back control of your screen real estate.

    Here’s what’s new:

    • Collapsible sections so you can keep open the sections you interact with the most and hide what you don’t. Your layout will stay exactly how you leave it, even on page refresh.
    • Simplified visuals with fewer icons and improved spacing, making it easier to scan the navigation and find what you’re looking for.
    • Shortcuts moved up for faster access to your most frequently visited flags. If you haven’t tried this feature before, Shortcuts let you bookmark filtered views of your flags dashboard for quick access to the flags you work with most.
    • A refined Create action that remains easy to find but no longer competes with key actions on the page.
    • An improved search experience that offers quick, keyboard-friendly access to any part of the platform.

    These changes are now available to all LaunchDarkly users.

    This work builds on the progress we’ve made over the past year and a half. It reduces visual noise, adds flexibility, and helps teams move faster and stay focused. We’re excited for you to experience it, and we’d love to hear what you think. Share your reaction with us at [email protected].

    Original source
  • December 2025
    • No date parsed from source.
    • Detected by Releasebot:
      Dec 5, 2025

    LaunchDarkly

    LLM playground

    LaunchDarkly unveils a secure LLM playground for pre production testing. Create, run, and compare evaluations with automatic scoring in a sandbox, manage API keys, view detailed run results, and prep models before production deployment.

    Overview

    This topic explains how to use the LLM playground to create and run evaluations that measure the quality of model outputs before deployment. The playground provides a secure sandbox where you can experiment with prompts, models, and parameters. You can view model outputs, attach evaluation criteria to assess quality, and use a separate LLM to automatically score or analyze completions according to a rubric you define.

    The playground helps AI and ML teams validate quality before deploying to production. It supports fast, controlled testing so you can refine prompts, models, and parameters early in the development cycle. The playground also establishes the foundation for offline evaluations, creating a clear path from experimentation to production deployment within LaunchDarkly.

    The playground complements online evaluations. Online evaluations measure quality in production using attached judges. The playground focuses on pre-production testing and refinement.

    Who uses the playground

    The playground is designed for AI developers, ML engineers, product engineers, and PMs building and shipping AI-powered products. It provides a unified environment for evaluating models, comparing configurations, and promoting the best-performing variations.

    Use the playground to:

    • Create and run evaluations that test model outputs.
    • Measure model quality using criteria such as factuality, groundedness, or relevance.
    • Use an evaluator LLM to automatically score or analyze completions.
    • Adjust prompts, parameters, or variables to improve performance.
    • Manage and secure provider credentials in the Manage API keys section.

    Each evaluation can generate multiple runs. When you change an evaluation and create a new run, earlier runs remain available with their original data.

    How the playground works

    The playground uses the same evaluation framework as online evaluations but runs evaluations in a controlled sandbox. Each evaluation contains messages, variables, model parameters, and optional evaluation criteria. When you run an evaluation, the playground records the model response, token usage, latency, and scores for each criterion.

    Teams can define reusable evaluations that combine prompts, models, parameters, and variables or context. You can run each evaluation to generate completions and view structured results. You can also attach a secondary LLM to automatically score or analyze each response.

    Data in the playground is temporary. Test data is deleted after 60 days unless you save the evaluation. LaunchDarkly integrations securely store provider credentials and remove them at the end of each session.

    Each playground session includes:

    • Evaluation setup: messages, parameters, variables, and provider details
    • Run results: model outputs, token counts, latency, and evaluation scores
    • Isolation: evaluations cannot modify production configurations
    • Retention: data expires after 60 days unless you save the evaluation

    When you click Save and run, LaunchDarkly securely sends your configuration to the model provider and returns the model output and evaluation results as a new run.

    Example structured run output

    {
      "accuracy": { "score": 0.9, "reason": "Accurate and complete answer." },
      "groundedness": { "score": 0.85, "reason": "Mostly supported by source context." },
      "latencyMs": 1200,
      "inputTokens": 420,
      "outputTokens": 610
    }
    

    Create and manage evaluations

    You can use the playground to create, edit, and delete evaluations. Each evaluation can include messages, model parameters, criteria, and variables.

    Create an evaluation

    1. Navigate to your project.
    2. In the left navigation, click Playground.
    3. Click New evaluation. The “Input” tab opens.
    4. Click Untitled and enter a name for the evaluation.
    5. Select a model provider and model.
    6. Add or edit messages for the System, User, and Assistant roles. These messages define how the model interacts in a conversation:
      • System provides context or instructions that set the model’s behavior and tone.
      • User represents the input prompt or question from an end user.
      • Assistant represents the model’s response. You can include an example or leave it blank to view generated results.
    7. Attach one or more evaluation criteria. Each criterion defines a measurement, such as factuality or relevance, and includes configurable options such as threshold or control prompt.
    8. (Optional) Add variables to reuse dynamic values, such as {{productName}} or context attributes like {{ldContext.city}}. An example follows these steps.
    9. (Optional) Attach a scoring LLM to automatically evaluate each output.
    10. Click Save and run. The playground creates a new run and adds an output row with model response and evaluation scores.
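
    For instance, the User message from step 6 could reference both kinds of variables. The wording below is purely illustrative:

    Summarize this week’s reviews for {{productName}} from customers in {{ldContext.city}}.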

    Edit an evaluation

    You can edit an evaluation at any time. Changes apply to new runs only. Earlier runs retain their original data.

    To edit an evaluation:

    1. In the Playground list, click the evaluation you want to edit.
    2. Update messages, model, parameters, variables, or criteria.
    3. Click Save and run to generate a new run with updated evaluation data.

    Delete an evaluation

    To delete an evaluation:

    1. In the Playground list, find the evaluation you want to delete.
    2. Click the three-dot overflow menu.
    3. Click Delete evaluation and confirm.

    Deleting an evaluation removes its configuration and associated runs from the playground.

    View evaluation runs

    The Output tab shows all runs for an evaluation.

    Each run includes:

    • Evaluation summary
    • Scores for each criterion
    • Input, output, and total tokens used
    • Latency

    Select a run to view:

    • Raw output: the exact text or JSON object returned by the model
    • Evaluation results: scores and reasoning for each evaluation criterion

    Runs update automatically when new results are available.

    Manage API keys

    The playground uses the provider credentials stored in your LaunchDarkly project to run evaluations. You can add or update these credentials from the Manage API keys section to ensure your evaluations use the correct model access.

    To manage provider API keys:

    1. In the upper-right corner of the playground page, click Manage API keys to open the “Integrations” page with the “AI Config Test Run” integration selected.
    2. Click Add integration.
    3. Enter a name.
    4. Select a model provider.
    5. Enter the API key for your selected provider.
    6. Read the Integration Terms and Conditions and check the box to confirm.
    7. Click Save configuration.

    Only one active credential per provider is supported per project. LaunchDarkly does not retain API keys beyond the session.

    Privacy

    The playground may send prompts and variables to your configured model provider for evaluation. LaunchDarkly does not store or share your inputs, credentials, or outputs outside your project.

    If your organization restricts sharing personal data with external providers, ensure that prompts and variables exclude sensitive information.

    To learn more, read AI Configs and information privacy.

    Original source
  • December 2025
    • No date parsed from source.
    • Detected by Releasebot:
      Dec 1, 2025

    LaunchDarkly

    Introducing Audiences: See who your flags are really impacting

    LaunchDarkly unveils Flag Audiences and session replay, letting you see who evaluated a flag, filter by variation, and replay sessions from the Observability SDK. This ties evaluations to users and observability data for faster incident resolution. Available to Guardian customers for Guarded Releases and standard rollouts.

    Audience view provides instant visibility into who evaluated your feature and what they experienced.

    TL;DR

    • LaunchDarkly now lets you see who evaluated your flag with the new Flag Audiences view.
    • View all evaluations for a flag, switch between context kinds, and filter by variation.
    • Customers using the Observability SDK can access session replays to see exactly what users experienced, without leaving LaunchDarkly.

    Introducing Flag Audiences and Sessions

    When incidents happen or performance drops after a rollout, one of the first questions teams ask is: “Who was impacted?” Until now, finding that answer meant manually cross-referencing logs, traces, and flag histories across multiple tools, which can be slow and error-prone. That’s why we’re excited to announce Audiences, a new capability that connects feature flag evaluations directly to the users and sessions behind them. This lets you trace impact in real time, link flags to observability data, and resolve issues faster.

    Know exactly who saw your feature

    When a flag changes, LaunchDarkly now shows you who evaluated it, whether that’s a specific user, account, or device. From the new Audience tab in your flag’s dashboard, you can:

    • View all users or contexts that evaluated the flag.
    • Filter by variation (control, treatment, or custom).
    • See when they last evaluated the flag.
    • (If you have the Observability SDK installed) Watch their most recent session replay to understand what happened before and after the evaluation.

    This gives you a full, traceable record of flag activity and an actionable view of your rollout audience.

    Example: Investigating a regression

    Imagine you’ve rolled out a new feature behind a flag, and your monitoring system reports an increase in 500 (Internal Server Error) responses. With the Audience view, you can open the flag, filter for users served the “treatment” variation, and instantly see which sessions encountered errors. You can even replay those sessions to see what actions led up to the issue, all from within LaunchDarkly.

    Why knowing your flag audience matters

    Benefits of the Audience feature include:

    • Faster incident resolution. Quickly identify which users or sessions were affected by a flag change and why, with no manual log digging or tool switching required.
    • Deeper visibility, fewer silos. See flag evaluations, user sessions, and observability data together in one unified view for instant context during investigations.
    • Smarter collaboration across teams. Give SREs, engineers, and PMs a shared source of truth for debugging, postmortems, and release validation.

    The Audience view is now available to all LaunchDarkly Guardian customers for both Guarded Releases and regular flag rollouts.

    Want to see the modern way of shipping code for yourself? Learn how Guarded Releases works.

    Original source
  • December 2025
    • No date parsed from source.
    • Detected by Releasebot:
      Dec 12, 2025

    LaunchDarkly

    Flag lifecycle settings

    LaunchDarkly adds configurable flag lifecycle settings to control when flags are archived. Changes apply project-wide and take effect immediately, with Insights showing pre and post-update data. Learn how to set minimum age, prerequisites, variations, and archiving scope.

    Overview

    This topic describes flag lifecycle settings and how to update them. Flag lifecycle settings let you customize the criteria that LaunchDarkly uses to determine when flags are ready to be archived. Archiving flags is a good practice to help clean up flags you no longer need.
    Custom lifecycle settings apply only to critical environments. LaunchDarkly will not evaluate non-critical environments for a flag’s archive-readiness.

    Update lifecycle settings

    When you update the Lifecycle settings for a project, the changes apply to all flags in your project. This means some flags previously considered ready to archive may no longer be ready to archive, and some flags previously considered not yet ready to archive may now be ready. On the Flags list, these changes apply immediately.
    In Launch Insights, data from before you update the settings reflects your older definition, and data from after you update the settings reflects your newer definition.

    To customize flag lifecycle settings:

    • Click the project dropdown. The project menu appears:
      The project menu.
    • Click Project settings.
    • Click Lifecycle settings. The flag lifecycle settings panel appears:
      The flag lifecycle settings panel.
    • Customize the criteria that LaunchDarkly uses to determine when flags are ready to be archived:
      • Select the Minimum flag age. We recommend setting this to at least as long as it takes to release most features in your critical environments.
      • Select Before code removal, there should be no targeting changes for at least to set how long a flag’s targeting rules must be unchanged before the flag is ready to archive.
      • Select Before archiving, there should be no evaluations for at least to set how long a flag should have no evaluations before the flag is ready to archive.
      • Select whether a flag may be a prerequisite for other flags. We recommend checking the Must not be a prerequisite for other flags checkbox.
      • Select whether flags may serve one or many variations and still be ready to archive. We recommend checking the Must be serving one variation checkbox.
      • Select whether the archival checks apply to all flags or only temporary flags. We recommend checking the Must be temporary checkbox.
    • Click Save.

    To reset the flag lifecycle settings, navigate to the flag lifecycle settings panel and select Reset to default.

    Original source
  • Nov 13, 2025
    • Parsed from source:
      Nov 13, 2025
    • Detected by Releasebot:
      Nov 20, 2025

    LaunchDarkly

    LaunchDarkly AI SDK Vercel Provider for Server-Side JavaScript

    LaunchDarkly rolls out AI SDK Vercel Provider for Server-Side JavaScript in alpha, enabling AI model integration with LaunchDarkly flags. Quick setup and provider packages get you started, but note this is not production ready.

    LaunchDarkly AI SDK Vercel Provider for Server-Side JavaScript

    ⛔️⛔️⛔️⛔️
    Caution
    This library is an alpha version and should not be considered ready for production use while this message is visible.

    ☝️☝️☝️☝️☝️☝️

    LaunchDarkly overview

    LaunchDarkly is a feature management platform that serves over 100 billion feature flags daily to help teams build better software, faster. Get started using LaunchDarkly today!

    Quick Setup

    This package provides Vercel AI SDK integration for the LaunchDarkly AI SDK. The simplest way to use it is with the LaunchDarkly AI SDK's initChat method:

    • Install the required packages:
      npm install @launchdarkly/server-sdk-ai @launchdarkly/server-sdk-ai-vercel --save
      # or
      # yarn add @launchdarkly/server-sdk-ai @launchdarkly/server-sdk-ai-vercel
      
    • Create a chat session and use it:
      import { init } from '@launchdarkly/node-server-sdk';
      import { initAi } from '@launchdarkly/server-sdk-ai';
      // Initialize LaunchDarkly client
      const ldClient = init(sdkKey);
      const aiClient = initAi(ldClient);
      // Create a chat session
      const defaultConfig = { enabled: true, model: { name: 'gpt-4' }, provider: { name: 'openai' } };
      const chat = await aiClient.initChat('my-chat-config', context, defaultConfig);
      if (chat) {
        const response = await chat.invoke('What is the capital of France?');
        console.log(response.message.content);
      }
      
    • For more information about using the LaunchDarkly AI SDK, see the LaunchDarkly AI SDK documentation.

    Vercel AI Provider Installation

    Important: You will need to install additional provider packages for the specific AI models you want to use. The Vercel AI SDK requires separate packages for each provider.
    When creating a new Vercel AI model, LaunchDarkly uses an AI Config and the Vercel AI SDK’s provider system to create a model instance. Install the Vercel AI provider package for each provider you plan to use in your AI Config so that the models can be properly instantiated.

    Installing a Vercel AI Provider

    To use specific AI models, install the corresponding provider package:

    • For OpenAI models
      npm install @ai-sdk/openai --save
      # or
      yarn add @ai-sdk/openai
      

    For a complete list of available providers and installation instructions, see the Vercel AI SDK Providers documentation.

    Advanced Usage

    For more control, you can use the Vercel AI provider package directly with LaunchDarkly configurations:

    import { VercelProvider } from '@launchdarkly/server-sdk-ai-vercel';
    import { generateText } from 'ai';
    
    // Create a Vercel AI model from LaunchDarkly configuration
    const model = await VercelProvider.createVercelModel(aiConfig);
    // Convert LaunchDarkly messages and add user message
    const configMessages = aiConfig.messages || [];
    const userMessage = { role: 'user', content: 'What is the capital of France?' };
    const allMessages = [...configMessages, userMessage];
    // Track the model call with LaunchDarkly tracking
    const response = await aiConfig.tracker.trackMetricsOf(
      VercelProvider.getAIMetricsFromResponse,
      () => generateText({ model, messages: allMessages })
    );
    console.log('AI Response:', response.text);
    

    Contributing

    We encourage pull requests and other contributions from the community. Check out our contributing guidelines for instructions on how to contribute to this SDK.

    About LaunchDarkly

    • LaunchDarkly is a continuous delivery platform that provides feature flags as a service and allows developers to iterate quickly and safely. We allow you to easily flag your features and manage them from the LaunchDarkly dashboard. With LaunchDarkly, you can:
      • Roll out a new feature to a subset of your users (like a group of users who opt-in to a beta tester group), gathering feedback and bug reports from real-world use cases.
      • Gradually roll out a feature to an increasing percentage of users, and track the effect that the feature has on key metrics (for instance, how likely is a user to complete a purchase if they have feature A versus feature B?).
      • Turn off a feature that you realize is causing performance problems in production, without needing to re-deploy, or even restart the application with a changed configuration file.
      • Grant access to certain features based on user attributes, like payment plan (e.g., users on the 'gold' plan get access to more features than users on the 'silver' plan).
      • Disable parts of your application to facilitate maintenance, without taking everything offline.
    • LaunchDarkly provides feature flag SDKs for a wide variety of languages and technologies. Check out our documentation for a complete list.
    • Explore LaunchDarkly
      • launchdarkly.com for more information
      • docs.launchdarkly.com for our documentation and SDK reference guides
      • apidocs.launchdarkly.com for our API documentation
      • blog.launchdarkly.com for the latest product updates
    Original source
  • November 2025
    • No date parsed from source.
    • Detected by Releasebot:
      Nov 20, 2025

    LaunchDarkly

    LaunchDarkly Android SDK observability plugin Early Access

    LaunchDarkly unveils Android observability in early access with plugins for errors, logs, tracing, and session replay. Get started with configurable options, privacy masking, and step-by-step setup to instrument your Android app with the observability SDKs.

    Overview

    The LaunchDarkly observability features in the LaunchDarkly UI are publicly available in early access.
    The observability SDKs, implemented as plugins for LaunchDarkly server-side and client-side SDKs, are designed for use with the in-app observability features. They are currently available in Early Access, and APIs are subject to change until a 1.x version is released.
    If you are interested in participating in the Early Access Program for upcoming observability SDKs, sign up here.

    SDK quick links

    LaunchDarkly’s SDKs are open source. In addition to this reference guide, we provide source, API reference documentation, and a sample application:

    • SDK API documentation: Observability plugin API docs
    • GitHub repository: @launchdarkly/observability-android
    • Published module: Maven

    Prerequisites and dependencies

    This reference guide assumes that you are somewhat familiar with the LaunchDarkly Android SDK.
    The observability plugin is compatible with the Android SDK, version 5.9.0 and later.
    The LaunchDarkly Android SDK is compatible with Android SDK versions 21 and higher (Android 5.0, Lollipop).

    Get started

    Follow these steps to get started:

    • Install the plugin
    • Initialize the Android SDK client
    • Configure the plugin options
    • Configure additional instrumentations
    • Configure session replay
    • Explore supported features
    • Review observability data in LaunchDarkly

    Install the plugin

    LaunchDarkly uses a plugin to the Android SDK to provide observability.
    The first step is to make both the SDK and the observability plugin available as dependencies.
    Here’s how:

    implementation 'com.launchdarkly:launchdarkly-android-client-sdk:5.+'
    implementation 'com.launchdarkly:launchdarkly-observability-android:0.5.0'
    

    Then, import the plugin into your code:

    import com.launchdarkly.sdk.*;
    import com.launchdarkly.sdk.android.*;
    import com.launchdarkly.observability.plugin.Observability;
    import com.launchdarkly.sdk.android.integrations.Plugin;
    

    Initialize the client

    Next, initialize the SDK and the plugin.
    To initialize, you need your LaunchDarkly environment’s mobile key and the context for which you want to evaluate flags. This authorizes your application to connect to a particular environment within LaunchDarkly. To learn more, read Initialize the client in the Android SDK reference guide.

    Android observability SDK credentials
    The Android observability SDK uses a mobile key. Keys are specific to each project and environment. They are available from Project settings, on the Environments list. To learn more about key types, read Keys.
    Mobile keys are not secret and you can expose them in your client-side code without risk. However, never embed a server-side SDK key into a client-side application.

    Here’s how to initialize the SDK and plugin:

    LDConfig ldConfig = new LDConfig.Builder(AutoEnvAttributes.Enabled)
        .mobileKey("mobile-key-123abc")
        .plugins(Components.plugins().setPlugins(
            Collections.<Plugin>singletonList(new Observability(this.getApplication()))
        ))
        // other options
        .build();
    
    // You'll need this context later, but you can ignore it for now.
    LDContext context = LDContext.create("context-key-123abc");
    LDClient client = LDClient.init(this.getApplication(), ldConfig, context, 0);
    

    Configure the plugin options

    You can configure options for the observability plugin when you initialize the SDK. The plugin constructor takes an optional object with the configuration details.
    Here is an example:

    val ldConfig = LDConfig.Builder(AutoEnvAttributes.Enabled)
        .mobileKey("mobile-key-123abc")
        .plugins(
            Components.plugins().setPlugins(
                Collections.singletonList<Plugin>(
                    Observability(
                        this@BaseApplication,
                        Options(
                            resourceAttributes = Attributes.of(
                                AttributeKey.stringKey("serviceName"), "example-service"
                            )
                        )
                    )
                )
            )
        )
        .build()
    

    For more information on plugin options, read Configuration for client-side observability.

    Configure additional instrumentations

    To enable HTTP request instrumentation and user interaction instrumentation, add the following plugin and dependencies to your top-level application Gradle file.

    plugins {
        id 'net.bytebuddy.byte-buddy-gradle-plugin' version '1.+'
    }
    
    dependencies {
        // Android HTTP Url instrumentation
        implementation 'io.opentelemetry.android.instrumentation:httpurlconnection-library:0.11.0-alpha'
        byteBuddy 'io.opentelemetry.android.instrumentation:httpurlconnection-agent:0.11.0-alpha'
    
        // OkHTTP instrumentation
        implementation 'io.opentelemetry.android.instrumentation:okhttp3-library:0.11.0-alpha'
        byteBuddy 'io.opentelemetry.android.instrumentation:okhttp3-agent:0.11.0-alpha'
    }
    

    Configure session replay

    The Android SDK supports session replay, which captures snapshots of your app’s UI at regular intervals. This allows you to visually review user sessions in LaunchDarkly to better understand user behavior and diagnose issues.
    To enable session replay, add the ReplayInstrumentation to the instrumentations list when configuring the observability plugin.
    Here’s how:

    import com.launchdarkly.observability.replay.ReplayInstrumentation
    import com.launchdarkly.observability.replay.ReplayOptions
    import com.launchdarkly.observability.replay.PrivacyProfile
    
    val ldConfig = LDConfig.Builder(AutoEnvAttributes.Enabled)
        .mobileKey("mobile-key-123abc")
        .plugins(
            Components.plugins().setPlugins(
                listOf(
                    Observability(
                        this@BaseApplication,
                        Options(
                            resourceAttributes = Attributes.of(
                                AttributeKey.stringKey("serviceName"), "example-service"
                            ),
                            instrumentations = listOf(
                                ReplayInstrumentation()
                            )
                        )
                    )
                )
            )
        )
        .build()
    

    Session replay configuration options
    You can customize session replay behavior by passing a ReplayOptions object to the ReplayInstrumentation constructor:

    val ldConfig = LDConfig.Builder(AutoEnvAttributes.Enabled)
        .mobileKey("mobile-key-123abc")
        .plugins(
            Components.plugins().setPlugins(
                listOf(
                    Observability(
                        this@BaseApplication,
                        Options(
                            instrumentations = listOf(
                                ReplayInstrumentation(
                                    options = ReplayOptions(
                                        privacyProfile = PrivacyProfile(
                                            maskTextInputs = true,
                                            maskText = true,
                                            maskSensitive = true
                                        ),
                                        serviceName = "example-service",
                                        serviceVersion = "1.0.0",
                                        debug = false
                                    )
                                )
                            )
                        )
                    )
                )
            )
        )
        .build()
    

    The available configuration options are:

    • privacyProfile: Controls how UI elements are masked in the replay. To learn more, read Privacy options.
    • serviceName: A name for your service. Defaults to “observability-android”.
    • serviceVersion: Version of your service. Defaults to the SDK version.
    • backendUrl: The backend URL for sending replay data. Defaults to LaunchDarkly’s backend.
    • debug: Enables verbose logging when set to true. Defaults to false.

    Privacy options
    The PrivacyProfile class controls how UI elements are masked during session replay. Session replay for Android uses Jetpack Compose semantics to identify and mask UI elements. By default, all masking options are enabled to protect user privacy.
    Here’s how to configure privacy settings:

    import com.launchdarkly.observability.replay.PrivacyProfile
    import com.launchdarkly.observability.replay.MaskMatcher
    
    val ldConfig = LDConfig.Builder(AutoEnvAttributes.Enabled)
        .mobileKey("mobile-key-123abc")
        .plugins(
            Components.plugins().setPlugins(
                listOf(
                    Observability(
                        this@BaseApplication,
                        Options(
                            instrumentations = listOf(
                                ReplayInstrumentation(
                                    options = ReplayOptions(
                                        privacyProfile = PrivacyProfile(
                                            maskTextInputs = true,
                                            maskText = false,
                                            maskSensitive = true
                                        )
                                    )
                                )
                            )
                        )
                    )
                )
            )
        )
        .build()
    

    The available privacy options are:

    • maskTextInputs: When true, masks all text input fields including editable text and paste operations. Defaults to true.
    • maskText: When true, masks all text elements in the UI. Defaults to true.
    • maskSensitive: When true, masks sensitive views that contain password fields or text matching sensitive keywords. Defaults to true.

    Sensitive keywords
    When maskSensitive is enabled, the SDK automatically masks any Compose UI text or content descriptions containing predetermined keywords. Keyword matching is not case sensitive. For the current set of keywords, read PrivacyProfile.

    Common privacy configurations
    For maximum privacy (recommended for production):

    privacyProfile = PrivacyProfile(
        maskTextInputs = true,
        maskText = true,
        maskSensitive = true
    )
    

    For debugging or development, you can turn masking off:

    privacyProfile = PrivacyProfile(
        maskTextInputs = false,
        maskText = false,
        maskSensitive = false
    )
    

    For selective masking, which masks inputs and sensitive data but shows regular text:

    privacyProfile = PrivacyProfile(
        maskTextInputs = true,
        maskText = false,
        maskSensitive = true
    )
    

    Custom masking with MaskMatcher
    You can implement custom masking logic using the MaskMatcher interface. This allows you to define your own rules for which UI elements should be masked.
    Here’s how:

    import androidx.compose.ui.semantics.SemanticsNode
    import androidx.compose.ui.semantics.SemanticsProperties
    import androidx.compose.ui.semantics.getOrNull
    import com.launchdarkly.observability.replay.MaskMatcher
    import com.launchdarkly.observability.replay.PrivacyProfile
    
    // Create a custom matcher that masks elements with specific test tags
    class CustomTestTagMatcher : MaskMatcher {
        override fun isMatch(node: SemanticsNode): Boolean {
            val testTag = node.config.getOrNull(SemanticsProperties.TestTag)
            return testTag == "sensitive-data" || testTag == "pii"
        }
    }
    
    // Use the custom matcher in your privacy profile
    val ldConfig = LDConfig.Builder(AutoEnvAttributes.Enabled)
        .mobileKey("mobile-key-123abc")
        .plugins(
            Components.plugins().setPlugins(
                listOf(
                    Observability(
                        this@BaseApplication,
                        Options(
                            instrumentations = listOf(
                                ReplayInstrumentation(
                                    options = ReplayOptions(
                                        privacyProfile = PrivacyProfile(
                                            maskTextInputs = true,
                                            maskText = false,
                                            maskSensitive = true,
                                            maskAdditionalMatchers = listOf(CustomTestTagMatcher())
                                        )
                                    )
                                )
                            )
                        )
                    )
                )
            )
        )
        .build()
    

    The MaskMatcher interface requires implementing a single method:

    • isMatch(node: SemanticsNode): Boolean - Returns true if the node should be masked, false otherwise.
      Custom matchers should execute synchronously and avoid heavy operations to prevent performance issues during screen captures.

    For more information on session replay configuration, read Configuration for session replay.

    Explore supported features

    The observability plugin supports the following features. After the SDK and plugins are initialized, you can access these from within your application:

    • Configuration for client-side observability
    • Configuration for session replay
    • Errors
    • Logs
    • Metrics
    • Tracing

    Review observability data in LaunchDarkly

    After you initialize the SDK and observability plugin, your application automatically starts sending observability data back to LaunchDarkly, including errors and logs. You can review this information in the LaunchDarkly user interface. To learn how, read Observability.

    Original source
  • November 2025
    • No date parsed from source.
    • Detected by Releasebot:
      Nov 20, 2025

    LaunchDarkly

    LaunchDarkly AI SDK for Server-Side JavaScript

    LaunchDarkly unveils the AI SDK for server‑side JavaScript in alpha, bringing AI config, TrackedChat, and provider integrations to the platform. Ready for quick setup with defaults, configuration retrieval, and built‑in metrics while using LangChain or custom providers.

    LaunchDarkly AI SDK for Server-Side JavaScript

    ⛔️⛔️⛔️⛔️
    Caution
    This library is an alpha version and should not be considered ready for production use while this message is visible.

    ☝️☝️☝️☝️☝️☝️

    LaunchDarkly overview

    LaunchDarkly is a feature management platform that serves over 100 billion feature flags daily to help teams build better software, faster. Get started using LaunchDarkly today!

    Quick Setup

    • This assumes that you have already installed the LaunchDarkly Node.js (server-side) SDK, or a compatible edge SDK.
    • Install this package with npm or yarn:
      npm install @launchdarkly/server-sdk-ai --save
      # or yarn add @launchdarkly/server-sdk-ai
      
    • Create an AI SDK instance:
      // The ldClient instance should be created based on the instructions in the relevant SDK.
      const aiClient = initAi(ldClient);
      
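    Putting the pieces together, initialization might look like the sketch below, assuming the Node.js server-side SDK; the SDK key is a placeholder:

    import { init } from '@launchdarkly/node-server-sdk';
    import { initAi } from '@launchdarkly/server-sdk-ai';

    // Create the underlying SDK client, wait for it to be ready, then wrap it.
    const ldClient = init('sdk-key-123abc');
    await ldClient.waitForInitialization({ timeout: 10 });
    const aiClient = initAi(ldClient);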

    Setting Default AI Configurations

    When retrieving AI configurations, you need to provide default values that will be used if the configuration is not available from LaunchDarkly:

    • Fully Configured Default

      const defaultConfig = {
        enabled: true,
        model: {
          name: 'gpt-4',
          parameters: {
            temperature: 0.7,
            maxTokens: 1000
          }
        },
        messages: [
          {
            role: 'system',
            content: 'You are a helpful assistant.'
          }
        ]
      };
      
    • Disabled Default

      const defaultConfig = {
        enabled: false
      };
      

    Retrieving AI Configurations

    The config method retrieves AI configurations from LaunchDarkly with support for dynamic variables and fallback values:

    // Variables for template interpolation
    const aiConfig = await aiClient.config(aiConfigKey, context, defaultConfig, {
      myVariable: 'My User Defined Variable',
    });
    
    // Ensure configuration is enabled
    if (aiConfig.enabled) {
      const { messages, model, tracker } = aiConfig;
      // Use with your AI provider
    }
    

    TrackedChat for Conversational AI

    TrackedChat provides a high-level interface for conversational AI with automatic conversation management and metrics tracking:

    • Automatically configures models based on AI configuration
    • Maintains conversation history across multiple interactions
    • Automatically tracks token usage, latency, and success rates
    • Works with any supported AI provider (see AI Providers for available packages)

    Using TrackedChat

    Use the same defaultConfig from the retrieval section above:

    const chat = await aiClient.createChat('customer-support-chat', context, defaultConfig, {
      customerName: 'John'
    });
    
    if (chat) {
      // Simple conversation flow - metrics are automatically tracked by invoke()
      const response1 = await chat.invoke('I need help with my order');
      console.log(response1.message.content);
    
      const response2 = await chat.invoke("What's the status?");
      console.log(response2.message.content);
    
      // Access conversation history
      const messages = chat.getMessages();
      console.log(`Conversation has ${messages.length} messages`);
    }
    

    Advanced Usage with Providers

    For more control, you can use the configuration directly with AI providers. We recommend using LaunchDarkly AI Provider packages when available:

    Using AI Provider Packages
    import { LangChainProvider } from '@launchdarkly/server-sdk-ai-langchain';
    
    const aiConfig = await aiClient.config(aiConfigKey, context, defaultValue);
    
    // Create LangChain model from configuration
    const llm = await LangChainProvider.createLangChainModel(aiConfig);
    
    // Use with tracking
    const response = await aiConfig.tracker.trackMetricsOf(
      LangChainProvider.getAIMetricsFromResponse,
      () => llm.invoke(messages)
    );
    console.log('AI Response:', response.content);
    
    Using Custom Providers
    import { LDAIMetrics } from '@launchdarkly/server-sdk-ai';
    const aiConfig = await aiClient.config(aiConfigKey, context, defaultValue);
    // Define custom metrics mapping for your provider
    const mapCustomProviderMetrics = (response: any): LDAIMetrics => ({
      success: true,
      usage: {
        total: response.usage?.total_tokens || 0,
        input: response.usage?.prompt_tokens || 0,
        output: response.usage?.completion_tokens || 0,
      }
    });
    // Use with custom provider and tracking
    const result = await aiConfig.tracker.trackMetricsOf(mapCustomProviderMetrics, () => customProvider.generate({
      messages: aiConfig.messages || [],
      model: aiConfig.model?.name || 'custom-model',
      temperature: aiConfig.model?.parameters?.temperature ?? 0.5,
    }));
    console.log('AI Response:', result.content);
    

    Contributing

    We encourage pull requests and other contributions from the community. Check out our contributing guidelines for instructions on how to contribute to this SDK.

    About LaunchDarkly

    • LaunchDarkly is a continuous delivery platform that provides feature flags as a service and allows developers to iterate quickly and safely. We allow you to easily flag your features and manage them from the LaunchDarkly dashboard. With LaunchDarkly, you can:
      • Roll out a new feature to a subset of your users (like a group of users who opt-in to a beta tester group), gathering feedback and bug reports from real-world use cases.
      • Gradually roll out a feature to an increasing percentage of users, and track the effect that the feature has on key metrics (for instance, how likely is a user to complete a purchase if they have feature A versus feature B?).
      • Turn off a feature that you realize is causing performance problems in production, without needing to re-deploy, or even restart the application with a changed configuration file.
      • Grant access to certain features based on user attributes, like payment plan (e.g., users on the ‘gold’ plan get access to more features than users on the ‘silver’ plan).
      • Disable parts of your application to facilitate maintenance, without taking everything offline.
    • LaunchDarkly provides feature flag SDKs for a wide variety of languages and technologies. Check out our documentation for a complete list.
    • Explore LaunchDarkly
      • launchdarkly.com for more information
      • docs.launchdarkly.com for our documentation and SDK reference guides
      • apidocs.launchdarkly.com for our API documentation
      • blog.launchdarkly.com for the latest product updates
    Original source
