LaunchDarkly Release Notes
Last updated: Jan 23, 2026
- Jan 13, 2026
- Date parsed from source: Jan 13, 2026
- First seen by Releasebot: Jan 23, 2026
What's new
LaunchDarkly bundles new guides on the developer toolbar and experiment methodologies with a .NET (server-side) SDK 8.11 update. It also adds deep linking for sessions, Android privacy setting changes, and improved previews when testing flag changes.
Release notes
January 13, 2026: Publishes a topic on using the developer toolbar. Affected topics: Using the LaunchDarkly developer toolbar
January 12, 2026: Publishes a topic about choosing a statistical methodology for experiments. Affected topics: Choosing a statistical methodology
January 12, 2026: Updates the data saving mode EAP topic with information about using the .NET (server-side) SDK version 8.11. Affected topics: Data saving mode
January 9, 2026: Adds documentation for deep linking to session search queries and linking to specific sessions by ID with timestamps. Affected topics: Session replay
January 9, 2026: Updates Android observability SDK privacy settings: renames maskSensitive to maskBySemanticsKeywords, changes maskText default to false, and removes maskAdditionalMatchers option. Affected topics: Android SDK observability reference, Configuration for session replay
January 7, 2026: Updates the testing flag changes topic with information about previewing the percentage of contexts that will receive a variation. Affected topics: Testing changes to flag targeting
- December 2025
- No date parsed from source.
- First seen by Releasebot: Dec 19, 2025
Observability settings
LaunchDarkly launches early access observability features for sessions, errors, logs and traces. Enable observability from the billing page and tune filtering, rage-click sensitivity, sourcemaps, and auto-resolve with rule-based ingestion controls.
This feature is in Early Access
LaunchDarkly’s observability features are publicly available in early access. Enable observability in the billing page.
They currently require the LaunchDarkly observability SDKs and the JavaScript, React Web, or Vue SDK.
If you are interested in participating in the Early Access Program for our upcoming observability plugins for server-side SDKs, sign up here.
Overview
This topic describes the project-level settings available for sessions, errors, logs, and traces.
In the left navigation of the LaunchDarkly UI, expand Observe to view them.
To view or update project-level settings for these features:
- Click the project dropdown to open the project menu.
- Select Project settings.
- Click Observability. The Observability settings page appears.
The following sections describe the available settings.
Session settings
You can configure the following settings for sessions in your project:
- Excluded users. This setting excludes sessions from particular end users, based on their context key or email address.
- Rage clicks. These settings adjust the sensitivity for detecting “rage clicks,” or occasions when end users repeatedly click an element in your application, indicating frustration. You can set the Elapsed time, Radius, and Minimum clicks. These settings control whether a search for session replays that uses the has_rage_clicks attribute will return a given session. By default, LaunchDarkly considers end-user activity a rage click when there exists a two-second or longer period in which an end user clicks five or more times within a radius of eight pixels.
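As a rough illustration of how those thresholds interact, here is a minimal sketch of a detector; the types and function are assumptions for this example, not LaunchDarkly's implementation:

// Illustrative only: applying the default thresholds above
// (two seconds, eight pixels, five clicks).
interface Click {
  x: number;
  y: number;
  timestampMs: number;
}

function hasRageClick(
  clicks: Click[],
  elapsedMs = 2000,
  radiusPx = 8,
  minClicks = 5,
): boolean {
  const sorted = [...clicks].sort((a, b) => a.timestampMs - b.timestampMs);
  // For each click, count later clicks that stay inside the time window and radius.
  return sorted.some((first, i) => {
    const nearby = sorted
      .slice(i)
      .filter(
        (c) =>
          c.timestampMs - first.timestampMs <= elapsedMs &&
          Math.hypot(c.x - first.x, c.y - first.y) <= radiusPx,
      );
    return nearby.length >= minClicks;
  });
}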
Click Save to save your settings.
Error settings
You can configure the following settings for errors in your project:
- Sourcemaps. If you have uploaded sourcemaps, you can view them here.
- Auto-resolve stale errors. When enabled, this setting automatically sets the status of an error to “Resolved” after the time period you select.
Click Save to save your settings.
Filters
Filters help you manage the ingestion of sessions, errors, logs, or traces that you send to LaunchDarkly. This is useful if you know that certain signals are not relevant to your application or are not actionable. Any excluded signals do not count against your observability quotas.
To configure ingestion filters:
- Navigate to the Observability project settings page.
- From the Filters section, click Edit next to the type of signal you want to configure.
- (Optional) Configure filter rules to manage ingestion of sessions, errors, logs, or traces.
- (Optional) Set the Max ingest per minute. This setting rate limits the maximum number of data points ingested in a one-minute window. For example, you may configure a rate limit of 100 per minute. This lets you limit the number of data points recorded in case of a significant spike in use of your application.
- Click Save.
Rule evaluation order
Rules are evaluated in order, from top to bottom. Drag and drop the rules to reorder them to fit your project’s needs. The first enabled rule that matches the criteria applies its filter operation and rate.
Rules
To add a filter rule:
- Click Add rule.
- Set a rule name.
- Review the filter rule operation. Exclusion rules are used for sessions, errors, and logs. Inclusion rules are used for traces. You cannot change these settings.
- Set a query:
- Click the Filter… placeholder and select an attribute from the dropdown. For example, you can filter sessions based on active_length.
- Select an operator from the dropdown. For example, you can filter by greater than, >.
- Enter a value for your expression. For example, you can enter 8s for eight seconds.
- Set the rule's rate (%). For each signal that LaunchDarkly receives, it makes a randomized decision, according to the rule's rate, about whether to apply the include or exclude filter operation.
- For example, if an exclusion rule has a 20% rate, then 20% of the signals that match the rule's query are excluded and the remaining 80% are included.
- Set the rule On or Off to enable or disable the rule.
- Click Save.
Records with no matching rules
If a signal does not match any rule's query, LaunchDarkly includes it.
Consider an example with three log filter rules. Here is how rule order controls rule evaluation:
- Logs with level=ERROR and service_name=example-service are always included: the first rule matches, so the second and third rules are not reached.
- Logs with level=DEBUG and service_name=example-service are always excluded: the first rule is skipped, the second rule matches, so the third rule is not reached.
- Logs with level=INFO and service_name=example-service are 80% excluded and 20% included: the first and second rules are skipped, and the third rule matches.
- Logs with level=INFO and service_name=new-service are always included: all three rules are skipped, so the log is ingested.
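To make the ordering and rate behavior concrete, here is a minimal sketch of the decision logic; the types, helper, and example rule are assumptions for illustration, not LaunchDarkly's implementation:

// Illustrative only: the first enabled, matching rule decides, at its rate.
interface FilterRule {
  name: string;
  enabled: boolean;
  operation: 'include' | 'exclude';
  matches: (signal: Record<string, string>) => boolean;
  ratePercent: number; // 0-100
}

// Returns true if a signal should be ingested.
function shouldIngest(signal: Record<string, string>, rules: FilterRule[]): boolean {
  for (const rule of rules) {
    // Disabled rules and rules whose query does not match are skipped.
    if (!rule.enabled || !rule.matches(signal)) {
      continue;
    }
    // The first enabled, matching rule applies its operation at its rate.
    const applies = Math.random() * 100 < rule.ratePercent;
    return rule.operation === 'exclude' ? !applies : applies;
  }
  // Signals with no matching rule are included.
  return true;
}

// Example: an exclusion rule at an 80% rate excludes 80% of matching INFO logs.
const exampleRules: FilterRule[] = [
  {
    name: 'Drop most INFO logs',
    enabled: true,
    operation: 'exclude',
    matches: (s) => s.level === 'INFO' && s.service_name === 'example-service',
    ratePercent: 80,
  },
];
console.log(shouldIngest({ level: 'INFO', service_name: 'example-service' }, exampleRules));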
- December 2025
- No date parsed from source.
- First seen by Releasebot: Dec 16, 2025
Meet the new navigation in LaunchDarkly
LaunchDarkly unveils a cleaner, more focused navigation with collapsible sections, simplified visuals, faster shortcuts, a refined Create action, and improved search. The update reduces visual noise and helps teams move faster, now available to all users.
Here’s what’s new
A cleaner, more focused navigation reduces noise and helps you move faster.
It’s been about a year and a half since we introduced a new and improved LaunchDarkly experience, a major redesign that unified environments, improved navigation, and created more clarity across the app.
Since then, our platform has expanded, and our navigation has expanded with it. Over time, the number of items and icons multiplied, and things started to feel a little overwhelming. Our customers provided consistent commentary:
“I can’t see what matters most.”
“The shortcuts are buried.”
“It’s powerful, but it’s a lot.”
We prioritized a refresh based on this feedback. This update makes the navigation cleaner, more focused, and better aligned with how you actually work. It helps keep what’s important in view and gives you back control of your screen real estate.
Here’s what’s new:
- Collapsible sections so you can keep open the sections you interact with the most and hide what you don’t. Your layout will stay exactly how you leave it, even on page refresh.
- Simplified visuals with fewer icons and improved spacing, making it easier to scan the navigation and find what you’re looking for.
- Shortcuts moved up for faster access to your most frequently visited flags. If you haven’t tried this feature before, Shortcuts let you bookmark filtered views of your flags dashboard for quick access to the flags you work with most.
- A refined Create action that remains easy to find but no longer competes with key actions on the page.
- An improved search experience that offers quick, keyboard-friendly access to any part of the platform.
These changes are now available to all LaunchDarkly users.
This work builds on the progress we’ve made over the past year and a half. It reduces visual noise, adds flexibility, and helps teams move faster and stay focused. We’re excited for you to experience it, and we’d love to hear what you think. Share your reaction with us at [email protected].
- December 2025
- No date parsed from source.
- First seen by Releasebot: Dec 12, 2025
Flag lifecycle settings
LaunchDarkly adds configurable flag lifecycle settings to control when flags are archived. Changes apply project-wide and take effect immediately, with Insights showing pre and post-update data. Learn how to set minimum age, prerequisites, variations, and archiving scope.
Overview
This topic describes flag lifecycle settings and how to update them. Flag lifecycle settings let you customize the criteria that LaunchDarkly uses to determine when flags are ready to be archived. Archiving flags is a good practice to help clean up flags you no longer need.
Custom lifecycle settings apply only to critical environments. LaunchDarkly will not evaluate non-critical environments for a flag’s archive-readiness.
Update lifecycle settings
When you update the Lifecycle settings for a project, the changes apply to all flags in your project. This means some flags previously considered ready to archive may no longer be ready to archive, and some flags previously considered not yet ready to archive may now be ready. On the Flags list, these changes apply immediately.
In Launch Insights, data from before you update the settings reflects your older definition, and data from after you update the settings reflects your newer definition.
To customize flag lifecycle settings:
- Click the project dropdown. The project menu appears:
- Click Project settings.
- Click Lifecycle settings. The flag lifecycle settings panel appears:
- Customize the criteria that LaunchDarkly uses to determine when flags are ready to be archived:
- Select the Minimum flag age. We recommend setting this to at least as long as it takes to release most features in your critical environments.
- Select Before code removal, there should be no targeting changes for at least to set how long a flag’s targeting rules must be unchanged before the flag is ready to archive.
- Select Before archiving, there should be no evaluations for at least to set how long a flag should have no evaluations before the flag is ready to archive.
- Select whether a flag may be a prerequisite for other flags. We recommend checking the Must not be a prerequisite for other flags checkbox.
- Select whether flags may serve one or many variations and still be ready to archive. We recommend checking the Must be serving one variation checkbox.
- Select whether the archival checks apply to all flags or only temporary flags. We recommend checking the Must be temporary checkbox.
- Click Save.
To reset the flag lifecycle settings, navigate to the flag lifecycle settings panel and select Reset to default.
- Dec 9, 2025
- Date parsed from source: Dec 9, 2025
- First seen by Releasebot: Dec 12, 2025
Collecting user feedback in your app with feature flags
LaunchDarkly unveils a built-in user feedback tool that ties sentiment and session data to feature flags. The guide walks you through enabling tracking, adding a feedback UI, and viewing results in the dashboard for faster, data-driven decisions.
If you’re a builder, you understand that it’s crucial to observe how users respond to new features, interface changes, and experimental variations.
Feedback is an essential part of the product development lifecycle, helping teams validate decisions and iterate faster. The new user feedback tool makes it convenient for you and your team to enable metric tracking for specific feature flags within minutes.
This tutorial will guide you through enabling and viewing valuable user feedback metrics directly from the LaunchDarkly dashboard. A sample app is provided for you to clone and follow along on your own machine, but you can skip ahead to the “Install the SDK and implement user feedback function” section if you prefer to use your own app. This feature is available for the JavaScript and TypeScript client-side SDKs. React components are also provided for convenient integration.
Why you should think about this today
Learning how to track important metrics such as user feedback is often overlooked. It can also be a struggle to connect user sentiment directly to a feature flag or existing experimentation.
Perhaps your problem is relying on generic feedback tools that lack context about which features users are experiencing. Ideally, you want to see a list of feedback tied to a feature that has shipped behind a flag.
Don’t spend more time guessing where a user dropped off when this new LaunchDarkly feature can provide more contextual feedback and live session replay. The qualitative feedback helps narrow down the decision-making process so that your team can ship features faster, catch issues early, or roll back a variation and adjust a specific prompt. It is also a great way to understand how different users behave, whether you’re working with gamers, beta testers, premium users, or internal users.
Requirements
- LaunchDarkly account. If you haven’t done so already, create a free account
- An app that uses client side flags. A sample app will be provided below
- A modern Node.js version. I recommend the node version manager (nvm) if you don’t have Node.js installed already
- Bonus: LaunchDarkly feature flags set up and enabled already
If you would like to follow along with this tutorial with our starter code, go ahead and clone this project:
$ git clone git@github.com:launchdarkly-labs/ld-feedback-tutorial.git
Create the feature flag
If you and your team do not have an existing feature flag already, go to the LaunchDarkly dashboard and create your first feature flag. Give it a name such as “feedback demo”.
You will be directed to the dashboard to configure and toggle your flag ON. After you’ve done so, head back to your developer environment. Make an .env file using the command cp .env.example .env and add the following lines:
VITE_LAUNCHDARKLY_CLIENT_SIDE_ID=XXXXXXXX
VITE_OBSERVABILITY_PROJECT_ID="<YOUR_PROJECT_NAME_ON_LAUNCHDARKLY>"
Be sure to replace the values above with the project name on the LaunchDarkly dashboard and the client-side ID from the Test environment, as seen in the screenshot below.
Install the SDK and implement user feedback function
Click on the Feedback section at the top of the page.
Click on the View setup guide button to find the SDK code to copy and paste for your project.
In order to keep your working directory organized, make a new file named sendFeedback.ts in the src subdirectory. Copy and paste the SDK from the feedback setup guide to this new file.
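Before pasting the snippet in, it may help to see the general shape such a module takes. The following is a hypothetical sketch that assumes the JavaScript client-side SDK's track() method; the event key and payload shape are invented for illustration, so use the setup guide's code in practice:

// Hypothetical sketch only: the real snippet comes from the setup guide.
import type { LDClient } from 'launchdarkly-js-client-sdk';

export type Sentiment = 'positive' | 'negative';

export interface Feedback {
  message: string;
  sentiment: Sentiment;
  prompt?: string;
}

export function sendFeedback(client: LDClient, feedback: Feedback): void {
  // track() records a custom event that LaunchDarkly associates with the
  // current context (and, with observability enabled, the current session).
  client.track('user-feedback', feedback);
}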
This module sends the user feedback to LaunchDarkly using the client-side SDK. The LaunchDarkly client then tracks these events for the developer to analyze relevant data such as sentiment, feedback, prompt, and the user’s session data.
Create the feedback modal file
Here’s the fun part - implementing the buttons and functions necessary to collect feedback from users.
Create a new file within the src subdirectory and name it feedbackPopover.tsx.
Click on the React examples tab of the setup guide to access the code snippet. This tutorial will implement a feedback popover that allows users to type in a message and provide a thumbs up or thumbs down to describe their experience.
Let’s alter the FeedbackPopover function a bit. Scroll down to the function and add an export keyword at the beginning so that the function can be used in the main application.
Navigate back to the app.tsx file to import the newly created file:
import { FeedbackPopover } from './feedbackPopover.tsx'
Notice the comment for the FeedbackPopover component that utilizes the ldClient. Copy the line and scroll down to the bottom of the page where the code returns JSX. Replace the placeholder comment {/* Add the feedback here */} with the FeedbackPopover component so it looks like the code snippet below:
return (
  <>
    ...
    <div>
      <h2>Send Feedback Below</h2>
      <div>
        <FeedbackPopover ldClient={client} />
      </div>
    </div>
  </>
);
Great! Now the FeedbackPopover button will appear on the website. This feedback widget can be customized later on so that you can collect user feedback tied to feature flag variations.
This React file defines reusable, stateless icon components. The feedback popover’s visibility is controlled by the isOpen prop. The user’s feedback is stored, a sentiment is determined accordingly, and no input validation is required.
Enable the client SDK in production environments
If you are working in the Production environment, you might be prompted to enable the JavaScript Client SDK to access the flag’s key.
You can toggle this on and off in the flag’s dashboard in the bottom right-hand corner under the Advanced controls section.
Test out the user feedback function
Open up a terminal window and install the dependencies in the working directory with npm i. Run the dev server with npm run dev.
You should get output like this:
VITE v6.0.11 ready in 413 ms
➜ Local: http://localhost:5173/
➜ Network: use --host to expose
➜ press h + enter to show help
Open up the http://localhost:5173/ page to see the following screen:
Run flag evaluations
The starter code is presented to you so all you have to do is try it out! If you named your flag “feedback demo”, the flag key here should already be set. If not, you’ll want to change this code to reflect your flag key name.
Click the Feedback button and add some sample data to make sure the app is working. I’ll make 3 evaluations to populate the flag with data to examine.
Wait a minute before you can see your evaluation and feedback results in the dashboard.
Assess the user feedback on dashboard
Confirm that the feedback went through. Go back to the Feedback dashboard and refresh the page. See the latest comment that was created and how the sentiment was analyzed.
To see the evaluations, switch over to the Audience tab.
The nifty part of this new feature is examining the data. LaunchDarkly allows you to filter by sentiments, variations, and a specific timeframe.
However, filtering by different dates only applies to the variation statistics in the chart above and the table below.
Data regarding the sentiment distribution and evaluations is only shown in the table.
Go ahead and play around with the dashboard to filter the data according to the project’s needs.
Create different sessions and customize user behavior
Every project has a different use case - perhaps your audience plays video games, or they are a subscribing member. You can create different contexts and customize them specifically for your use case.
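For example, a customized context might look like this sketch, which assumes the JavaScript client-side SDK; the custom attributes are illustrative only:

// A sketch of customizing contexts for your audience; plan and playsGames
// are invented attribute names for illustration.
import { initialize } from 'launchdarkly-js-client-sdk';

const client = initialize('YOUR_CLIENT_SIDE_ID', {
  kind: 'user',
  key: 'user-key-123',
  name: 'Sample Player',
  plan: 'premium',
  playsGames: true,
});

// Later, switch to a different context to observe another kind of user.
client.identify({
  kind: 'user',
  key: 'user-key-456',
  plan: 'free',
  playsGames: false,
});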
LaunchDarkly also allows you to create different sessions to observe the user’s behavior.
Click on a user key and see the user context.
Observe the session replay
If you have observability enabled, then you can see how the person interacts with your website and watch them enter the feedback. Click on the blue play button under the “Session” column and observe how the user interacts with the website before sending in a comment for feedback evaluation.
What’s next for collecting metrics and user feedback?
Congratulations on taking the next step in building more informed, user-driven experiences. The built-in qualitative user feedback tool allows you and your team to quickly and conveniently enable and track metrics for specific feature flags without interrupting your development process.
Now, every feature rollout can capture both measurable impact and meaningful user sentiment, helping your team make faster, smarter decisions backed by real insight. It’s time to formulate better questions to learn from these new features.
Send me an email at [email protected], connect with me on LinkedIn, or follow @diane.dot.dev on social media to let us know what you’re building.
- December 2025
- No date parsed from source.
- First seen by Releasebot: Dec 5, 2025
LLM playground
LaunchDarkly unveils a secure LLM playground for pre production testing. Create, run, and compare evaluations with automatic scoring in a sandbox, manage API keys, view detailed run results, and prep models before production deployment.
Overview
This topic explains how to use the LLM playground to create and run evaluations that measure the quality of model outputs before deployment. The playground provides a secure sandbox where you can experiment with prompts, models, and parameters. You can view model outputs, attach evaluation criteria to assess quality, and use a separate LLM to automatically score or analyze completions according to a rubric you define.
The playground helps AI and ML teams validate quality before deploying to production. It supports fast, controlled testing so you can refine prompts, models, and parameters early in the development cycle. The playground also establishes the foundation for offline evaluations, creating a clear path from experimentation to production deployment within LaunchDarkly.
The playground complements online evaluations. Online evaluations measure quality in production using attached judges. The playground focuses on pre-production testing and refinement.
Who uses the playground
The playground is designed for AI developers, ML engineers, product engineers, and PMs building and shipping AI-powered products. It provides a unified environment for evaluating models, comparing configurations, and promoting the best-performing variations.
Use the playground to:
- Create and run evaluations that test model outputs.
- Measure model quality using criteria such as factuality, groundedness, or relevance.
- Use an evaluator LLM to automatically score or analyze completions.
- Adjust prompts, parameters, or variables to improve performance.
- Manage and secure provider credentials in the Manage API keys section.
Each evaluation can generate multiple runs. When you change an evaluation and create a new run, earlier runs remain available with their original data.
How the playground works
The playground uses the same evaluation framework as online evaluations but runs evaluations in a controlled sandbox. Each evaluation contains messages, variables, model parameters, and optional evaluation criteria. When you run an evaluation, the playground records the model response, token usage, latency, and scores for each criterion.
Teams can define reusable evaluations that combine prompts, models, parameters, and variables or context. You can run each evaluation to generate completions and view structured results. You can also attach a secondary LLM to automatically score or analyze each response.
Data in the playground is temporary. Test data is deleted after 60 days unless you save the evaluation. LaunchDarkly integrations securely store provider credentials and remove them at the end of each session.
Each playground session includes:
- Evaluation setup: messages, parameters, variables, and provider details
- Run results: model outputs, token counts, latency, and evaluation scores
- Isolation: evaluations cannot modify production configurations
- Retention: data expires after 60 days unless you save the evaluation
When you click Save and run, LaunchDarkly securely sends your configuration to the model provider and returns the model output and evaluation results as a new run.
Example structured run output
{
  "accuracy": { "score": 0.9, "reason": "Accurate and complete answer." },
  "groundedness": { "score": 0.85, "reason": "Mostly supported by source context." },
  "latencyMs": 1200,
  "inputTokens": 420,
  "outputTokens": 610
}
Create and manage evaluations
You can use the playground to create, edit, and delete evaluations. Each evaluation can include messages, model parameters, criteria, and variables.
Create an evaluation
- Navigate to your project.
- In the left navigation, click Playground.
- Click New evaluation. The “Input” tab opens.
- Click Untitled and enter a name for the evaluation.
- Select a model provider and model.
- Add or edit messages for the System, User, and Assistant roles. These messages define how the model interacts in a conversation:
- System provides context or instructions that set the model’s behavior and tone.
- User represents the input prompt or question from an end user.
- Assistant represents the model’s response. You can include an example or leave it blank to view generated results.
- Attach one or more evaluation criteria. Each criterion defines a measurement, such as factuality or relevance, and includes configurable options such as threshold or control prompt.
- (Optional) Add variables to reuse dynamic values, such as {{productName}} or context attributes like {{ldContext.city}}.
- (Optional) Attach a scoring LLM to automatically evaluate each output.
- Click Save and run. The playground creates a new run and adds an output row with model response and evaluation scores.
Edit an evaluation
You can edit an evaluation at any time. Changes apply to new runs only. Earlier runs retain their original data.
To edit an evaluation:
- In the Playground list, click the evaluation you want to edit.
- Update messages, model, parameters, variables, or criteria.
- Click Save and run to generate a new run with updated evaluation data.
Delete an evaluation
To delete an evaluation:
- In the Playground list, find the evaluation you want to delete.
- Click the three-dot overflow menu.
- Click Delete evaluation and confirm.
Deleting an evaluation removes its configuration and associated runs from the playground.
View evaluation runs
The Output tab shows all runs for an evaluation.
Each run includes:
- Evaluation summary
- Scores for each criterion
- Input, output, and total tokens used
- Latency
Select a run to view:
- Raw output: the exact text or JSON object returned by the model
- Evaluation results: scores and reasoning for each evaluation criterion
Runs update automatically when new results are available.
Manage API keys
The playground uses the provider credentials stored in your LaunchDarkly project to run evaluations. You can add or update these credentials from the Manage API keys section to ensure your evaluations use the correct model access.
To manage provider API keys:
- In the upper-right corner of the playground page, click Manage API keys to open the “Integrations” page with the “AI Config Test Run” integration selected.
- Click Add integration.
- Enter a name.
- Select a model provider.
- Enter the API key for your selected provider.
- Read the Integration Terms and Conditions and check the box to confirm.
- Click Save configuration.
Only one active credential per provider is supported per project. LaunchDarkly does not retain API keys beyond the session.
Privacy
The playground may send prompts and variables to your configured model provider for evaluation. LaunchDarkly does not store or share your inputs, credentials, or outputs outside your project.
If your organization restricts sharing personal data with external providers, ensure that prompts and variables exclude sensitive information.
To learn more, read AI Configs and information privacy.
- December 2025
- No date parsed from source.
- First seen by Releasebot: Dec 1, 2025
Introducing Audiences: See who your flags are really impacting
LaunchDarkly unveils Flag Audiences and session replay, letting you see who evaluated a flag, filter by variation, and replay sessions from the Observability SDK. This ties evaluations to users and observability data for faster incident resolution. Available to Guardian customers for Guarded Releases and standard rollouts.
Audience view provides instant visibility into who evaluated your feature and what they experienced.
TL;DR
- LaunchDarkly now lets you see who evaluated your flag with the new Flag Audiences view.
- View all evaluations for a flag, switch between context kinds, and filter by variation.
- Customers using the Observability SDK can access session replays to see exactly what users experienced, without leaving LaunchDarkly.
Introducing Flag Audiences and Sessions
When incidents happen or performance drops after a rollout, one of the first questions teams ask is: “Who was impacted?” Until now, finding that answer meant manually cross-referencing logs, traces, and flag histories across multiple tools, which can be slow and error-prone. That’s why we’re excited to announce Audiences, a new capability that connects feature flag evaluations directly to the users and sessions behind them. This lets you trace impact in real time, link flags to observability data, and resolve issues faster.
Know exactly who saw your feature
When a flag changes, LaunchDarkly now shows you who evaluated it, whether that’s a specific user, account, or device. From the new Audience tab in your flag’s dashboard, you can:
- View all users or contexts that evaluated the flag.
- Filter by variation (control, treatment, or custom).
- See when they last evaluated the flag.
- (If you have the Observability SDK downloaded) Watch their most recent session replay to understand what happened before and after the evaluation.
This gives you a full, traceable record of flag activity and an actionable view of your rollout audience.
Example: Investigating a regression
Imagine you’ve rolled out a new feature behind a flag, and your monitoring system reports an increase in 500 (Internal Server Error) responses. With Audiences, you can open the flag, filter for users served the “treatment” variation, and instantly see which sessions encountered errors. You can even replay those sessions to see what actions led up to the issue, all from within LaunchDarkly.
Why knowing your flag audience matters
Benefits of the Audience feature include:
- Faster incident resolution. Quickly identify which users or sessions were affected by a flag change and why, with no manual log digging or tool switching required.
- Deeper visibility, fewer silos. See flag evaluations, user sessions, and observability data together in one unified view for instant context during investigations.
- Smarter collaboration across teams. Give SREs, engineers, and PMs a shared source of truth for debugging, postmortems, and release validation.
The Audience view is now available to all LaunchDarkly Guardian customers for both Guarded Releases and regular flag rollouts.
Want to see the modern way of shipping code for yourself? Learn how Guarded Releases works.
- Dec 1, 2025
- Date parsed from source: Dec 1, 2025
- First seen by Releasebot: Jan 23, 2026
Introducing Audiences: See who your flags are really impacting
Instantly trace who saw your flag and what happened next.
Rachel Groberman
- Dec 1, 2025
- Date parsed from source: Dec 1, 2025
- First seen by Releasebot: Dec 7, 2025
Playgrounds for AI Configs
LaunchDarkly launches Playgrounds to test and compare AI Configs without code. Define reusable evaluations with prompts, models, and variables, run on demand, and auto score with a separate LLM. Future updates promise bulk evaluations and dataset uploads.
Learn more
You can now use Playgrounds in LaunchDarkly to quickly test and compare AI Configs without writing any custom code. Playgrounds let teams define reusable evaluations that bundle prompts, models, parameters, and variables, then run them on demand to generate completions and inspect results in a structured, repeatable way.
Playgrounds also support automatic scoring: attach a separate LLM to evaluate each completion using your own rubric (for example, correctness, relevance, or toxicity). This shortens the iteration loop and makes it easier to understand which configuration performs best before you roll it out.
Future updates will include bulk evaluations, dataset uploads, and more advanced comparison tools, all powered by the same evaluation service underlying Playgrounds.
- November 2025
- No date parsed from source.
- First seen by Releasebot: Nov 20, 2025
LaunchDarkly AI SDK for Server-Side JavaScript
LaunchDarkly unveils the AI SDK for server‑side JavaScript in alpha, bringing AI config, TrackedChat, and provider integrations to the platform. Ready for quick setup with defaults, configuration retrieval, and built‑in metrics while using LangChain or custom providers.
LaunchDarkly AI SDK for Server-Side JavaScript
⛔️⛔️⛔️⛔️
Caution
This library is an alpha version and should not be considered ready for production use while this message is visible.
☝️☝️☝️☝️☝️☝️
LaunchDarkly overview
LaunchDarkly is a feature management platform that serves over 100 billion feature flags daily to help teams build better software, faster. Get started using LaunchDarkly today!
Quick Setup
- This assumes that you have already installed the LaunchDarkly Node.js (server-side) SDK, or a compatible edge SDK.
- Install this package with npm or yarn:

npm install @launchdarkly/server-sdk-ai --save
# or
yarn add @launchdarkly/server-sdk-ai

- Create an AI SDK instance:
import { initAi } from '@launchdarkly/server-sdk-ai';

// The ldClient instance should be created based on the instructions in the relevant SDK.
const aiClient = initAi(ldClient);
Setting Default AI Configurations
When retrieving AI configurations, you need to provide default values that will be used if the configuration is not available from LaunchDarkly:
Fully Configured Default
const defaultConfig = {
  enabled: true,
  model: {
    name: 'gpt-4',
    parameters: { temperature: 0.7, maxTokens: 1000 },
  },
  messages: [{ role: 'system', content: 'You are a helpful assistant.' }],
};
Disabled Default
const defaultConfig = { enabled: false };
Retrieving AI Configurations
The config method retrieves AI configurations from LaunchDarkly with support for dynamic variables and fallback values:
const aiConfig = await aiClient.config(
  aiConfigKey,
  context,
  defaultConfig,
  { myVariable: 'My User Defined Variable' } // Variables for template interpolation
);

// Ensure configuration is enabled
if (aiConfig.enabled) {
  const { messages, model, tracker } = aiConfig;
  // Use with your AI provider
}
TrackedChat for Conversational AI
TrackedChat provides a high-level interface for conversational AI with automatic conversation management and metrics tracking:
- Automatically configures models based on AI configuration
- Maintains conversation history across multiple interactions
- Automatically tracks token usage, latency, and success rates
- Works with any supported AI provider (see AI Providers for available packages)
Using TrackedChat
// Use the same defaultConfig from the retrieval section above
const chat = await aiClient.createChat('customer-support-chat', context, defaultConfig, {
  customerName: 'John',
});

if (chat) {
  // Simple conversation flow - metrics are automatically tracked by invoke()
  const response1 = await chat.invoke('I need help with my order');
  console.log(response1.message.content);

  const response2 = await chat.invoke("What's the status?");
  console.log(response2.message.content);

  // Access conversation history
  const messages = chat.getMessages();
  console.log(`Conversation has ${messages.length} messages`);
}
Advanced Usage with Providers
For more control, you can use the configuration directly with AI providers. We recommend using LaunchDarkly AI Provider packages when available:
Using AI Provider Packages

import { LangChainProvider } from '@launchdarkly/server-sdk-ai-langchain';

const aiConfig = await aiClient.config(aiConfigKey, context, defaultValue);

// Create LangChain model from configuration
const llm = await LangChainProvider.createLangChainModel(aiConfig);

// Use with tracking
const response = await aiConfig.tracker.trackMetricsOf(
  LangChainProvider.getAIMetricsFromResponse,
  () => llm.invoke(messages),
);
console.log('AI Response:', response.content);

Using Custom Providers

import { LDAIMetrics } from '@launchdarkly/server-sdk-ai';

const aiConfig = await aiClient.config(aiConfigKey, context, defaultValue);

// Define custom metrics mapping for your provider
const mapCustomProviderMetrics = (response: any): LDAIMetrics => ({
  success: true,
  usage: {
    total: response.usage?.total_tokens || 0,
    input: response.usage?.prompt_tokens || 0,
    output: response.usage?.completion_tokens || 0,
  },
});

// Use with custom provider and tracking
const result = await aiConfig.tracker.trackMetricsOf(mapCustomProviderMetrics, () =>
  customProvider.generate({
    messages: aiConfig.messages || [],
    model: aiConfig.model?.name || 'custom-model',
    temperature: aiConfig.model?.parameters?.temperature ?? 0.5,
  }),
);
console.log('AI Response:', result.content);

Contributing
We encourage pull requests and other contributions from the community. Check out our contributing guidelines for instructions on how to contribute to this SDK.
About LaunchDarkly
- LaunchDarkly is a continuous delivery platform that provides feature flags as a service and allows developers to iterate quickly and safely. We allow you to easily flag your features and manage them from the LaunchDarkly dashboard. With LaunchDarkly, you can:
- Roll out a new feature to a subset of your users (like a group of users who opt-in to a beta tester group), gathering feedback and bug reports from real-world use cases.
- Gradually roll out a feature to an increasing percentage of users, and track the effect that the feature has on key metrics (for instance, how likely is a user to complete a purchase if they have feature A versus feature B?).
- Turn off a feature that you realize is causing performance problems in production, without needing to re-deploy, or even restart the application with a changed configuration file.
- Grant access to certain features based on user attributes, like payment plan (e.g., users on the ‘gold’ plan get access to more features than users on the ‘silver’ plan).
- Disable parts of your application to facilitate maintenance, without taking everything offline.
- LaunchDarkly provides feature flag SDKs for a wide variety of languages and technologies. Check out our documentation for a complete list.
- Explore LaunchDarkly
- launchdarkly.com for more information
- docs.launchdarkly.com for our documentation and SDK reference guides
- apidocs.launchdarkly.com for our API documentation
- blog.launchdarkly.com for the latest product updates