- Sep 18, 2025
- Parsed from source: Sep 18, 2025
- Detected by Releasebot: Sep 27, 2025
Changelog: Release Policies (Early Access), Observability in self-serve, and Logs and Traces improvements
LaunchDarkly rolls out Release Policies in Early Access to simplify default releases, adds Observability in self-serve plans, and ships major Logs & Traces improvements (Live Mode, editable graphs) plus UI refreshes and bug fixes.
We’ve introduced Release Policies in Early Access to make Guardian easier to use as the default release method, one of the top requests from Guardian customers. The first version lets you:
- Define a scope for a policy (currently environments, with more options coming)
- Set a preferred release method (for example: “On production, Guarded Releases is the default”)
- Choose automatic or manual rollbacks
- Configure minimum sample size requirements (helpful for lower-traffic environments)
Release Policies will expand over the next few months alongside Guardrail Metrics, Global Metric Thresholds, and Release Templates, working toward our goal of zero-configuration Guardian rollouts. Early Access is limited; reach out to your account team if you’d like to participate.
Observability now available in self-serve plans
Observability (Session Replays, Errors, Logs, Traces) is now included in Foundation self-serve plans on a usage-based model. New customers can now add Observability directly in checkout — no sales conversation required — and existing customers will also have access. Highlights:
- New trial onboarding flow for observability plug-ins and SDK setup
- Updated pricing page with a usage calculator
- Checkout support for usage-based or annual commit billing

With Observability in self-serve plans, teams can quickly debug issues, monitor performance, and get more value from feature flags — all without leaving LaunchDarkly. See the documentation to get started, or, if you are a self-serve customer, add Observability in checkout.
Logs and Traces improvements
We’ve shipped two major updates to make debugging and analysis in Logs and Traces faster and more powerful:
- Live Mode: stream events in real time, no manual refresh required. Perfect for validating new instrumentation or monitoring fast-moving incidents.
- Editable graph views: create and edit graphs directly in Logs and Traces without setting up a dashboard first. Switch graph types, run aggregations, and add up to two custom graphs per view.
Together, these updates help you move from “something looks off” to “here’s what’s happening and where” with fewer clicks, less overhead, and more confidence in what you’re seeing. Learn more in the Logs and Traces documentation.
Other improvements
- Refreshed the Metric Groups UI with standardized components for a cleaner experience
- Added a new metric details tab showing configuration and connections upfront
- Introduced a tabbed connections table that splits connections across Experiments, Guarded Releases, and Metric Groups
- Multi-armed bandits dashboard refresh
- Refinements to the holdout builder and results view for Experimentation
- Improved metric accuracy by including events only from the selected environment
Bug fixes
- Updated error notifications for context kind creation
- Fixed a bug with newly created rules in AI Configs targeting
- Fixed display of empty stack traces
- Fixed metric preview edge cases where event data could show incomplete results
- Sep 4, 2025
- Parsed from source: Sep 4, 2025
- Detected by Releasebot: Sep 27, 2025
Changelog: Relative difference charts, custom metric filters, AI Config approval updates, and more
Guarded Releases adds relative difference charts, custom-metric filtering, smarter OTel filtering, AI Config approvals, and multiple comparison corrections. Also brings a System theme, Scoped Go SDK, an experimentation sandbox, visualization tweaks, and bug fixes.
Relative difference charts
We’ve added relative difference charts to Guarded Releases, making it easier to see how metrics shift during a rollout. Instead of abstract probabilities, you’ll now see a direct comparison of your metric across control and treatment variations, answering the real question: “Did errors go up compared to before?” This view is more intuitive, familiar to analysts and engineers, and makes Guardian rollouts easier to understand or explain to your team.
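For reference, the relative difference shown here is the treatment's change expressed as a fraction of the control's value. The short sketch below shows the standard formula; it is illustrative only, the error-rate numbers are made up, and the chart's exact computation isn't spelled out in this note.

```python
# Illustrative only: the standard relative-difference formula, not necessarily
# the exact computation behind the Guarded Releases chart.
def relative_difference(control_value: float, treatment_value: float) -> float:
    """Return the treatment's change relative to control, e.g. 0.15 == +15%."""
    return (treatment_value - control_value) / control_value

# Example: an error rate that rises from 2.0% (control) to 2.3% (treatment)
print(f"{relative_difference(0.020, 0.023):+.0%}")  # +15%
```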
Filters for custom metric events
We’ve made it simple to measure what matters. You can now filter custom metric events by event properties or context attributes directly in LaunchDarkly. For example: instead of instrumenting a separate error event for each page type, you can send one `error_event` with `page_type` as a property and then define distinct metrics per page right in the LaunchDarkly app. This update simplifies instrumentation, speeds up metric creation, and adds flexibility across both Release Guardian and Experimentation. Check out the documentation or blog post to get started.
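As a rough sketch of the instrumentation side, here is what sending a single property-tagged event could look like with the Python server-side SDK. The SDK key, event name, property, and context values are hypothetical, and the per-page metric filters themselves are defined in the LaunchDarkly app rather than in code.

```python
import ldclient
from ldclient import Context
from ldclient.config import Config

ldclient.set_config(Config("sdk-key-placeholder"))  # hypothetical SDK key
client = ldclient.get()

context = Context.builder("user-key-123").kind("user").set("plan", "enterprise").build()

# One generic error event for every page; the page type rides along as an
# event property, and per-page metrics are created by filtering on it in the
# LaunchDarkly app instead of instrumenting separate events.
client.track("error_event", context, data={"page_type": "checkout"})

client.close()
```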
Smarter data filtering in OTel and Observability SDKs
Sending all observability data can be expensive and noisy. With inExperiment filtering, you can now configure your OTel collector or Observability SDKs to send data only for flags running a Guarded Release or Experiment. This reduces cost, avoids dual-shipping unnecessary data, and makes monitoring more efficient at scale. Recommended collector configuration is available in our documentation.
Approvals updates for AI Configs
You can now require reviews and approvals for changes to AI Config variations and targeting rules before they go live. Approvals help keep AI behavior safe, transparent, and aligned with team standards. They make it easier to collaborate with teammates, add accountability to high-stakes updates, and stay informed with notifications on requests and decisions. Get started with AI Config Approvals in our documentation
Multiple comparison corrections
Experimentation results are now more statistically rigorous. LaunchDarkly automatically applies multiple comparison corrections when analyzing experiments with many metrics or variations. This reduces false positives and ensures your reported wins are reliable even in complex tests. Now teams can move faster with confidence, knowing their decisions are backed by best-practice statistical methods. Get started with multiple comparison corrections in our guide.
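For intuition on what a correction does, the sketch below applies a Bonferroni adjustment, the simplest textbook method; it is illustrative only, since this note doesn't say which correction LaunchDarkly applies.

```python
# Illustrative only: a Bonferroni adjustment, the simplest multiple-comparison
# correction. The specific method LaunchDarkly applies may differ.
def bonferroni_alpha(alpha: float, num_comparisons: int) -> float:
    """Shrink the per-comparison significance threshold as comparisons grow."""
    return alpha / num_comparisons

# With 10 metric/variation comparisons at a nominal alpha of 0.05, each
# individual comparison must clear roughly p < 0.005 to be called a win.
print(f"{bonferroni_alpha(0.05, 10):.3f}")  # 0.005
```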
Other improvements
- Added a new “System” theme option that auto-matches your operating system preference
- Introduced Scoped Clients in the Go SDK for more flexible client management
- Released Experimentation demo sandbox for safer, hands-on testing
- Optionally hide child spans in the Traces view
- Configurable visualization types in Logs and Traces views
- Display badge for external data sources on metrics list
Bug fixes
- Fixed display of roles with nil base permissions in UI
- Fixed a bug with newly created rules in AI Configs targeting
- Fixed display of empty stack traces
- Fixed unintentional text wrapping on clone flag dialog
- September 2025
- No date parsed from source.
- Detected by Releasebot: Sep 27, 2025
Launched: Modern Documentation
LaunchDarkly unveils "Launched: Modern Documentation," a new docs site and modern documentation approach led by Sarah Day. The excerpt signals a launch announcement but provides no date, version, or detailed release notes.
Launched: Modern Documentation
The page includes the heading "Launched: Modern Documentation," indicating a product update or launch related to modern documentation at LaunchDarkly. No explicit release date or version number appears in the visible content. Beyond the title, the page offers little detail about the release itself: it features Sarah Day, Technical Writing Manager at LaunchDarkly, references other content by her, and links to a new docs site.
- September 2025
- No date parsed from source.
- Detected by Releasebot: Sep 27, 2025
LaunchDarkly + Snowflake: Introducing Warehouse Native Experimentation and Product Analytics
LaunchDarkly introduces Warehouse Native Experimentation, enabling experiments to run and be measured inside Snowflake AI Data Cloud using trusted data, with a Snowflake Marketplace app and easy setup.
New: use Snowflake AI Data Cloud data to measure the impact of LaunchDarkly experiments.
We’re excited to announce a new chapter in our collaboration with Snowflake: the introduction of Warehouse Native Experimentation. Now teams can not only unify feature management and experimentation with LaunchDarkly; they can also leverage data within their Snowflake AI Data Cloud to measure the impact of their experiments using trusted datasets. By designing, running, and analyzing experiments using warehouse data, teams can unlock deeper insights and make critical decisions more quickly.
→ Want to see how this works in practice? Watch a replay of our webinar with Snowflake
LaunchDarkly and Snowflake are collaborating to remove barriers to experimentation
We recently enhanced our Snowflake Data Export, making it easier for teams to export and analyze experiment data directly within Snowflake. Now, we’re further expanding our collaboration with Snowflake, unlocking warehouse native experimentation to help teams get more value from their data.
One of the most significant challenges engineering, product, and data teams face with experimentation is the disconnect between their trusted business data—the data their organization relies on to make decisions—and the tools they use to run experiments. This disparity makes it difficult to extract meaningful insights, accurately measure experiment impact, and confidently iterate on product features.
Now, with Snowflake Warehouse Native Experimentation, you can run experiments in LaunchDarkly and make decisions based on enhanced results powered by Snowflake’s AI Data Cloud. Using trusted, organization-wide metric data from Snowflake, you can gain a more holistic view of experiment results, ensuring they align with core business data.
Simply run experiments in LaunchDarkly. The LaunchDarkly platform will then analyze the experiment results on top of your Snowflake data. Your Snowflake data never leaves your warehouse—operations run on your data directly, ensuring privacy and security are built in. This minimizes data movement, enables teams to experiment confidently, and empowers faster, more informed decisions.
Setting up Warehouse Native Experimentation with LaunchDarkly and Snowflake
Getting started with Warehouse Native Experimentation is easy; it takes just a few steps to integrate LaunchDarkly and Snowflake.
- Set up Snowflake Data Export. Send LaunchDarkly experiment data to Snowflake, where it’s combined with your metric data to compute experiment results. Follow the instructions in the LaunchDarkly documentation to complete the setup.
- Install the LaunchDarkly Warehouse Native Experimentation App. Available in the Snowflake Marketplace, this app securely connects Snowflake’s AI Data Cloud with the LaunchDarkly experimentation platform, ensuring experiments can be analyzed in the same environment as your business metrics.
After completing your setup, you can define metrics using Snowflake data and add them to experiments in LaunchDarkly. This makes it easy to measure experiment results against important KPIs and unlock deeper insights for decision-making.
Get started today
Ready to take your experimentation to the next level? Warehouse Native Experimentation is now available, making it easier than ever to analyze experiments using trusted data within Snowflake.
Start making data-driven experimentation decisions with LaunchDarkly. Sign up for a product demo today.
- September 2025
- No date parsed from source.
- Detected by Releasebot: Sep 27, 2025
Introducing Migration Assistant: Migrate and Modernize Without The Pain
Introducing Migration Assistant, a new set of features to streamline migrations: a migration flag type with 2/4/6 stages, cohort-based rollout, native metrics and consistency checks, and guardrails to safely manage staged migrations and reduce risk.
Migration and modernization initiatives are mission-critical for software development organizations. But whether you’re moving infrastructure to the cloud, migrating to a new data architecture, or upgrading to a new version of software, migrations are high-risk. The consequences of a failed migration can negatively impact everything from customer trust to your company’s stock price, all of which hurts the bottom line. That means that modernization is a high-stakes endeavor.
For engineering teams, migration and modernization projects are often complex and stressful. Breaking them down into manageable pieces requires a lot of complicated orchestration and planning. Software teams spend months (or more) planning a single migration and building custom tooling to help them manage the complexity of the project.
Moreover, effectively measuring and monitoring migrations remains a significant challenge. There’s no easy way to measure consistency between old and new systems or safely spot infrequent discrepancies.
That’s why we’re excited to share that we have built new capabilities to help solve the formidable challenges associated with technology migrations.
Introducing Migration Assistant
We’re proud to introduce Migration Assistant, a set of capabilities to help you successfully manage migrations and make them shorter by building higher confidence at every stage. With Migration Assistant, you’re now able to:
- Drive migrations with a new migration flag type purpose-built to minimize migration effort and lower risk
- Use controlled cohort progression to move traffic allocation incrementally
- Use consistency checks to maintain confidence in the results between old and new datastores
- Track important migration metrics like latency and error rate that help inform your next move
The migration flag is a new type of feature flag, specifically built for migrations, with out-of-the box support for common migration paths of 2, 4, or 6 stages to provide better predictability for each of your migrations.
When you create a new feature flag, you can select the new migration flag type. Doing so will prompt you to select the type of migration you’re planning to execute (2-, 4-, or 6-stage). Depending on the type of migration you select, the stages of your migration will populate and become associated with the newly created flag.
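As a minimal, simplified sketch of what consuming a migration flag from application code can look like, using the Python server-side SDK: the SDK key, flag key, and data-store helpers below are hypothetical, and the stage names assume the common six-stage read/write path, so treat this as an illustration rather than the SDK's actual migration helpers, which handle reads, writes, and consistency checks for you.

```python
import ldclient
from ldclient import Context
from ldclient.config import Config

# Hypothetical stand-ins for your real old and new data stores.
def read_from_old_store():
    return {"source": "old"}

def read_from_new_store():
    return {"source": "new"}

def write_to_new_store(record):
    pass

ldclient.set_config(Config("sdk-key-placeholder"))  # hypothetical SDK key
client = ldclient.get()
context = Context.create("user-key-123")

# Hypothetical flag key; stage names assume the common six-stage path
# (off -> dualwrite -> shadow -> live -> rampdown -> complete).
stage = client.variation("orders-db-migration", context, "off")

if stage == "off":
    record = read_from_old_store()    # old system only
elif stage in ("dualwrite", "shadow"):
    record = read_from_old_store()    # old system stays authoritative
    write_to_new_store(record)        # keep the new system in sync
else:                                 # "live", "rampdown", "complete"
    record = read_from_new_store()    # new system is authoritative
```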
Cohorts
With migration flags, you can now target specific cohorts: slices of the total audience for a particular migration that can move through the stages of the migration independently.
You can create new cohorts to target (for example) just your internal users to initially test and validate the migration, or create a hold-out cohort for your most risk-averse customers to migrate them last, after you have built sufficient confidence in the migration with previous cohorts. For each cohort, you’re not limited to just moving the entire cohort from one stage to the next—you can also progressively move the cohort by allocating just a percentage of the cohort to the next stage.
Metrics and consistency checks
Our SDKs include native support for collecting metrics and comparing results between the old and new systems to measure consistency, so you'll be able to detect issues with a migration faster and without custom instrumentation.
You can now see error, latency and consistency metrics broken down by cohorts, as well as aggregated for the entire migration, so you can quickly understand the overall health of your migration. And if anything goes wrong, you can react and instantly move one or more cohorts safely back to a previous stage in the migration, all without waiting for lengthy CI/CD pipelines to deploy code changes.
Guardrails
Migration Assistant includes guardrails that will warn you when an action could jeopardize your migration. This gives teams confidence that at each stage they are making safe changes to their migration flag and not accidentally moving a cohort to a stage before they're ready.
Our platform will warn you when actions are taken that could jeopardize your migration project. For example, you will be alerted when an existing cohort skips a phase in the migration.
To get started, refer to our getting-started guide & documentation and create a migration flag in LaunchDarkly.
- September 2025
- No date parsed from source.
- Detected by Releasebot: Sep 27, 2025
Introducing A New Way To Quickly and Easily Do Progressive Rollouts In LaunchDarkly
LaunchDarkly unveils Progressive Rollouts to gradually expose new flag variations by percentage over time, reducing release risk with staged exposure. Start small and ramp up via configurable schedules, now available on all plans.
Bugs are an unfortunate fact of life for any software team. But as our CEO Dan Rogers put it in a blog post about the recent global software outage, while ‘bugs are inevitable, the disruptions they can cause don’t have to be’. One strategy to help mitigate the risks associated with releasing software to your users is with Progressive Rollouts to incrementally expose any new feature to segments of your user base.
While you can already configure Progressive Rollouts using workflows in LaunchDarkly today, we want to make it even easier for your team to de-risk your software releases. That's why we're excited to introduce a new way to add Progressive Rollouts to your releases.
When Should You Use Progressive Rollouts?
Use Progressive Rollouts if you want to randomly allocate traffic by context kind and automatically increase the amount of traffic to a specific flag variation over time. Instead of releasing new features to all users at the same time, you start with a small segment (say, 1%), then increase to 5%, 10%, and so on, to limit the number of users affected by any given issue with the release. This phased approach helps contain potential disruptions, ensuring that small updates don’t become big problems. Progressive Rollouts are a valuable tool for safeguarding release processes and ensuring that any issue that ships affects at most a subset of your user base.
How Does It Work?
Progressive Rollouts are now an option on a flag’s targeting rule in a given environment. A Progressive Rollout will serve a given flag variation to a specified percentage of contexts, and gradually increase the percentage exposure of a flag to your users over a specified period.
Let’s say you’re on an engineering team that’s launching a new checkout workflow for an ecommerce site. With Progressive Rollouts, you can easily update the flag targeting rule to define a rollout schedule that incrementally increases user exposure over (for example) 20 hours. You can start by introducing the new flag variation to just 1% of your users (or as few as 0.01%), and specify how long the variation should be served to that percentage before ramping up. You can also set additional percentage and time increments, so your rollout schedule could look something like the following (a sketch of the evaluation-side code appears after the schedule):
- 1% of users for 4 hours
- 5% of users for 4 hours
- 10% of users for 4 hours
- 25% of users for 4 hours
- 50% of users for 2 hours
- 75% of users for 2 hours
- 100% of users (rollout complete)
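Because the schedule lives in the flag's targeting rule, the application code stays a plain flag check while LaunchDarkly ramps the exposure percentage for you. Here is a minimal evaluation-side sketch with the Python server-side SDK; the SDK key, flag key, and context are hypothetical.

```python
import ldclient
from ldclient import Context
from ldclient.config import Config

ldclient.set_config(Config("sdk-key-placeholder"))  # hypothetical SDK key
client = ldclient.get()
context = Context.create("user-key-123")

# The rollout schedule is configured on the flag's targeting rule, so the app
# simply evaluates the flag; LaunchDarkly decides whether this context falls
# inside the current 1%, 5%, 25%, ... exposure bucket.
if client.variation("new-checkout-workflow", context, False):
    print("serve the new checkout experience")
else:
    print("serve the existing checkout experience")
```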
Getting Started with Progressive Rollouts
Getting started is easy, and Progressive Rollouts are available for users on every LaunchDarkly plan - check out the docs page here.
- Aug 20, 2025
- Parsed from source: Aug 20, 2025
- Detected by Releasebot: Sep 27, 2025
Changelog: Context counts chart, AI Configs agents, tools, trends, and more...
LaunchDarkly rolls out Context Counts for viewing unique contexts, AI Configs updates (agent-based workflows, tools, trends explorer), Data Export to Databricks/BigQuery, Jira integration, observability enhancements, SDK updates, and assorted improvements plus bug fixes.
Context counts chart
If you’ve ever wondered "how many users have seen my new feature?" you can now use the new Context Counts chart to answer that question! On the Monitoring tab of any flag that has client-side evaluations, in addition to the Evaluations chart, you can now also select a context kind to view the number of unique contexts of that context kind that encountered the flag. For now, this is only supported for flags with client-side evaluations. Learn more in the LaunchDarkly docs.
AI Configs: Agents, Tools, Trends
Shipping AI-powered features and agent-based workflows is the new normal. It’s no longer a question of whether you ship them, but how you do it safely, consistently, and with control. We’ve just introduced three powerful updates to AI Configs: agent-based workflows, tools, and a trends explorer for AI Insights.
Agent-based workflows
Now, with AI Configs, teams can define agent behaviors, attach reusable tools from a shared library, and use new SDK methods to spin up intelligent, multi-step agents, while still benefiting from the safe rollout and approval flows you expect from LaunchDarkly.
Tools
For agent-based workflows to be effective, you also need effective tools. These can be used with both our Agent and Completion modes, providing ultimate flexibility in how LLM workflows are set up. Tools allow LLMs to do things like retrieve extra data or take an action on a user's behalf. Tools can now be managed in the UI using either a visual or JSON editor to define the configuration, are versioned on update so you have a change log of any edits made to your tools, and can be attached to either Completion or Agent configs.
Trends Explorer for AI Insights
As your team ships more AI products, it becomes harder to measure the real impact of model behavior, performance, and cost across environments and teams. With the new Trends Explorer, you can visualize trends across your AI Configs and quickly see which models cost the most, have the greatest latency, or are performing the best (or worst).
Get started with AI Configs today. AI Configs Docs | Start your free trial
Data Export for BigQuery and Databricks
Data Export allows users to export their flag evaluation events, experiment / flag metadata, and metric events to their own data warehouse. We know that Enterprise customers with an existing data warehouse strategy often want to overlay their existing product and business reports with flag data to answer more detailed questions about their users and products. Now, customers using Databricks and BigQuery who are interested in flag data have Data Export support. See our documentation for setup instructions: Databricks Data Export | BigQuery Data Export
Other improvements
- Released LaunchDarkly for Jira (EU) - find it on the Atlassian Marketplace
- Made default contexts available for guarded rollouts and experimentation
- Made client-side flag evaluations visible directly in the session player timeline
- Added SDK support for Node.js, Python, and React Native in LaunchDarkly Observability
- Enhanced Observability Trace Viewer to provide full context in a single view
- Enhanced data visualization for metric details when chart is hovered
- Improved accuracy of cost calculation logic for AI Configs
- Improved variation value filtering in events API/service layers
- Fixed tools cache and versioning issues where versions weren't propagating to UI
- Fixed AI monitoring cost logic and updated endpoint to return descriptive value
- Fixed non-deterministic frontend error when switching projects
- Fixed audit log environment selector modal closing issue in Firefox
- Fixed AI library page to properly redirect to waitlist when AI configs are disabled
- Fixed incorrect navigation links from multi-armed bandit dashboard cells that were pointing to experiments
- Removed standard randomization unit from API responses while maintaining backward compatibility
- Aug 7, 2025
- Parsed from source: Aug 7, 2025
- Detected by Releasebot: Sep 27, 2025
Changelog: Guardrail metrics for guarded rollouts, Experimentation enhancements, and more...
Guardrail metrics now auto-apply to guarded rollouts for consistent default metrics, with customization options. Enhanced experimentation: A/A tests, new experiments dashboard, and optional experiment notifications. Additional improvements span analytics, session replay, performance, and bug fixes to streamline experimentation and product analytics workflows.
Guardrail metrics for guarded rollouts
You can now define guardrail metrics that will be added to any guarded rollout by default. This ensures each release starts with a consistent set of trusted metrics. Teams can still customize their configuration, but the default behavior encourages alignment across teams.
Experimentation Enhancements
Our Experimentation team has been hard at work developing features that make it easier for experimenters to start, run, and monitor experiments, and we’ve just launched a few of our frequently requested customer features!
A/A Tests
LaunchDarkly now makes it simple to run A/A tests—an industry-standard practice for validating an experimentation platform and your own test setups—so you can have full confidence that any differences observed in your A/B/n tests are due to the changes being tested, not errors in the experiment setup.
New experiment list dashboard
Teams with healthy experimentation programs often have many experiments running across their features, products, and teams. The Experiments dashboard now surfaces more information about the status of experiments across your program, so you and your team can more easily track progress and outcomes as you coordinate shipping new value for your customers. Coming soon, we’ll provide additional sorting and tagging functionality for greater control.
Experiment Notifications
You can now receive notifications when experiments start or end. The launch or conclusion of experiments your team is working on can be exciting moments, and many contributors and stakeholders want to keep up with what’s happening with their experiments of interest. Now it’s easier than ever to stay on top of experiment statuses: just "follow" an experiment to receive email or Slack notifications.
Other improvements
- If you use a mobile or client-side SDK to evaluate a flag without the right SDK availability set, we’ll notify you on the flag page and prompt you to fix it
- Added a new KPI view type and the ability to plot metrics directly to the Trends chart in product analytics
- Product analytics charts now auto-update when making configuration changes
- The event and metrics selection menu in product analytics has been redesigned to make chart setup faster, easier, and more intuitive for users
- Improved performance and buffer in the session replay playback
- Reduced initial bundle sizes to improve performance of the web app
- Improvements for displaying and searching flag evaluations in session replay
- More accurate p-value threshold in experimentation
- Layout and copy improvements to archive flag flow
- Fixed iteration analysis queries being incorrectly disabled for draft experiments
- Fixed context deletion button issues
- Fixed layout and UX issues when managing flag templates
- Fixed rendering issues on retention chart
- Fixed issue with incorrect SDK keys in code snippets
- Fixed regression with text contrast on small buttons
- Fixed display of archive experiment modal header for long experiment names
- Fixed missing overflow menu icon on Metrics page
- Jul 24, 2025
- Parsed from source: Jul 24, 2025
- Detected by Releasebot: Sep 27, 2025
Changelog: Guarded rollouts for AI Configs, Observability SDKs, Trace Viewer, and more...
New Guarded Rollouts for AI Configs lets you automatically roll back on production metric spikes, with autogenerated metrics from the LaunchDarkly AI SDK. New server-side observability SDKs (Node.js, Python, React Native) simplify traces and logs, plus a brand-new Trace Viewer for quick context across spans and logs. Also includes multiple improvements (multi-armed bandit priors, SDK install flow, and more) and bug fixes.
We introduced Guarded Rollouts for AI Configs, making it easier and safer to roll out changes to your AI-powered features in production.
LLMs are powerful, but unpredictable. A small prompt tweak or model swap can work great in staging but degrade user experience in production. That’s where Guarded Rollouts come in. You can now set up guardrails to monitor important metrics - like increases in completion errors - and automatically roll back to a previous variation if that error count spikes in production.
Bonus: No manual instrumentation needed! When you integrate with the LaunchDarkly AI SDK, we automatically generate metrics, including:
- Positive/negative feedback
- Tokens in/out
- Latency
- Completion error/success rates
Learn more in our AI Configs docs and Autogenerated Metrics guide
Catching AI Feature Issues with Guarded Rollouts 🚀 - Watch Video
New Server-side SDKs for Observability
We added new SDK support for Node.js, Python, and React Native in LaunchDarkly Observability, making it even easier to capture traces, logs, and sessions across your full stack.
To learn more, check out the docs:
- Node.js SDK
- Python SDK
- React Native SDK
A brand new Trace Viewer
Our new Trace Viewer, designed to give you full context in a single view, has arrived! Open multiple spans, search across traces, and instantly jump to related logs or session replays.
See our Trace Viewer in action 🚀 - Watch Video
Other improvements
- Added support for uninformed priors to Multi-Armed Bandit
- Added base SDK to install flow for Python SDK
- Added close button for Check SDK
- Addressed a wide set of lint rule violations
- User-selected views are now saved per project, similar to environment preferences
- Rollouts can now be stopped directly from the rule card
- Variation approval reasons are now logged as part of AI-driven changes
- Improved metric group layout and error boundary handling
Bug fixes
- Flag Deletion Notifications now correctly reflect archived and active flags
- Fixed incorrect "from" values during multi-environment edits
- Restored always-roll-back-measured-rollouts-on-srm flag following user feedback
- Resolved 503 errors and webhook issues in multi-environment scenarios for custom approvals
- Corrected analytics percentages rounding logic in breakdown displays
- Fixed regression list ordering inconsistencies
- Fixed experimentation layers filter functionality
- Fixed per-page logic in session and error tables
- Jul 9, 2025
- Parsed from source: Jul 9, 2025
- Detected by Releasebot: Sep 27, 2025
Changelog: Multi-armed bandits, Launch Insights project comparison, and more
MABs available to all Experimentation customers with adaptive testing integrated into the revamped Experiment Builder and results screens. Launch Insights now supports side‑by‑side comparison of up to four projects, improved filters, date picker, and clearer charts. Plus numerous UX, metric, and bug‑fix improvements across the platform.
Multi-Armed Bandits (MABs) are now available to all Experimentation customers! This adaptive testing method automatically reassigns your audience toward better-performing variations as results emerge, helping your teams learn faster, reduce opportunity costs, and get the best experiences into their customers’ hands sooner.
MABs work seamlessly with our redesigned Experiment Builder, updated results screens, and our latest metrics enhancements. It’s a major milestone in our journey to deliver powerful, flexible experimentation at scale.
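For intuition on how adaptive traffic allocation works in general, here is a generic Thompson-sampling sketch; it is illustrative only (not LaunchDarkly's implementation), and the conversion rates are made up.

```python
import random

# Illustrative Thompson sampling for a conversion-style metric: each variation
# keeps a Beta posterior, and traffic drifts toward the variation whose sampled
# conversion rate looks best. Not LaunchDarkly's implementation.
variations = {"control": [1, 1], "treatment": [1, 1]}  # Beta(successes+1, failures+1)
true_rates = {"control": 0.10, "treatment": 0.13}      # hypothetical ground truth

assignments = {"control": 0, "treatment": 0}
for _ in range(5000):
    # Sample a plausible conversion rate per variation and route to the best.
    sampled = {name: random.betavariate(a, b) for name, (a, b) in variations.items()}
    chosen = max(sampled, key=sampled.get)
    assignments[chosen] += 1

    # Observe an outcome and update that variation's posterior.
    converted = random.random() < true_rates[chosen]
    variations[chosen][0 if converted else 1] += 1

print(assignments)  # most traffic ends up on "treatment"
```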
To create your own multi-armed bandit, just follow these steps.
Project comparison for Launch Insights
Launch Insights just got a major upgrade: Teams can now compare performance across up to four projects side-by-side. This feature helps organizations understand trends across teams, track adoption, and identify outliers more easily.
In addition to comparison functionality, we’ve introduced improved filters, a more intuitive date picker, and a refreshed chart design that boosts clarity while reducing noise. We’ve also adjusted how we display scores in comparative views to better reflect context, especially for newer accounts. To learn more, read Launch Insights.
Other improvements
- Lots of improvements to the new individual context targeting experience based on customer feedback
- Added stage duration and rollout percentage details for approval requests
- Improved metric regression display by sorting results chronologically
- Added frontend validation for maximum 10 phases in release pipelines
- Redesigned the welcome experience to guide new users more efficiently with role-based onboarding
- Various experimentation improvements, including updated tooltips for Bayesian experiments, funnel experiments only requiring a primary metric, and more
- Users can now filter error distributions by date
- Dashboard key management now includes visible keys, real-time duplicate detection, and better error messages
- A new topic in LaunchDarkly Docs now outlines tips and best practices for setting up your context kinds to run effective guarded releases
Bug fixes
- Fixed percentile metrics chart display inconsistencies where chart values didn't match tooltip values
- Resolved event selector disabled state issues in composite event details views
- Fixed layout problems on various list view screens throughout the application
- Corrected funnel step tooltips that were incorrectly showing decimal values
- Fixed trial banner display logic to properly handle different account states and environments