ABsmartly Release Notes
Last updated: Jan 15, 2026
- Dec 1, 2025
- Date parsed from source:Dec 1, 2025
- First seen by Releasebot:Jan 15, 2026
December 2025
Big metrics governance update unlocks new metric categories, richer metadata, and a redesigned metric view to help you find and manage the right metrics. It also introduces metric versioning, usage insights, and smarter selection in experiments for stronger governance.
Overview
This release is all about Metrics. As part of our broader initiative to improve metric governance, we’ve introduced powerful new capabilities to help you better manage, understand, and select the right metrics for your experiments.
General improvements
We've made some general improvements to Metrics that you will see across the platform.
New Metric Categories type
We've added a new configuration type that helps categorise and group metrics. Those new metric categories will make it easier to find the right metrics when creating an experiment.
While the categories should reflect your own needs, here is a list of possible metric categories you can add to your ABsmartly setup:
- Conversion: Measures whether users complete a desired action.
- Revenue: Captures direct monetary impact.
- Engagement: Reflects how actively users interact with the product.
- Retention: Shows whether users come back or continue using the product over time.
- Performance: Measures speed and responsiveness, such as load time or latency.
- Reliability: Tracks stability and correctness, including errors, failures, or availability.
- Quality: Represents outcome quality or user experience signals like cancellations, refunds, or unsuccessful outcomes.
New metric metadata fields
We've added new metadata fields to metrics that help with discoverability and filtering across the platform. This includes:
- Unit type: This is the list of Unit type(s) for which this metric is computed. Setting the correct Unit type(s) will help experimenters choose the right metric for their experiments. (e.g. user_id, device_id)
- Application: This is the list of Application(s) where this metric makes sense. For example, an app_crashes metric only makes sense for experiments running on app platforms.
- Metric category: This is the category the metric belongs to. This will make your metric more discoverable. See above.
All those fields are optional, but we recommend you update your existing metrics as this will improve general discoverability of your metrics.
Metric View page
You can now click on the name of any metric across the platform to open the metric's view page. This page will give you a readable overview of the metric and will be the new entry point for managing metrics (editing and creating new versions) as well as many new upcoming features.
Improved Metric Discoverability
We’ve made it easier to find, understand, and select the right metrics when creating your experiments/templates/features.
Usability improvement
We totally redesigned the metric selection step of the experiment setup. The goal of the new UI is to make it easier to find and add the right metrics for your experiments.
Smarter metric selection in experiments
The metric selection step will show by default the most relevant metrics based on the chosen unit type and application (make sure to update your metric metadata to get the most out of this new feature).
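The relevance filtering described above can be pictured with a small sketch. Note that the field names `unit_types` and `applications` are illustrative assumptions, not the actual ABsmartly data model:

```python
# Illustrative sketch: surface the metrics most relevant to an experiment
# by matching on unit type and application metadata (field names assumed).

def relevant_metrics(metrics, unit_type, application):
    """Return metrics whose metadata matches the experiment's unit type
    and application; metrics with no metadata set are kept as a fallback."""
    def matches(metric):
        units = metric.get("unit_types") or []
        apps = metric.get("applications") or []
        unit_ok = not units or unit_type in units
        app_ok = not apps or application in apps
        return unit_ok and app_ok
    return [m for m in metrics if matches(m)]

metrics = [
    {"name": "checkout_conversion", "unit_types": ["user_id"], "applications": ["web"]},
    {"name": "app_crashes", "unit_types": ["device_id"], "applications": ["ios", "android"]},
    {"name": "revenue"},  # no metadata set: always shown
]

print([m["name"] for m in relevant_metrics(metrics, "user_id", "web")])
```

This also illustrates why filling in the new metadata matters: metrics without unit type or application metadata cannot be ranked and simply show up everywhere.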
Metrics can now also easily be searched by name, tags, owners, etc., so you don't have to scroll through your long list of existing metrics to find what you are looking for.
Usage insights
While adding metrics to your experiments/templates/features, you can now see how often a metric has been used in past experiments to help you assess its relevance and importance.
TIP
To get the most out of these improvements, we recommend reviewing your existing metrics, filling in missing metadata, and adding clear descriptions where needed.
Metric Versioning (Foundations)
A key part of metric governance is version control, ensuring that metric definitions are transparent, traceable, and stable over time. This release lays the groundwork for more robust version management in the future.
Metric versioning is a critical part of metric governance, as it allows a metric to evolve over time without impacting previous experiments and decisions made using an older version of that metric.
Metric versioning 1.0
It is now possible for metric owners to create a new version of an existing metric. This can be done, for example, when the definition of a metric changes.
- Creating a new version of a metric will not impact past and running experiments/features which are using a previous version of that metric.
- Only the latest version of a metric will be discoverable and can be added to new experiments. Experimenters will only be able to see the latest version of each metric.
- Experiments/Features cannot be started when they use an outdated version of a metric. Experimenters will be asked to update to the latest version before they can start the experiment/feature.
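The "only the latest version is discoverable" rule above can be sketched as follows (the version data shape here is an assumption for illustration):

```python
# Illustrative sketch: resolve the discoverable (latest) version of each
# metric; past experiments keep their pinned older versions untouched.

def discoverable_metrics(versions):
    """Given all metric versions, return only the latest version per metric."""
    latest = {}
    for v in versions:
        name = v["name"]
        if name not in latest or v["version"] > latest[name]["version"]:
            latest[name] = v
    return list(latest.values())

versions = [
    {"name": "revenue", "version": 1},
    {"name": "revenue", "version": 2},  # new definition
    {"name": "retention", "version": 1},
]

for m in discoverable_metrics(versions):
    print(m["name"], m["version"])
```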
Edit vs New Version
With the launch of metric versioning, some fields can be edited in the current version of the metric while others will require a new version to be created.
- Editable fields: Fields like Description, Tags, Category, Applications, Tracking units can safely be updated without changing the definition of a metric.
- Non-editable fields: All other fields that might have an impact on how the metric is computed or how the results are interpreted cannot be edited; a new version of the metric will need to be created to change them.
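As a sketch of the edit-vs-new-version rule: the editable field list below comes from this release note, and everything else is treated as requiring a new version (the function itself is illustrative, not part of the ABsmartly API):

```python
# Sketch: split a metric update into safely editable changes and changes
# that require creating a new metric version.

EDITABLE_FIELDS = {"description", "tags", "category", "applications", "tracking_units"}

def split_metric_update(changes):
    """Partition requested field changes into (editable, needs_new_version)."""
    editable = {k: v for k, v in changes.items() if k in EDITABLE_FIELDS}
    versioned = {k: v for k, v in changes.items() if k not in EDITABLE_FIELDS}
    return editable, versioned

editable, versioned = split_metric_update({
    "description": "Net revenue per user",
    "formula": "sum(revenue) - sum(refunds)",  # changes how the metric is computed
})
print(editable)   # safe to edit in place
print(versioned)  # requires a new metric version
```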
As a metric owner, you will be able to edit metrics and create new versions from the new Metric view page.
CAUTION
If you are using our API to edit your metrics, you will need to update your script, as you will no longer be able to edit all metric fields using the edit end-point.
A new end-point for creating new metric versions is now available if needed.
What’s Next
We’re continuing our focus on general metric improvements and metric governance in the coming sprints. Upcoming improvements include:
- CUPED support
- Metric lifecycle
- Metric approval workflows
- Metric usage overviews and reporting
These updates are part of our broader effort to improve trust, transparency, and governance around metrics.
Questions or Feedback?
As always, if you have questions about this release or want to talk about how to get more out of your metrics, reach out to us anytime.
Original source - Nov 1, 2025
- Date parsed from source:Nov 1, 2025
- First seen by Releasebot:Jan 15, 2026
November 2025
ABsmartly unveils a LaunchPad Chrome Extension in Beta to simplify building experiments with a visual editor, plus a new Ownership & Permissions model for sharing assets. This release also tightens experiment search and teases data retention rules.
Overview
This new release is packed with new features and improvements to help you manage your experimentation program and make it easier to create simple experiments without the need for developers.
The ABsmartly Chrome Extension - The ABsmartly LaunchPad Beta
We made it easier to create simple experiments using our new ABsmartly LaunchPad. This Chrome Extension makes it possible to create experiments using our new Visual Editor and without the need for developers. While the Chrome Extension is still in Beta, we encourage you to give it a try and give us feedback.
Before you get started see our guides on getting started with the LaunchPad and creating your first experiment.
This first release is only the first step as we have big plans for the ABsmartly LaunchPad in 2026.
Ownership & Permissions model - Sharing of Assets
In this release, we make it possible to share experiments, features, templates, goals and metrics with users and teams. This feature makes it easier for teams to collaborate and use assets without having to share ownership of that asset.
This was the last building block in our Ownership & Permission model. Some changes will be necessary on your side before you can fully make use of this new model. We encourage you to read our how-to guide and to reach out to us if necessary so we can jump on a call and guide you through those changes.
We want to make things better and future-proof for you. We realise that we have introduced a lot of changes, but unless you act on the action items described above, nothing should change (except for Team Ownership, which is now the default but can be disabled in your settings) and you will be able to keep using the platform as you do today.
When you are ready to start making use of the new functionality, let us know, especially if you need help with any of the steps described above.
Improved experiment search list
We've improved the filters on the experiments list to make it easier to find what you are looking for and to align with how experiments are filtered on the Velocity Report. Here is a list of the changes we made to the search filters & search list:
- Added a new Type filter so you can differentiate between experiments and full on instances. You can now easily find running or stopped experiments without seeing the full on instances.
- Added Completed, Not Completed and Aborted to the State filter to align with how experiments are filtered on the velocity reports.
- Removed Running - Not full on from the State filter, as you can now find running experiments by selecting experiment in the Type filter.
- Renamed the Significance filter to Result and changed its values to Insignificant, Negative, and Positive to avoid confusion about the states and statistical significance of experiments.
- Fixed the issue where reporting on the impact of GST experiments was not correct on the experiment list.
- Fixed an issue so the color of the impact of inconclusive experiments is gray as on the experiment's overview page.
Data Retention Update
As part of our ongoing work on performance and cost optimisation we are planning to introduce the following data retention rules in a future update:
- 1 year retention for goal, exposure and user attribute data. While experiment results will always remain available and visible, this change means that you won't be able to explore metrics and to slice and dice data for experiments which are stopped for 1 year or more.
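The proposed rule can be sketched as a simple cutoff check. The `stopped_at` timestamp and the function itself are illustrative assumptions; only the 1-year window comes from this note:

```python
# Sketch: decide whether raw goal/exposure/attribute data for a stopped
# experiment still falls inside the proposed 1-year retention window.
# Aggregated experiment results remain visible regardless.

from datetime import datetime, timedelta

RETENTION = timedelta(days=365)

def raw_data_available(stopped_at, now):
    """Raw event data stays explorable for 1 year after an experiment stops."""
    return now - stopped_at < RETENTION

now = datetime(2026, 1, 15)
print(raw_data_available(datetime(2025, 6, 1), now))   # stopped ~7 months ago
print(raw_data_available(datetime(2024, 6, 1), now))   # stopped over 1 year ago
```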
Do you think this will impact you? If so, get in touch so we can understand your need for this data.
Questions or Feedback?
As usual, please let us know what you think, and if you have any questions, please get in touch.
Original source
- Oct 1, 2025
- Date parsed from source:Oct 1, 2025
- First seen by Releasebot:Oct 24, 2025
October 2025
The release introduces a team-level ownership model with roles like Team Viewer, Contributor, Admin, and Base User, shaping access to experiments and assets. It also adds fixes and usability tweaks, plus an exposure ignore tag and export fix. A fair-usage payload cap is announced for the next update.
The main focus of this release is the introduction of our new ownership model, specifically team-level roles and permissions. We’ve also included a number of small improvements and bug fixes to improve usability and platform flexibility.
Team-Level Roles and Permissions
You can now assign users to teams and grant them team-level roles — such as Team Contributor, Team Viewer, or Team Admin — that define how they interact with experiments, metrics, goals, templates, and features owned by that team.
This release is an intermediate step toward the full rollout of our new ownership model. Unless you explicitly add users to teams and assign them roles, there should be no changes in behavior and everything will continue working as before.
If you encounter any issues or inconsistencies related to roles and permissions, please contact us.
What's New
We have introduced four new built-in roles:
- Team Viewer: Can view all experiments, features, templates, metrics, and goals owned by the team.
- Team Contributor: Can do everything a Team Viewer can, plus create/edit experiments, features, templates, metrics, and goals within the team and manage the life cycle (start, stop) of those experiments and features.
- Team Admin: Can do everything a Team Contributor can, and also manage the team — including adding/removing members, assigning roles, editing metadata, etc.
- Base User: This will replace the current default "User" role. Following the principle of least privilege, this new role provides read-only access to most components, but does not include access to experiments, features, templates, metrics, or goals — which will be managed at the team level.
These roles are immutable, but custom roles (e.g. "Metric Owner", "Experiment Reviewer") will be supported in future releases to meet your unique needs.
All existing global roles remain valid and can be assigned at the Global Team level, which represents your organisation. Team role permissions are inherited: roles granted at any level apply to all child teams.
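The inheritance rule above ("roles granted at any level apply to all child teams") can be sketched like this; the team hierarchy, grant structure, and role ranking are illustrative assumptions:

```python
# Sketch: resolve a user's effective role on a team by walking up the
# team hierarchy, since roles granted on a parent apply to child teams.

ROLE_RANK = {"viewer": 1, "contributor": 2, "admin": 3}

def effective_role(user, team, parents, grants):
    """Return the highest role granted to `user` on `team` or any ancestor.
    `parents` maps team -> parent team (None at the Global Team);
    `grants` maps (user, team) -> role name."""
    best = None
    node = team
    while node is not None:
        role = grants.get((user, node))
        if role and (best is None or ROLE_RANK[role] > ROLE_RANK[best]):
            best = role
        node = parents.get(node)
    return best

parents = {"global": None, "growth": "global", "checkout": "growth"}
grants = {("alice", "growth"): "admin", ("bob", "checkout"): "viewer"}

print(effective_role("alice", "checkout", parents, grants))  # inherited from parent team
print(effective_role("bob", "growth", parents, grants))      # no grant anywhere up the chain
```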
If you haven't started yet, you can begin managing your team hierarchy at: /settings/teams
If you have any questions or want to learn how to get the most from this model, feel free to reach out!
Other Notable Improvements and Fixes
- Ignore exposure events: You can now tag exposure events with a __ignore attribute to have them excluded from experiment analysis. This is useful for excluding internal users from your experiments.
- Experiment export fix: Fixed an issue where re-exporting experiment data for an existing export request would break the export process.
- Metrics list UI cleanup: Improved the layout of the metrics list page to avoid overly long description text dominating the screen.
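The effect of the __ignore tag can be pictured on the analysis side roughly like this; the event shape is an assumption for illustration, and only the __ignore attribute name comes from this note:

```python
# Sketch: drop exposure events tagged with the __ignore attribute
# (e.g. internal users or QA traffic) before experiment analysis.

def analyzable_exposures(events):
    """Keep only exposure events not tagged with a truthy __ignore attribute."""
    return [e for e in events if not e.get("attributes", {}).get("__ignore")]

events = [
    {"unit": "user-1", "attributes": {}},
    {"unit": "qa-bot", "attributes": {"__ignore": True}},
    {"unit": "user-2"},  # no attributes at all
]

print([e["unit"] for e in analyzable_exposures(events)])
```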
Upcoming Change: Fair Usage Policy Update
In our next release, we plan to introduce a 2KB limit on the JSON payload sent with goal and exposure events. This change is needed to keep better control over the cost and performance of the ABsmartly platform. Events whose payload is above the threshold will not be processed. While this limit is well above what most of our customers currently use, we will query past events and reach out to you separately if we see that you have used larger payloads in the past, so we can better prepare for this upcoming change.
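A client-side guard for the upcoming limit could look like the sketch below. The 2KB figure comes from this note; the check itself is a suggestion, not an ABsmartly SDK feature:

```python
# Sketch: verify a goal/exposure event payload fits the announced 2KB
# limit before sending, since oversized events will not be processed.

import json

MAX_PAYLOAD_BYTES = 2 * 1024

def payload_ok(payload):
    """Return True if the JSON-serialised payload is within the 2KB limit."""
    encoded = json.dumps(payload, separators=(",", ":")).encode("utf-8")
    return len(encoded) <= MAX_PAYLOAD_BYTES

print(payload_ok({"order_id": "A-123", "value": 49.99}))  # small payload
print(payload_ok({"blob": "x" * 3000}))                   # over the limit
```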
If you have any concerns or use cases that require larger payloads, please let us know as soon as possible.
Questions or Feedback?
As always, if you have questions about this release or anything else, don't hesitate to get in touch.
Original source - Sep 1, 2025
- Date parsed from source:Sep 1, 2025
- First seen by Releasebot:Sep 27, 2025
September 2025
Summary: A focused performance and stability update with small product changes. Data older than 30 days is now excluded by default, update frequency for long-running experiments is reduced, and fixes cover table sorting, the GST data switch, Activity tab caching, and adding favourites.
This release includes a number of small but meaningful improvements and bug fixes. While the team continues to work on major backend updates, this update focuses on performance optimisations and fixes that improve clarity and usability.
Performance Improvements
To improve platform performance and reduce unnecessary load, we have made the following changes:
- Feature flag: We no longer fetch data older than 30 days by default
- Long-running experiments: We have reduced how frequently data is updated for experiments running longer than 60 days
These changes help improve performance without impacting the accuracy or freshness of recent results.
Notable Bug Fixes
- Table sorting restored: Sorting on list tables has been re-enabled after being unintentionally removed in a previous release
- Improved clarity for GST experiments: To avoid confusion during interim analyses, the GST data switch is now disabled until the experiment is complete. Previously, we showed non-GST-adjusted data when no boundary had been crossed, which led to misinterpretation.
- Caching issues on the Activity tab fixed: We addressed some caching issues on the activity tab where data would not update until the page was refreshed.
- Add to favourites fixed: Resolved an issue where adding experiments to favourites did not always work.
Please let us know if you have any questions or feedback — and stay tuned for more significant changes coming soon!
Original source - Aug 1, 2025
- Date parsed from source:Aug 1, 2025
- First seen by Releasebot:Sep 27, 2025
August 2025
This release enhances ownership management with flexible owners and inclusive filters, auto refresh for estimates, improved graph zoom behavior, and GST estimate accuracy fixes to boost experimentation reliability and collaboration.
This release brings several UX improvements and bug fixes aimed at making experimentation and collaboration smoother and more reliable. We’ve made owner management more flexible, improved graph interactions, and fixed key issues around estimations and data tables.
Improved Owner Filters and Flexibility
- An owner can now be a single user or a team.
- The Owner filter now accepts both users and teams when filtering, making results more inclusive.
Bug Fixes and Usability Improvements
- MDE and Max Runtime Automatic Estimation Refresh: Estimates now update automatically when the standard deviation changes. We have also improved estimation precision.
- Graph Zoom Behavior: Y-axis now updates correctly when zooming in, giving you a clearer view of trends.
- GST Estimate Accuracy: The next GST analysis estimate now correctly respects the minimum time between analyses.
Please let us know if you have any questions or feedback — we're always happy to hear from you.
Original source - Jul 1, 2025
- Date parsed from source:Jul 1, 2025
- First seen by Releasebot:Sep 27, 2025
July 2025
A new built-in notification system keeps teams aligned with alerts on team changes, experiment lifecycles, and feature toggles, plus a fresh Impact per Day metric to gauge real-world value. The Metrics Overview tab exits beta with a cleaner layout, consolidated results, and more actionable insights for faster, informed decisions.
This release introduces a brand-new notification system to help teams stay on top of what matters, along with the official launch of the improved Metrics Overview tab. These updates are designed to make your workflow more informed, timely, and collaborative.
New Feature: Built-In Notifications
Never miss a key event again. Our new built-in notification system ensures you're always up to date with what's happening on the platform.
- Team assignments: Get notified when you're added to or removed from a team
- Experiment/feature lifecycle updates: Get alerts when an experiment starts, stops, is put Full On, or when a feature is turned on/off.
More alert types will be added in upcoming releases — stay tuned!
New Feature: Impact per Day Estimates
We've added new Impact per Day estimates to all relevant metric types in the experiment overview to help you better understand the real-world significance of your experiment results. This gives you a clearer picture of the daily impact your changes could have when rolled out, making it easier to evaluate their practical value.
Metrics Overview Tab Out of Beta
Thanks to all your feedback, the new Metrics Overview tab is now officially out of beta.
- Cleaner layout and better tooltips and timestamps
- Consolidation of GST and Fixed Horizon experiment results in a single table
- Display Impact per Day for all relevant metrics
- More actionable information at a glance
These changes are designed to help you make faster, more informed decisions when reviewing experiment results.
Please let us know if you have any questions or would like to discuss future features with us.
Original source - Jun 1, 2025
- Date parsed from source:Jun 1, 2025
- First seen by Releasebot:Sep 27, 2025
June 2025
The latest release adds Team Management to map your org, a one-click export for experiment data, and improved Metrics/Goals lists for easier discovery. It also includes broader stability fixes and performance tweaks to streamline experimentation at scale.
Our latest release introduces new features to support better collaboration, data access, and metric/goal discoverability. With foundational Team Management features and a faster way to download experiment data, this update sets the stage for a more streamlined and scalable experimentation experience.
New Feature: Team Management (Foundations)
Teams are now much more than just metadata. You can properly define and manage teams within the platform. This update allows you to map your internal organisational structure directly into the product by:
- Creating and managing teams
- Assigning users to teams
- Structuring teams hierarchically (e.g. parent and child teams)
This is the first step toward upcoming improvements around ownership, permissions, and team-level collaboration. Go to Settings > Teams to get started with managing your team and inviting your team members. If you already have teams defined in ABsmartly, you can simply move them to the right place in your org structure. Check our wiki page for more information on creating and managing teams in ABsmartly.
New Feature: One-Click Experiment Data Export
We added an Export data button to the experiment overview page, allowing you to export participant exposures and goal event data for that experiment with a single click.
- No need to navigate through the events page or APIs
- Directly download all experiment-related data with a single click
- Makes it easier for teams to access raw data for custom analysis or deep dives
From the experiment overview page:
- Click on ... and select Export data.
- You will get a confirmation that the request is in progress. The process might take a while depending on the size of the experiment.
- You will get a notification on the experiment activity page once the export file is ready.
- The export file will be available for 30 days once it has been created.
Only users with the "Events:List" permission can trigger the export and download the file.
Improved Metrics & Goals List Pages
We've improved the Metrics and Goals list pages to:
- Improve searchability and filtering (archived vs unarchived)
- Help you quickly identify and select the metrics and goals that matter
Other Fixes & Improvements
This release also includes several smaller enhancements and bug fixes to improve overall platform stability and performance.
Feedback and Next Steps
As always, we value your feedback, so please share with us what you would like to see improve in the future.
Original source - May 1, 2025
- Date parsed from source:May 1, 2025
- First seen by Releasebot:Sep 27, 2025
May 2025
This release adds practical backend work and several user-facing updates: you can now click charts in the Velocity Report to see underlying experiments, see a breakdown between early full-on vs early stops, and view GST data in the Experiment Dashboard metrics. A new UI toggle lets you preview the redesigned dashboard, with feedback invited. The focus remains on stability and paving the way for future features.
This release focuses on backend improvements and foundational work to support future features. While there are few visible changes for you today, these updates improve platform stability and pave the way for upcoming capabilities. We still managed to squeeze a few improvements to the Velocity report and to the Experiment Dashboard redesign.
Velocity Report
- View the list of experiments: Answering our most-requested Velocity Report enhancement, you can now click any chart in the Velocity Report to view the underlying experiments.
- Early stop vs Early full-on: The Experiments not completed graph now shows the breakdown between the early full-on and early stops.
Experiment Dashboard Redesign
Our new Experiment Dashboard redesign is still in progress and available for preview. The main goal is to consolidate how Fixed Horizon and Group Sequential experiments are shown and to make it easier for everyone to make quality decisions. In this release, we added GST data to the metrics table. Please have a look and let us know what you think.
- How to see it? Within the dashboard of any existing experiment you will have a 'Load new UI' button which will enable you to toggle between the old and the new view.
- What’s next? Give us your feedback below or send it to [email protected].
Feedback and Next Steps
As always we value your feedback so please share with us what you would like to see improve in the future. We have some really exciting new features coming your way in the next few releases so stay tuned.
Original source - Apr 1, 2025
- Date parsed from source:Apr 1, 2025
- First seen by Releasebot:Sep 27, 2025
April 2025
This release enhances Velocity and Decisions Reports with quarterly aggregation, a new Aborted Reasons report, and a fix for over-reporting. Decisions Reports gain secondary/guardrail metrics and health checks, plus ongoing feature progress and roadmap teasers.
In this release, we focused mainly on improvements to the Velocity and Decisions Report. All those improvements are driven by feedback we received from you, so keep them coming. Here are some of the most important updates.
Velocity Report
- Aggregate data per quarter: To help you run your quarterly experimentation reports we made it possible to aggregate the data per quarter (previously you could only aggregate per year, month or week).
- Aborted reasons report: We introduced a new Aborted Reasons report. This report breaks down all aborted experiments (experiments stopped early) by their aborted reason which is captured when the experiment is stopped. This new report will make it easier to understand bottlenecks and current issues in your experimentation program.
- Experiment completed: We fixed an issue where we were over-reporting the number of completed experiments.
Stay tuned: One of the main feature requests we received for the Velocity Report is to be able to open the list of experiments in each report. We are hoping to introduce this feature in one of our upcoming releases.
Decisions Report
Earlier this month we launched the new Decisions Reports (Beta). In this release we are introducing a few improvements to provide better insights into each decision.
- Secondary & Guardrail metrics: Like with the primary metric, you can now see the experiment results on the secondary and guardrail metrics. This helps understand the rationale behind each decision.
- Health checks: We are now surfacing any health check violations in the reports so you can get better insights into the quality of each decision.
Feedback & Next Steps
As always, we value your feedback, so please share what you would like to see improved in the future. We have some really exciting new features coming your way in the next few releases, so stay tuned.
Original source - Mar 1, 2025
- Date parsed from source:Mar 1, 2025
- First seen by Releasebot:Sep 27, 2025
March 2025
This release updates the Velocity Report with new filters, a Unique Experiments toggle, and a Not completed widget, launches a beta Decisions Report, and previews a redesigned Experiment Overview dashboard to unify Fixed Horizon and Group Sequential views.
This release focuses mainly on our reports by improving the Velocity Report and launching the new Decisions Report. Both of those reports will help you get a better overview and track your experimentation program more effectively.
Velocity Report updates
In this release we focused on improving the Velocity Report, which we launched in February. Thank you for all the great feedback you sent us (keep it coming); we tried to address as much of it as possible. Here are some key updates:
- Added Teams and Tags filters: Being able to report on specific teams and filter on tags was the main feedback we received.
- Improved Filters usability: We made it easier to select multiple values in the filters.
- Unique Experiments toggle: We replaced the old 'Show iterations' toggle with a new 'Unique Experiments' toggle.
- Not Completed widget: To make it clear that it includes early Full On as well as early stops, we renamed the Aborted widget to Not completed.
Beta Feature: Decisions Report
After the Velocity Report, we are also excited to launch the second part of our reports, the Decisions Report. While the Velocity Report focuses on the experiments you and your colleagues run, the Decisions Report focuses on the choices you make as a result of those experiments. Those decisions include Full On, Keep Current, and Abort. This report also includes a Decisions Timeline, which makes it easy to quickly browse through past decisions.
- How to enable it? If you have not enabled reports yet, your platform admin can turn on the beta version of 'Experiment reports' from the platform settings page.
- What's next? We are already working on some improvements, but as with the Velocity Report, we are very much looking forward to your feedback on this first release of the Decisions Report so we can get it right for you.
Experiment Dashboard redesign
We are working on a new, improved version of one of the most important parts of the product: the Experiment Dashboard page (which we have renamed the Experiment Overview page). While we did not quite manage to finish it in time for this release, we want to give you a preview of what we are doing so you can share some early feedback. The main goal of this redesign is to consolidate how Fixed Horizon and Group Sequential experiments are shown and to make it easier for everyone to make quality decisions.
- How to see it? Within the dashboard of any existing experiment, you will see a 'Load new UI' alert that lets you toggle between the old and the new view.
- What's next? Give us your feedback below or send it to [email protected].
Feedback & Next Steps
We want to build the best possible experimentation platform for you, so if you have any thoughts on this release or other features, please let us know.
In the next few weeks, we will start improving our Feature Flags capabilities. While we have a good idea of the problems to solve and features to deliver, we would love to hear your use cases. If you have particular needs or requirements, let us know and we can schedule time to chat.
Original source - Feb 1, 2025
- Date parsed from source:Feb 1, 2025
- First seen by Releasebot:Sep 27, 2025
February 2025
This release adds a Velocity Report (Beta) to improve visibility into experimentation speed, plus 50+ bug fixes and usability improvements, including easier navigation, clearer metric tooltips, better alerts, and a more dynamic GST graph.
This release focuses on performance improvements, bug fixes, and usability enhancements based on customer feedback and QA reports. Additionally, we're excited to introduce the beta version of our new Velocity Report, designed to help you track your experimentation program more effectively.
Beta Feature: Velocity Report
We're introducing the Velocity Report (Beta) to give teams better visibility into experimentation speed and execution trends. This report helps you understand how efficiently experiments move through different stages, providing insights to optimize your experimentation process.
- How to enable it? Your platform admin can turn on the beta version of 'Experiment reports' from the platform settings page.
- What's next? We're already working on the next iteration and would love your feedback! Is this report useful? What additional insights would you like to see on the report? Let us know so we can improve it together.
Bug Fixes & Improvements
This release includes 50+ bug fixes and small improvements reported by our customers and QA team. Here are some key updates:
- Improved Navigation: You can now Cmd+Click (Ctrl+Click) to open experiments, metrics, or any other link in a separate tab.
- Metric Usability Enhancements:
- Improved tooltips for metrics across the platform to provide clearer definitions.
- Fixed negative metrics displaying incorrect colors on the Explore tab.
- Fixed the ordering of metrics on the overview page for a more consistent experience.
- Experiment Insights:
- Enhanced recommended action alerts when experiments are completed, making it easier to evaluate results.
- Added a decision snapshot label on stopped and completed experiments so you know which dataset you're looking at.
- Made the GST graph more dynamic, improving usability when the observed effect is large.
- Experiment creation:
- Improved fetching of max participant numbers for better accuracy.
- Introduced data sampling in metric performance graphs for improved platform performance.
These updates enhance platform stability, performance, and usability to make experimentation even smoother.
Feedback & Next Steps
Your input is invaluable! If you have thoughts on the Velocity Report (Beta) or any other updates, we'd love to hear from you.
Stay tuned for more improvements in the next release!
Original source