ABsmartly Release Notes

Last updated: Jan 15, 2026

  • Dec 1, 2025
    • Date parsed from source:
      Dec 1, 2025
    • First seen by Releasebot:
      Jan 15, 2026

    ABSmartly by ABsmartly

    December 2025

    Big metrics governance update unlocks new metric categories, richer metadata, and a redesigned metric view to help you find and manage the right metrics. It also introduces metric versioning, usage insights, and smarter selection in experiments for stronger governance.

    Overview

    This release is all about Metrics. As part of our broader initiative to improve metric governance, we’ve introduced powerful new capabilities to help you better manage, understand, and select the right metrics for your experiments.

    General improvements

    We've made some general improvements to Metrics that you will see across the platform.

    New Metric Categories type

    We've added a new configuration type that helps categorise and group metrics. Those new metric categories will make it easier to find the right metrics when creating an experiment.

    While the categories should reflect your own needs, here is a list of possible metric categories you could add to your ABsmartly instance:

    • Conversion: Measures whether users complete a desired action.
    • Revenue: Captures direct monetary impact.
    • Engagement: Reflects how actively users interact with the product.
    • Retention: Shows whether users come back or continue using the product over time.
    • Performance: Measures speed and responsiveness, such as load time or latency.
    • Reliability: Tracks stability and correctness, including errors, failures, or availability.
    • Quality: Represents outcome quality or user experience signals like cancellations, refunds, or unsuccessful outcomes.

    New metric metadata fields

    We've added new metadata fields to metrics that help with discoverability and filtering across the platform. This includes:

    • Unit type: The list of unit type(s) (e.g. user_id, device_id) for which this metric is computed. Setting the correct unit type(s) will help experimenters choose the right metric for their experiments.
    • Application: This is the list of Application(s) where this metric makes sense. For example, an app_crashes metric only makes sense for experiments running on app platforms.
    • Metric category: This is the category the metric belongs to. This will make your metric more discoverable. See above.
      All these fields are optional, but we recommend updating your existing metrics, as this will improve their discoverability across the platform.
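
    To illustrate, here is a minimal sketch of how the new metadata can drive metric discovery. The field names follow this release, but the data shapes and helper are hypothetical, not the ABsmartly API:

```python
# Illustrative sketch: metrics carrying the new metadata fields, and a filter
# that surfaces the ones relevant to an experiment's unit type and application.
# Data shapes and the helper are hypothetical, not the ABsmartly API.

metrics = [
    {
        "name": "checkout_conversion",
        "category": "Conversion",       # new Metric category field
        "unit_types": ["user_id"],      # new Unit type field
        "applications": ["web"],        # new Application field
    },
    {
        "name": "app_crashes",
        "category": "Reliability",
        "unit_types": ["device_id"],
        "applications": ["ios", "android"],
    },
]

def relevant_metrics(metrics, unit_type, application):
    """Return names of metrics whose metadata matches the experiment setup."""
    return [
        m["name"]
        for m in metrics
        if unit_type in m["unit_types"] and application in m["applications"]
    ]

print(relevant_metrics(metrics, "device_id", "ios"))  # ['app_crashes']
```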

    Metric View page

    You can now click on the name of any metric across the platform to open the metric's view page. This page will give you a readable overview of the metric and will be the new entry point for managing metrics (editing and creating new versions) as well as many new upcoming features.

    Improved Metric Discoverability

    We’ve made it easier to find, understand, and select the right metrics when creating your experiments/templates/features.

    Usability improvement

    We have completely redesigned the metric selection step of the experiment setup. The goal of the new UI is to make it easier to find and add the right metrics for your experiments.

    Smarter metric selection in experiments

    By default, the metric selection step now shows the most relevant metrics based on the chosen unit type and application (make sure to update your metric metadata to get the most out of this new feature).
    Metrics can also be searched by name, tags, owners, and more, so you no longer have to scroll through a long list of existing metrics to find what you are looking for.

    Usage insights

    While adding metrics to your experiments/templates/features, you can now see how often a metric has been used in past experiments to help you assess its relevance and importance.

    TIP

    To get the most out of these improvements, we recommend reviewing your existing metrics, filling in missing metadata, and adding clear descriptions where needed.

    Metric Versioning (Foundations)

    A key part of metric governance is version control: ensuring that metric definitions are transparent, traceable, and stable over time. This release lays the groundwork for more robust version management in the future.
    Metric versioning is critical because it allows a metric to evolve over time without impacting previous experiments and decisions made using an older version of that metric.

    Metric versioning 1.0

    It is now possible for metric owners to create a new version of an existing metric. This can be done, for example, when the definition of a metric changes.

    • Creating a new version of a metric will not impact past and running experiments/features which are using a previous version of that metric.
    • Only the latest version of a metric will be discoverable and can be added to new experiments. Experimenters will only be able to see the latest version of each metric.
    • Experiments/Features cannot be started when they use an outdated version of a metric. Experimenters will be asked to update to the latest version before they can start the experiment/feature.

    Edit vs New Version

    With the launch of metric versioning, some fields can be edited in the current version of the metric while others will require a new version to be created.

    • Editable fields: Fields like Description, Tags, Category, Applications, Tracking units can safely be updated without changing the definition of a metric.
    • Non-editable fields: All other fields that might affect how the metric is computed or how its results are interpreted cannot be edited; to change them, a new version of the metric must be created.
      As a metric owner, you will be able to edit metrics and create new versions from the new Metric view page.
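
    A rough sketch of this rule, assuming the editable-field list above (the helper and the non-editable field name in the example, numerator_formula, are hypothetical, not the ABsmartly API):

```python
# Sketch of the edit-vs-new-version rule. EDITABLE_FIELDS mirrors the list
# above; the helper and field names are illustrative only.
EDITABLE_FIELDS = {"description", "tags", "category", "applications", "tracking_units"}

def required_action(changed_fields):
    """'edit' when every changed field is safely editable in place,
    'new_version' when any change could affect computation or interpretation."""
    return "edit" if set(changed_fields) <= EDITABLE_FIELDS else "new_version"

print(required_action(["description", "tags"]))  # edit
print(required_action(["numerator_formula"]))    # new_version (hypothetical field)
```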

    CAUTION

    If you are using our API to edit your metrics, you will need to update your scripts, as you will no longer be able to edit all metric fields using the edit endpoint.
    A new endpoint for creating new metric versions is now available if needed.

    What’s Next

    We’re continuing our focus on general metric improvements and metric governance in the coming sprints. Upcoming improvements include:

    • CUPED support
    • Metric lifecycle
    • Metric approval workflows
    • Metric usage overviews and reporting
      These updates are part of our broader effort to improve trust, transparency, and governance around metrics.

    Questions or Feedback?

    As always, if you have questions about this release or want to talk about how to get more out of your metrics, reach out to us anytime.

  • Nov 1, 2025
    • Date parsed from source:
      Nov 1, 2025
    • First seen by Releasebot:
      Jan 15, 2026

    ABSmartly by ABsmartly

    November 2025

    ABsmartly unveils a LaunchPad Chrome Extension in Beta to simplify building experiments with a visual editor, plus a new Ownership & Permissions model for sharing assets. This release also tightens experiment search and teases data retention rules.

    Overview

    This new release is packed with new features and improvements to help you manage your experimentation program and make it easier to create simple experiments without the need for developers.

    The ABsmartly Chrome Extension - The ABsmartly LaunchPad Beta

    We made it easier to create simple experiments using our new ABsmartly LaunchPad. This Chrome Extension makes it possible to create experiments using our new Visual Editor and without the need for developers. While the Chrome Extension is still in Beta, we encourage you to give it a try and give us feedback.
    Before you get started, see our guides on getting started with the LaunchPad and creating your first experiment.
    This first release is only the first step as we have big plans for the ABsmartly LaunchPad in 2026.

    Ownership & Permissions model - Sharing of Assets

    In this release, we make it possible to share experiments, features, templates, goals and metrics with users and teams. This feature makes it easier for teams to collaborate and use assets without having to share ownership of that asset.
    This was the last building block in our Ownership & Permission model. Some changes will be necessary on your side before you can fully make use of this new model. We encourage you to read our how-to guide and to reach out to us if necessary so we can jump on a call and guide you through those changes.
    We want to make things better and future-proof for you. We realise that we have introduced a lot of changes, but unless you take action on the items described above, nothing should change (except for Team Ownership, which is now the default but can be disabled in your settings), and you will be able to keep using the platform as you do today.
    When you are ready to start using the new functionality, let us know if you need help with any of the steps described above.

    Improved experiment search list

    We've improved the filters on the experiments list to make it easier to find what you are looking for and to align with how experiments are filtered on the Velocity Report. Here is a list of the changes we made to the search filters & search list:

    • Added a new Type filter so you can differentiate between experiments and full on instances. You can now easily find running or stopped experiments without seeing the full on instances.
    • Added Completed, Not Completed and Aborted to the State filter to align with how experiments are filtered on the velocity reports.
    • Removed Running - Not full on from the State filter, as you can now find running experiments by selecting experiment in the Type filter.
    • Renamed the Significance filter to Result and changed its values to Insignificant, Negative, and Positive to avoid confusion between experiment states and statistical significance.
    • Fixed the issue where reporting on the impact of GST experiments was not correct on the experiment list.
    • Fixed an issue so the color of the impact of inconclusive experiments is gray as on the experiment's overview page.

    Data Retention Update

    As part of our ongoing work on performance and cost optimisation, we are planning to introduce the following data retention rules in a future update:

    • 1-year retention for goal, exposure, and user attribute data. While experiment results will always remain available and visible, this change means you won't be able to explore metrics or slice and dice data for experiments that have been stopped for a year or more.
      Do you think this will impact you? If so, get in touch so we can understand your need for this data.
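
    As a sketch, the planned rule amounts to a simple cutoff check (illustrative only, not an ABsmartly API):

```python
# Illustrative check for the planned 1-year retention rule: raw goal, exposure,
# and user attribute data stops being explorable once an experiment has been
# stopped for a year or more; aggregated results remain visible regardless.
from datetime import date, timedelta

RETENTION = timedelta(days=365)

def raw_data_explorable(stopped_on, today):
    """True while the experiment has been stopped for less than RETENTION."""
    return (today - stopped_on) < RETENTION

print(raw_data_explorable(date(2025, 6, 1), date(2025, 12, 1)))  # True
print(raw_data_explorable(date(2024, 6, 1), date(2025, 12, 1)))  # False
```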

    Questions or Feedback?

    As usual, please let us know what you think, and if you have any questions, please get in touch.

  • Oct 1, 2025
    • Date parsed from source:
      Oct 1, 2025
    • First seen by Releasebot:
      Oct 24, 2025

    ABSmartly by ABsmartly

    October 2025

    The release introduces a team-level ownership model with roles like Team Viewer, Contributor, Admin, and Base User, shaping access to experiments and assets. It also adds fixes and usability tweaks, plus an exposure ignore tag and export fix. A fair-usage payload cap is announced for the next update.

    The main focus of this release is the introduction of our new ownership model, specifically team-level roles and permissions. We’ve also included a number of small improvements and bug fixes to improve usability and platform flexibility.

    Team-Level Roles and Permissions

    You can now assign users to teams and grant them team-level roles — such as Team Contributor, Team Viewer, or Team Admin — that define how they interact with experiments, metrics, goals, templates, and features owned by that team.

    This release is an intermediate step toward the full rollout of our new ownership model. Unless you explicitly add users to teams and assign them roles, there should be no changes in behavior and everything will continue working as before.

    If you encounter any issues or inconsistencies related to roles and permissions, please contact us.

    What's New

    We have introduced four new built-in roles:

    • Team Viewer
      Can view all experiments, features, templates, metrics, and goals owned by the team.
    • Team Contributor
      Can do everything a Team Viewer can, plus create/edit experiments, features, templates, metrics, and goals within the team and manage the life cycle (start, stop) of those experiments and features.
    • Team Admin
      Can do everything a Team Contributor can, and also manage the team — including adding/removing members, assigning roles, editing metadata, etc.
    • Base User
      This will replace the current default "User" role. Following the principle of least privilege, this new role provides read-only access to most components, but does not include access to experiments, features, templates, metrics, or goals — which will be managed at the team level.

    These roles are immutable, but custom roles (e.g. "Metric Owner", "Experiment Reviewer") will be supported in future releases to meet your unique needs.

    All existing global roles remain valid and can be assigned at the Global Team level, which represents your organisation. Team role permissions are inherited: roles granted at any level apply to all child teams.
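
    The inheritance rule can be sketched as a walk up the team tree. Team names and the data layout below are hypothetical; only the inheritance behaviour comes from the release notes:

```python
# Sketch of inherited team roles: a role granted at any level applies to all
# child teams. The Global Team represents the organisation and has no parent.

parents = {"Checkout": "Commerce", "Commerce": "Global", "Global": None}

# Roles granted explicitly, per (user, team).
grants = {("alice", "Commerce"): "Team Admin"}

def effective_role(user, team):
    """Walk up the hierarchy until an explicitly granted role is found."""
    while team is not None:
        role = grants.get((user, team))
        if role:
            return role
        team = parents[team]
    return None

print(effective_role("alice", "Checkout"))  # Team Admin (inherited from Commerce)
```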

    If you haven't started yet, you can begin managing your team hierarchy at: /settings/teams

    If you have any questions or want to learn how to get the most from this model, feel free to reach out!

    Other Notable Improvements and Fixes

    • Ignore exposure events
      You can now tag exposure events with a __ignore attribute to have them excluded from experiment analysis. This is useful for excluding internal users from your experiments.
    • Experiment export fix
      Fixed an issue where re-exporting experiment data for an existing export request would break the export process.
    • Metrics list UI cleanup
      Improved the layout of the metrics list page to avoid overly long description text dominating the screen.
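
    The __ignore behaviour above can be sketched as follows. The event shape is an assumption for illustration; only the __ignore attribute itself comes from the release notes:

```python
# Sketch: exposure events carrying a truthy __ignore attribute are excluded
# from experiment analysis (e.g. internal users or QA traffic).

exposures = [
    {"unit": "user-1", "variant": 0, "attributes": {}},
    {"unit": "qa-bot", "variant": 1, "attributes": {"__ignore": True}},  # internal
    {"unit": "user-2", "variant": 1, "attributes": {}},
]

def analysed_exposures(events):
    """Drop exposures tagged with __ignore before computing results."""
    return [e for e in events if not e["attributes"].get("__ignore")]

print([e["unit"] for e in analysed_exposures(exposures)])  # ['user-1', 'user-2']
```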

    Upcoming Change: Fair Usage Policy Update

    In our next release, we plan to introduce a 2KB limit on the JSON payload sent with goal and exposure events. This change is needed to keep better control over the cost and performance of the ABsmartly platform. Events whose payload exceeds the threshold will not be processed. While this limit is well above what most of our customers currently use, we will query past events and reach out to you separately if we see that you have sent larger payloads in the past, so we can better prepare for this upcoming change.

    If you have any concerns or use cases that require larger payloads, please let us know as soon as possible.
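
    If you want to guard against the limit client-side, a minimal pre-send check might look like this (a sketch, not part of the ABsmartly SDKs):

```python
# Hypothetical client-side guard for the upcoming fair-usage limit: goal and
# exposure event payloads above 2KB will not be processed by the platform.
import json

MAX_PAYLOAD_BYTES = 2 * 1024  # the 2KB limit announced above

def payload_within_limit(properties):
    """Check the serialised JSON size (in bytes) before sending the event."""
    return len(json.dumps(properties).encode("utf-8")) <= MAX_PAYLOAD_BYTES

print(payload_within_limit({"order_id": "A-123", "value": 49.99}))  # True
print(payload_within_limit({"blob": "x" * 5000}))                   # False
```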

    Questions or Feedback?

    As always, if you have questions about this release or anything else, don't hesitate to get in touch.

  • Sep 1, 2025
    • Date parsed from source:
      Sep 1, 2025
    • First seen by Releasebot:
      Sep 27, 2025

    ABSmartly by ABsmartly

    September 2025

    A focused performance and stability update with small product changes. Data older than 30 days is now excluded by default, update frequency for long-running experiments has been reduced, and fixes cover table sorting, the GST data switch, Activity tab caching, and adding favourites.

    This release includes a number of small but meaningful improvements and bug fixes. While the team continues to work on major backend updates, this update focuses on performance optimisations and fixes that improve clarity and usability.

    Performance Improvements

    To improve platform performance and reduce unnecessary load, we have made the following changes:

    • Feature flag: We no longer fetch data older than 30 days by default
    • Long-running experiments: We have reduced how frequently data is updated for experiments running longer than 60 days

    These changes help improve performance without impacting the accuracy or freshness of recent results.

    Notable Bug Fixes

    • Table sorting restored: Sorting on list tables has been re-enabled after being unintentionally removed in a previous release
    • Improved clarity for GST experiments: To avoid confusion during interim analyses, the GST data switch is now disabled until the experiment is complete. Previously, we showed non-GST-adjusted data when no boundary had been crossed, which led to misinterpretation.
    • Caching issues on the Activity tab fixed: We addressed some caching issues on the activity tab where data would not update until the page was refreshed.
    • Add to favourites fixed: Resolved an issue where adding experiments to favourites did not always work.

    Please let us know if you have any questions or feedback — and stay tuned for more significant changes coming soon!

  • Aug 1, 2025
    • Date parsed from source:
      Aug 1, 2025
    • First seen by Releasebot:
      Sep 27, 2025

    ABSmartly by ABsmartly

    August 2025

    This release enhances ownership management with flexible owners and inclusive filters, auto refresh for estimates, improved graph zoom behavior, and GST estimate accuracy fixes to boost experimentation reliability and collaboration.

    This release brings several UX improvements and bug fixes aimed at making experimentation and collaboration smoother and more reliable. We’ve made owner management more flexible, improved graph interactions, and fixed key issues around estimations and data tables.

    Improved Owner Filters and Flexibility

    • An owner can now be a single user or a team.
    • The Owner filter now accepts both users and teams when filtering, making results more inclusive.

    Bug Fixes and Usability Improvements

    • MDE and Max Runtime Automatic Estimation Refresh: Estimates now update automatically when the standard deviation changes. We have also improved estimation precision.
    • Graph Zoom Behavior: Y-axis now updates correctly when zooming in, giving you a clearer view of trends.
    • GST Estimate Accuracy: The next GST analysis estimate now correctly respects the minimum time between analyses.

    Please let us know if you have any questions or feedback — we're always happy to hear from you.

  • Jul 1, 2025
    • Date parsed from source:
      Jul 1, 2025
    • First seen by Releasebot:
      Sep 27, 2025

    ABSmartly by ABsmartly

    July 2025

    A new built-in notification system keeps teams aligned with alerts on team changes, experiment lifecycles, and feature toggles, plus a fresh Impact per Day metric to gauge real-world value. The Metrics Overview tab exits beta with a cleaner layout, consolidated results, and more actionable insights for faster, informed decisions.

    This release introduces a brand-new notification system to help teams stay on top of what matters, along with the official launch of the improved Metrics Overview tab. These updates are designed to make your workflow more informed, timely, and collaborative.

    New Feature: Built-In Notifications

    Never miss a key event again. Our new built-in notification system ensures you're always up to date with what's happening on the platform.

    • Team assignments: Get notified when you're added to or removed from a team
    • Experiment/feature lifecycle updates: Get alerts when an experiment starts, stops, is put Full On, or when a feature is turned on/off.

    More alert types will be added in upcoming releases — stay tuned!

    New Feature: Impact per Day Estimates

    We've added new Impact per Day estimates to all relevant metric types in the experiment overview to help you better understand the real-world significance of your experiment results. This gives you a clearer picture of the daily impact your changes could have when rolled out, making it easier to evaluate their practical value.

    Metrics Overview Tab Out of Beta

    Thanks to all your feedback, the new Metrics Overview tab is now officially out of beta.

    • Cleaner layout and better tooltips and timestamps
    • Consolidation of GST and Fixed Horizon experiment results in a single table
    • Display Impact per Day for all relevant metrics
    • More actionable information at a glance

    These changes are designed to help you make faster, more informed decisions when reviewing experiment results.

    Please let us know if you have any questions or would like to discuss future features with us.

  • Jun 1, 2025
    • Date parsed from source:
      Jun 1, 2025
    • First seen by Releasebot:
      Sep 27, 2025

    ABSmartly by ABsmartly

    June 2025

    The latest release adds Team Management to map your org, a one-click export for experiment data, and improved Metrics/Goals lists for easier discovery. It also includes broader stability fixes and performance tweaks to streamline experimentation at scale.

    Our latest release introduces new features to support better collaboration, data access, and metric/goal discoverability. With foundational Team Management features and a faster way to download experiment data, this update sets the stage for a more streamlined and scalable experimentation experience.

    New Feature: Team Management (Foundations)

    Teams are now much more than just metadata. You can properly define and manage teams within the platform. This update allows you to map your internal organisational structure directly into the product by:

    • Creating and managing teams
    • Assigning users to teams
    • Structuring teams hierarchically (e.g. parent and child teams)

    This is the first step toward upcoming improvements around ownership, permissions, and team-level collaboration. Go to Settings > Teams to get started with managing your team and inviting your team members. If you already have teams defined in ABsmartly, you can simply move them to the right place in your org structure. Check our wiki page for more information on creating and managing teams in ABsmartly.

    New Feature: One-Click Experiment Data Export

    We added an Export data button to the experiment overview page, allowing you to export participant exposures and goal event data for that experiment with a single click.

    • No need to navigate through the events page or APIs
    • Directly download all experiment-related data with a single click
    • Makes it easier for teams to access raw data for custom analysis or deep dives

    From the experiment overview page:

    • Click on ... and select Export data.
    • You will get a confirmation that the request is in progress. The process might take a while depending on the size of the experiment.
    • You will get a notification on the experiment activity page once the export file is ready.
    • The export file will be available for 30 days once it has been created.
      Only users with the "Events:List" permission can trigger the export and download the file.

    Improved Metrics & Goals List Pages

    We've improved the Metrics and Goals list pages to:

    • Improve searchability and filtering (archived vs. unarchived)
    • Help you quickly identify and select the metrics and goals that matter

    Other Fixes & Improvements

    This release also includes several smaller enhancements and bug fixes to improve overall platform stability and performance.

    Feedback and Next Steps

    As always, we value your feedback, so please share with us what you would like to see improve in the future.

