- Sep 22, 2025
- Parsed from source: Sep 22, 2025
- Detected by Releasebot: Sep 27, 2025
✍️ Custom Chart Annotations in Drilldown
You can now add manual annotations to Drilldown charts in Metrics Explorer, tag data points with notes, edit them, and reuse annotations across charts when date ranges overlap.
You can now add manual annotations directly to Drilldown charts in Metrics Explorer. This lets you document notable moments in your data and see them again whenever the same metrics are viewed.
What You Can Do Now
- Click any data point on a Drilldown chart to add a custom annotation
- Apply an annotation to the metric you clicked, or extend it to additional metrics
- See annotation icons whenever a chart’s date range and metrics overlap with saved annotations
- Edit existing annotations, including description, date, time, and associated metrics
How It Works
Each annotation is tied to a point in time and one or more metrics. When you load a Drilldown chart that includes both, an annotation icon appears. Click the icon to view or expand the note. You can adjust the description, date, time, and metrics at any point.
Impact on Your Analysis
Annotations help you connect changes in the data to events in the real world. For example, you can tag the day a feature shipped or note an outage that caused a traffic dip. These markers appear on charts whenever the same metrics are analyzed, so you never lose the context.
- Sep 18, 2025
- Parsed from source: Sep 18, 2025
- Detected by Releasebot: Sep 27, 2025
🔍 Conversion Drivers in Funnels on WHN
Conversion Drivers are now available in Warehouse Native and Cloud. They surface the top factors behind funnel conversions or drop-offs, with per-driver stats, drill‑downs, and one‑click grouping to speed understanding.
Conversion Drivers are now available in Warehouse Native and Cloud. They surface the most significant factors influencing funnel conversions or drop-offs, helping you quickly understand why users convert or drop off.
What You Can Do Now
- Identify high-impact drivers of conversion or drop-off
- Analyze event properties, user properties, and intermediary events
- View summaries with conversion rate, share of participants, and impact
- Drill into a driver for conversion matrices and correlation coefficients
- Group funnels by any surfaced driver with one click
How It Works
Conversion Drivers analyze columns from the metric source used in the first step of the funnel. For best results, configure your metric source as a multi-event metric source on the setup page and ensure all funnel steps come from that source. From a funnel, click a step and select "View Drop-Off & Conversion Drivers." You'll see a ranked list of factors with conversion likelihood, conversion rates, and share of participants. Clicking into a factor opens detailed comparisons and lets you regroup the funnel by that property.
Impact on Your Analysis
Funnels show what your conversion rate is. Conversion Drivers explain why, so you can investigate drop-offs, explore new funnels, and validate which user groups or behaviors matter most.
Available Now
Conversion Drivers are available now for all Warehouse Native customers. For Cloud customers, read more about how Conversion Drivers work on Cloud.
- Sep 16, 2025
- Parsed from source: Sep 16, 2025
- Detected by Releasebot: Sep 27, 2025
🚢 Custom Experiment Decision Framework
We’ve expanded the Decision Framework feature beyond templates. Now, you can directly configure and manage decision frameworks for each experiment. This gives teams a place to codify decision-making so that users can quickly move to action at the conclusion of an experiment.
To add a decision framework to your experiment, select “Add Decision Framework” from the experiment menu.
- Sep 15, 2025
- Parsed from source: Sep 15, 2025
- Detected by Releasebot: Sep 27, 2025
✅ Personal Console API Key
Statsig rolls out personal Console API keys that are automatically scoped to each user’s role, with usage traced to the key’s owner for clean audit logs. Admins can control key generation in organization settings, which helps with multi-user projects, security, and compliance.
Personal Console API Keys
You can now generate personal Console API keys in Statsig. These keys are automatically scoped to your role, ensuring the same access restrictions you already have. Each key is tied to its owner, making it easier to track usage and maintain clean audit logs.
Why it matters:
- Simplifies multi-user projects by giving every user their own key
- Provides clear ownership visibility for better security and compliance
- Admins can control the ability to generate personal keys in the organization settings
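For example, here is a minimal sketch of calling the Console API with a personal key. It assumes the key is stored in a STATSIG_CONSOLE_API_KEY environment variable and sent in the STATSIG-API-KEY header; confirm the header name and base URL against the Console API docs.

```typescript
// Minimal sketch: authenticate Console API requests with a personal key.
// STATSIG_CONSOLE_API_KEY is an assumed environment variable name; the
// header name and base URL should be confirmed against the Console API docs.
const CONSOLE_API_BASE = "https://statsigapi.net/console/v1";

async function consoleApi(path: string, init: RequestInit = {}): Promise<unknown> {
  const response = await fetch(`${CONSOLE_API_BASE}${path}`, {
    ...init,
    headers: {
      "STATSIG-API-KEY": process.env.STATSIG_CONSOLE_API_KEY ?? "",
      "Content-Type": "application/json",
      ...(init.headers ?? {}),
    },
  });
  if (!response.ok) {
    throw new Error(`Console API request failed: ${response.status}`);
  }
  return response.json();
}
```

Because the key inherits your role, any request your role couldn’t perform in the console is rejected the same way through the API, and usage is attributed to you in the audit log.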
- Sep 10, 2025
- Parsed from source: Sep 10, 2025
- Detected by Releasebot: Sep 27, 2025
🎶 API Endpoints for archiving and unarchiving Dynamic Configs
Dynamic Configs Archive and Unarchive API Endpoints
We've added two more endpoints to our Console API for Dynamic Configs. Now you can archive and unarchive a Dynamic Config in your project programmatically!
Access the endpoints here: https://docs.statsig.com/console-api/dynamic_configs
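If you’re scripting this, a minimal sketch might look like the following. The archive/unarchive route segments are illustrative assumptions, so confirm the exact paths in the docs linked above.

```typescript
// Sketch: archive or unarchive a Dynamic Config through the Console API.
// The `archive`/`unarchive` route segments are illustrative assumptions;
// confirm the exact paths at https://docs.statsig.com/console-api/dynamic_configs.
const CONSOLE_API_BASE = "https://statsigapi.net/console/v1";

async function setDynamicConfigArchived(configId: string, archived: boolean): Promise<void> {
  const action = archived ? "archive" : "unarchive";
  const response = await fetch(`${CONSOLE_API_BASE}/dynamic_configs/${configId}/${action}`, {
    method: "POST",
    headers: { "STATSIG-API-KEY": process.env.STATSIG_CONSOLE_API_KEY ?? "" },
  });
  if (!response.ok) {
    throw new Error(`Failed to ${action} dynamic config ${configId}: ${response.status}`);
  }
}

// Usage: await setDynamicConfigArchived("onboarding_config", true);
```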
- Sep 5, 2025
- Parsed from source: Sep 5, 2025
- Detected by Releasebot: Sep 27, 2025
🧪 Sampling Across All Charts
You can now enable sampling in major chart types to speed up large-dataset queries with directionally accurate results. Sampling is available at the user and event level, can be toggled on or off, is off by default, and only activates above high-volume thresholds. Early tests show substantially faster results with a minor precision tradeoff.
What You Can Do Now
- Use user-level sampling in Funnel, Distribution, Retention, and User Journey charts
- Use event-level sampling in Metric Drilldown
- Toggle sampling on or off in chart settings
- See when sampling is active, and disable it at any time for exact results
How It Works
Sampling is off by default. When toggled on, it only applies under high-volume conditions:
- Warehouse Native: Sampling applies if metric sources exceed 100K rows/day or row counts can’t be determined. For User Journeys, sampling is always applied when toggled on.
- Cloud: Sampling applies if the event volume in the query exceeds 100K. For Journeys, we look at total event volume across the company.
In Drilldown, event-level sampling is used for high-volume events unless the variance is too high, in which case we fall back to full data.
Impact on Your Analysis
Sampling helps you move faster through exploratory workflows. In early results, User Journey query times dropped by over 60% when sampling was applied.
It’s a small precision tradeoff for a much faster iteration loop.
- Aug 21, 2025
- Parsed from source: Aug 21, 2025
- Detected by Releasebot: Sep 27, 2025
🧪 Analyze Exposures in Metrics Explorer (Warehouse Native)
Experiment exposure events are now supported in Metrics Explorer on Warehouse Native. You can select them like any other event, filter or group by properties (variant, metadata), and tie rollout data directly to product metrics.
More details here: Exposures in Metrics Explorer
- Aug 20, 2025
- Parsed from source: Aug 20, 2025
- Detected by Releasebot: Sep 27, 2025
✅ Verified Cohorts and Dashboards
Admins can mark cohorts and dashboards as verified to designate official versions, prevent edits to them, and clone verified items to create editable copies—establishing a single source of truth while still enabling personal exploration.
Admins can now mark specific cohorts and dashboards as verified. This signals that they are the trusted, official versions while also protecting them from accidental edits.
What You Can Do Now
- Mark cohorts and dashboards as verified to indicate they are the approved versions
- Prevent edits to verified entities unless you are an admin
- Clone verified cohorts and dashboards to create your own editable versions
How It Works
- Cohorts: Mark as verified when creating a new cohort or by editing an existing one
- Dashboards: Mark as verified from the settings cog in the top right of the dashboard page
Impact on Your Analysis
Teams can align on a single source of truth for key cohorts and dashboards while still allowing individuals to explore their own versions without risking changes to the verified originals. This keeps shared analysis reliable and consistent.
- Aug 20, 2025
- Parsed from source: Aug 20, 2025
- Detected by Releasebot: Sep 27, 2025
🎶 Pre-Post Results
Statsig unveils Pre-Post Results, a Cloud-only feature that lets you compare user metrics before and after a full 0→100% rollout (or a rollout that started at 100% within the last 30 days), even without a control group. It auto-detects rollout completion and shows directional impact, aiding fast, full-audience launches.
Pre-Post Results on Feature Gates
This feature is currently available only on Statsig Cloud and is not yet supported on Warehouse Native. Sometimes you don’t have the luxury of launching a feature partially to your user population (e.g., to X% of users). Maybe you had to ship something immediately, rolled out a backend improvement to all users, or made a change you can’t ethically hold back from part of your audience. That’s where Pre-Post Results comes in. With Pre-Post Results, you can:
- Compare metrics before and after a feature reaches 100% rollout
- See the directional impact on key outcomes, even without a control group
Statsig automatically detects when a feature gate has been rolled out to all users (0 → 100%, or started at 100% within the last 30 days). It then compares the same users’ behavior before and after rollout, showing you whether your feature moved the needle. To learn more about our computational methodology, see the Statsig Docs.
- Aug 19, 2025
- Parsed from source: Aug 19, 2025
- Detected by Releasebot: Sep 27, 2025
🥾 Easier Bootstrapping in React Apps
Statsig adds server-side bootstrapping for the client SDK with a new StatsigBootstrapProvider. This hides setup plumbing in a server component, enabling immediate SDK readiness and simpler Next.js integration right from a Layout.tsx.
Bootstrapping generates the values for a Statsig client SDK on a server that you manage (most commonly your web server), so the SDK doesn’t have to make an extra network request before it’s ready.
This means the Statsig SDK will be ready immediately, making your page more responsive and supporting metrics like your Core Web Vitals, and in turn your SEO.
Historically, bootstrapping was tricky to set up in React, requiring a couple of different server- and client-side functions. We’ve now introduced the StatsigBootstrapProvider, which hides all of the necessary plumbing inside a server-side component, so you can add a single component in your Layout.tsx to set everything up.
We're starting with support for Next.js apps; check out the docs to get started!
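As a rough sketch of what this looks like in a Next.js App Router layout (the import path, prop names, and environment variable names below are illustrative assumptions; see the docs for the exact StatsigBootstrapProvider API):

```tsx
// Rough sketch of server-side bootstrapping in a Next.js root layout.
// The import path, prop names, and env var names are illustrative assumptions;
// see the Statsig Next.js docs for the exact StatsigBootstrapProvider API.
import type { ReactNode } from "react";
import { StatsigBootstrapProvider } from "@statsig/next"; // assumed package name

export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <html lang="en">
      <body>
        {/* Server component: client SDK values are generated here on the server,
            so the SDK is ready on first render with no extra network request. */}
        <StatsigBootstrapProvider
          user={{ userID: "user-123" }} // however you identify the current user
          clientSdkKey={process.env.NEXT_PUBLIC_STATSIG_CLIENT_KEY!} // assumed prop names
          serverSdkKey={process.env.STATSIG_SERVER_KEY!}
        >
          {children}
        </StatsigBootstrapProvider>
      </body>
    </html>
  );
}
```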