On the podcast: the cost of not tracking your experiments and decisions, how refunds and chargebacks quietly erase your paywall wins, and why stacking A/B test wins should compound your growth, but almost never does.
This conversation is shorter than usual and will be featured in RevenueCat’s State of Subscription Apps report. Each episode in this series will explore one crucial topic and share actionable insights from top subscription app operators.
Top Takeaways:
💸 Map your revenue history before running new experiments
Chart revenue across new subscribers, upgrades, renewals, and win-backs over time. Matching spikes and dips to past decisions reveals what actually moved the business and prevents you from re-learning expensive lessons.
🤫 Refunds and chargebacks are silent killers
A paywall “win” can quickly become a net negative if you aren’t tracking the downstream effects of cancellations, refunds, and chargebacks, which often hide the true cost of a seemingly successful experiment.
📈 If your A/B test wins aren't showing up in top-line growth, something is wrong
Stacking 5% and 10% experiment wins should compound, but many teams see modest growth despite a long list of "winners". Set calendar reminders to recheck winning cohorts at 3 and 6 months, especially for price changes, to catch lifts that don't hold.
About Sara Grana:
🚀 Revenue Strategy Lead at Yousician, a revolutionary music platform for anyone to learn, play, create, and teach music.
Follow us on X:
David Barnard - @drbarnard
Jacob Eiting - @jeiting
RevenueCat - @RevenueCat
SubClub - @SubClubHQ
Episode Highlights:
[0:00] Introduction to Sara Grana, Revenue Strategy Lead at Yousician
[1:05] The importance of tracking experiments and business decisions in subscription apps
[2:19] Mapping revenue and understanding its evolution across different user segments
[3:06] Tracking revenue changes and connecting them to business decisions
[4:34] The pitfalls of focusing too much on early funnel metrics and ignoring long-term impacts
[5:26] The impact of chargebacks and refunds on paywall performance and customer retention
[7:20] Why understanding downstream effects is crucial for making smart pricing decisions
[8:44] The challenges and opportunities of introducing new subscription plans (e.g., lifetime subscriptions)
[9:43] How commercial strategy influences churn rates and renewals
[13:13] The importance of rechecking experiments after months to measure long-term impact
[14:52] Sara's advice on when to revisit experiments based on their impact on pricing and user behavior
[15:49] Tracking cohort data for subscription retention and understanding renewal trends
[16:21] Why surprising lifts in experiments may require deeper investigation
[17:13] The mismatch between short-term experiment results and long-term growth expectations
[18:02] Final thoughts on driving sustainable growth, tracking, and adapting strategies over time
David Barnard:
Welcome to the Sub Club Podcast, a show dedicated to the best practices for building and growing app businesses. We sit down with the entrepreneurs, investors, and builders behind the most successful apps in the world to learn from their successes and failures. Sub Club is brought to you by RevenueCat. Thousands of the world's best apps trust RevenueCat to power in-app purchases, manage customers, and grow revenue across iOS, Android, and the web. You can learn more at revenuecat.com. Let's get into the show.
Hello, I'm your host, David Barnard. Today's conversation is shorter than usual and will be featured in RevenueCat's State of Subscription Apps report. Each episode in this series will explore one crucial topic and share actionable insights from top subscription app operators. With me today, Sara Grana, who works on revenue strategy at Yousician. On the podcast, I talk with Sara about the cost of not tracking your experiments and decisions, how refunds and chargebacks quietly erase your paywall wins, and why stacking A/B test wins should compound your growth, but almost never does. Hey Sara, thanks so much for joining me on the podcast today.
Sara Grana:
Thanks for having me, David.
David Barnard:
So you spent almost seven years at Babbel and recently transitioned to Yousician, and one of the things you told me when we were preparing for this was that the first thing you did at Yousician was you asked, "Where's your log of experiments? Where's your log of business decisions?" I don't think a lot of companies keep that kind of record. So why was that the first thing you asked and how do you recommend doing that?
Sara Grana:
So when I start at a company, and also when I started at Babbel in my role in revenue strategy, I really look at, okay, what is the map of our revenue over the years? In a subscription business, your revenue can come from four buckets: people that never had a subscription and start a new one, people that upgraded from one subscription to another, people that renew a subscription, or people that used to have a subscription, churned, and then came back. Having the history of how these four buckets evolve can tell you a lot. So sometimes when you cannot find the experiments or whatnot, you can see, "Oh, I see that from February 2023, all of a sudden, the new subscriber revenue went really, really big. What happened?" And they're like, "Oh, yeah, this is when we started the lifetime subscription." Or that type of thing.
So sometimes it's about finding that. Yeah, it's great if they have a log and you can go through experiments, but you need to differentiate what is important versus not, because some companies run a lot of experiments, so it's not really useful to go through all of it. So I would also recommend: map the revenue, understand what are the big differences that you see, and then try to map, okay, something happened here, what happened? That will tell you a lot about the history and how things work together in that particular company or sector.
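To make that revenue mapping concrete, here is a minimal sketch in Python. The table, column names, and figures are hypothetical, not Yousician or Babbel data; in practice the transactions would come from your billing or analytics warehouse, already labeled with one of the four buckets Sara describes.

```python
import pandas as pd

# Hypothetical transaction-level data; values are illustrative only.
transactions = pd.DataFrame({
    "month": ["2023-01", "2023-01", "2023-02", "2023-02", "2023-02", "2023-03"],
    "bucket": ["new", "renewal", "new", "upgrade", "win_back", "renewal"],
    "revenue": [120.0, 80.0, 310.0, 45.0, 25.0, 95.0],
})

# Pivot into the four buckets: new subscribers, upgrades, renewals, and
# win-backs, tracked month by month.
revenue_map = (
    transactions
    .pivot_table(index="month", columns="bucket", values="revenue", aggfunc="sum")
    .fillna(0.0)
)

# Month-over-month change per bucket makes inflection points easy to spot,
# so a sudden jump can be matched back to a business decision.
print(revenue_map)
print(revenue_map.pct_change())
```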
David Barnard:
How would you recommend actually tracking that? In your ideal state, is that just in a Notion doc, in a Google Doc, or is it in a spreadsheet with the revenue changes that happen? What does the ideal state of something like that look like?
Sara Grana:
Basically, the revenue buckets, so to say, to me, it's Excel. I would track in Excel how we're moving, and then from that, maybe have a little presentation for me or to share, and then you map, okay, "Here, this line, this happened because of this thing." And then you can link to whatever documentation the company has, or, if you stay in Excel, you can put the links of what happened when within the same file. But yeah, I'm a bit of an Excel person. Everyone needs to find their own way.
David Barnard:
When you're trying to optimize conversion and retention, when you're working on revenue strategy at a company as you do, all these little things add up to become the product in a way that is hard to untangle. And so having this record means that when you ran an experiment, you can go back and look at that cohort: did it churn at a higher rate? Did they engage in the app at a different rate? How did they engage in the app differently? All of that leads to making better decisions over the long haul, being able to look back historically, understand those decisions, and maybe retest some of the assumptions from the past. But one of the big things is conversion over-optimization. I think a lot of apps are falling into this trap these days of getting too focused on early funnel metrics at the expense of down-funnel metrics. So what are some of the pitfalls you've seen in that?
Sara Grana:
I think there's two major things that I keep seeing over and over again. One of them is not giving the cohort enough time to evolve and seeing what happens later. What you mentioned, like what happens with the renewal rates. Like, okay, we have a price increase. Oh, great, we did amazing with all this. And then six months later, you look at the cohort and say, "Oh, actually, the control group is performing better than the test group because they renew more." So one of them is not even giving it the time. Sometimes it's fine to roll out, it's fine, you have a win, you roll, but then look back. And then the second one is within the same moment in time, the same snippet in time, but not looking at the right metrics. Sometimes we forget to look at some metrics. I'll give a really clear example.
Sometimes with a win-back campaign for people that cancel the auto-renewal, okay, when they do that, we're going to send an offer, right? They cancel the auto-renewal, we send an offer, because then maybe we get them back. Okay, great, we do this. Great, amazing numbers. Then you look a bit deeper and you realize, oh, a lot of people are canceling, turning their auto-renewal off right at the beginning, and what they are doing is asking for a refund and then getting the offer. So you are actually net negative. I think that's also something that tends to happen, especially with refunds and chargebacks, because people tend to forget about those, as if they never happen. And yeah, even without waiting for time to pass, you might be messing up your system. So you really need to understand your whole set of metrics and how they all work together, because in most cases, something goes up and something goes down, and you need to make sure you know what is going on.
David Barnard:
I hear this all the time with price testing where you double the price and you exactly match the revenue. So conversion cuts in half. Sometimes you'll get that win where you double the price and you do get a 25% lift in average revenue per user or something like that. But you always got to be looking for those downstream things. Like you said, one thing goes up, another goes down. When you look at the entire lifecycle of a subscriber, any one movement here can have downstream effects if you're not really carefully looking for it.
And I think we're just in this mode as an industry of chasing payback as quickly as possible, and sometimes it's just a business decision you have to make. You have to make that decision of like we're going to sacrifice long-term revenue for being able to hit that ROAS at day whatever 7, 30, 90, whatever you're targeting. But I think what's really important and what you're hinting at is you got to know what you're sacrificing. You got to know as you push this number up, what number is going down and are you willing to make that trade? Because a lot of times, people aren't tracking those down funnel numbers and don't know the trade they're making. What are some of the specific examples you've seen of that kind of over optimization and how it does impact the long term?
Sara Grana:
It also happens a lot when introducing new plans, like when you didn't have a yearly subscription or when you didn't have the... I mean, lifetime is a huge example, because the value that you get from the get-go is really high. And then every time you do a test like lifetime versus something else, the something else is always going to lose, because people are not looking at what the LTV of that something else is. If they would have bought a [inaudible 00:08:35] subscription, they are not bringing you the 100 euros, they are bringing you 300 euros, so don't compare it with 250, compare it to 300, and things like that. So every time there's a new plan, please always look at the lifetime value that this plan would have had. And then another thing, like I said before, refunds and chargebacks are something that people tend to forget about. So if you are introducing a new payment method and things like this, especially look at those methods, look at what is happening there.
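A back-of-the-envelope version of that comparison, using numbers that echo Sara's example; the prices and the assumed renewal rate below are illustrative only, not real plan data.

```python
# Illustrative numbers only, not actual Yousician or Babbel prices.
lifetime_price = 250.0       # one-off lifetime payment
yearly_price = 100.0         # first-year payment of the recurring plan
yearly_renewal_rate = 0.67   # assumed probability of renewing each year

# Expected LTV of the yearly plan as a geometric series of renewals:
# 100 + 100*0.67 + 100*0.67^2 + ... = 100 / (1 - 0.67)
yearly_ltv = yearly_price / (1 - yearly_renewal_rate)

print(f"Lifetime: {lifetime_price:.0f} vs yearly LTV: {yearly_ltv:.0f}")
# With these assumptions the yearly plan is worth roughly 300 over its life,
# so the fair comparison is 250 vs ~300, not 250 vs the first 100.
```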
David Barnard:
Any other specific places you think people should be paying especially close attention to? I know paywall optimization is a huge thing, and I've been frankly surprised at how big the moves are that folks have been able to see on the paywall. Over this past year, one of the really big paywall optimization things that people have been doing is having a toggle to enable the free trial. So by default, you don't have a free trial; you tap a button or slide a slider and that enables the free trial. Now Apple has started to reject some of that, so I don't know if that's going to be something that is allowed long term, but I've been shocked at what a big lift that can have. But again, I haven't talked to folks who looked at that cohort six months, 12 months down the line. I mean, I think that started about 12 months ago, so you're going to start seeing those cohorts maturing. Any other things like that that you've seen that very specifically lead to down-funnel problems?
Sara Grana:
Sometimes it's not so much a down-funnel problem like we said, but more that you are just pulling the revenue to the beginning, right? It's just more revenue at the beginning rather than at the end, or not. But one thing that is really clear is with churn. People usually tend to just think churn is a product thing. Like, churn is a product problem. But most of the time, I mean, not most of the time, let's not say most of the time, but a lot of the time, the commercial strategy that you have, what people buy, when they buy it, at which price they buy it, is going to have such an enormous impact on your renewals and extensions. And what happens is that people on the marketing side don't really see the [inaudible 00:10:53], and then the product side also doesn't have a view on what is happening at the beginning of the funnel, in a way.
For example, something that I think is going to happen right now is we have the web checkout now, right? On iOS, you can send people to buy there. What I've seen in different companies is that the web renewal rates are significantly higher than the app rates. So what is going to happen now is a lot of teams, a lot of product teams, are going to be like, "Wow, our product is amazing. We have increased our renewals by whatever, 20%." If they were to slice by the cohort of users that bought natively in-app versus the cohort of users that bought through web, maybe the renewal rates are flat, and what just happened is that you are acquiring users through other means. The same can be said, of course, for the subscription plans. If the share of users that buy a one-month subscription is bigger or grows for whatever reason, your renewals are also going to get better if you look at the overall number.
So when you're looking at data, think about the cohorts of the different users: what they bought, when they bought it, whether it was discounted or not. Discounting is also another clear example. Sometimes you do the discount, you bring a lot of revenue, but then those users are not going to renew. Or sometimes you do a price increase, you bring more revenue because your conversion goes down, but not as much as the price goes up. But then with those users, you have a smaller pool of users to upgrade, and things like this. So there's a lot of different colors to this problem.
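The web-versus-app point is really a mix-shift effect, and it is easy to check with a small sketch like the one below. The renewal rates and subscriber counts are invented for illustration: both channels stay flat, yet the blended number "improves" purely because more volume moves to the higher-renewing web cohort.

```python
# Invented renewal rates and volumes, purely for illustration.
APP_RENEWAL, WEB_RENEWAL = 0.40, 0.60

def blended_rate(app_subs: int, web_subs: int) -> float:
    """Blended renewal rate across both purchase channels."""
    total = app_subs + web_subs
    return (app_subs * APP_RENEWAL + web_subs * WEB_RENEWAL) / total

before = blended_rate(app_subs=9000, web_subs=1000)  # mostly in-app buyers
after = blended_rate(app_subs=6000, web_subs=4000)   # web checkout ramps up

print(f"Blended renewal rate: {before:.1%} -> {after:.1%}")
# 42.0% -> 48.0% with no change in either channel's renewal behavior.
# Slicing cohorts by purchase channel (and plan, and discount) avoids
# crediting product changes for what is really an acquisition-mix change.
```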
David Barnard:
So what are the specific metrics and ways you dive into the data to better understand that? Are you looking at both subscription retention and usage retention of very specific features? How are you watching that entire funnel? What's your preferred way to look at it, and what are the metrics you most look at?
Sara Grana:
So what is most important to me is which plan they bought, discounted or not, and where: web versus app. Then sometimes, and this is something you need to look at for your company, the marketing channel might make for different users and cohorts. So understand what the different ones are. And once you have that, you just look at that cohort. And then if you see that the cohort is going up or down, okay, probably [inaudible 00:13:14]. What usually happens is that the share of those different cohorts is changing over time, and this is what brings you fluctuations in renewal rates. It might be different when you're doing a specific experiment, that might change things, but I think it's always good to look at those things and differentiate what is upper funnel, as upper funnel as it gets, so to say. And then look at that.
And then when it comes to a test that I do on the paywall, I'm looking at revenue per user, of course, but I'm also always having a look at refunds and chargebacks. So I always wait a couple of weeks to see what is happening there, because that's an immediate effect that you can see. And then always after some months, recheck the cohort: what has happened with these people? Sometimes it's a bit frustrating because you have the majority of users buying a 12-month subscription, so you won't see it until later on. But maybe there's something you can do: if you see the one-month subscribers behaving in a particular way, maybe you can assume that the same would happen for the 12 months.
You can also, when you have the result, make some scenarios of renewal rates for those different plans. If the one-month plan were to renew at 10% less or 5% more, whatever you want based on the result of your test, what would happen? And then you have a really easy way to check, in a sense. You keep an eye on it: okay, if this goes below 10, then I know we are in trouble. I know that probably the winner is actually the loser.
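A minimal sketch of that scenario check, using a deliberately simplified one-renewal-cycle model; all numbers are placeholders to plug your own test results into. The idea is to work out how far the test cohort's renewal rate can fall before the short-term winner becomes the loser, and use that threshold when the calendar reminder fires months later.

```python
# Placeholder numbers for illustration; use your own test results.
control_rev_per_user = 10.0   # first-purchase revenue per user, control
test_rev_per_user = 11.0      # first-purchase revenue per user, test (+10%)
control_renewal_rate = 0.50   # observed renewal rate of the control cohort
renewal_value = 10.0          # revenue from one renewal (one cycle only)

# Renewal rate at which the test cohort's total revenue per user
# equals the control cohort's:
#   control_rev + control_renewal * renewal_value
#     == test_rev + break_even * renewal_value
break_even_renewal = (
    control_rev_per_user
    + control_renewal_rate * renewal_value
    - test_rev_per_user
) / renewal_value

print(f"Test cohort must renew above {break_even_renewal:.0%} to stay a real win.")
# Here the threshold is 40%. If the 3- or 6-month recheck shows the test
# cohort renewing below that, the short-term winner was actually the loser.
```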
David Barnard:
And this gets back to what we started with, keeping a detailed history. So when you do run an experiment, do you set an alarm for, like, three months from now?
Sara Grana:
Yeah, my Google Calendar up there has this experiment recheck with my live data analyst. Yeah.
David Barnard:
Gotcha. So you are looking back at specific intervals. And do you do that for... I guess you wouldn't do that for every test. What's the criteria for the things that you most care to look back on?
Sara Grana:
I think anything that is a price change, I really care about. Because, for example, if it's about, oh, the layout is different or the copy is different, then sure, maybe there's a difference in renewal, but why would there be, right? So there are certain things that probably aren't worth it, and you shouldn't be spending a lot of time on those. The thing is also, the more you do it, the easier it gets. It's just another part of the analysis, so to say. It's already set up that way. So you already have a cohort, and it's like, okay, you pull it back up, okay, here. And sometimes with some tools, it's even there that you can just recheck how this test performed. So it doesn't need to be really time-consuming. But usually, everything that involves price or discounting or things like this, I would always recheck after, depending on what we are selling, three months, six months. And then if you have to wait a year, it's... Yeah.
David Barnard:
It seems like, too, maybe the priority would be to make sure you're checking back on the things that made the biggest move. So maybe there was a copy change on the paywall button that did a 25% lift, which sounds crazy, but it happens sometimes that you have these crazy lifts. And so maybe those kinds of things would also fall into that bucket: anything that made a surprisingly big lift, mark that one to go back on, because-
Sara Grana:
Especially if you are then not seeing it, because something that happens a lot is, oh, we have all these experiments, all positive, but then when you're looking at the numbers, you're still flat, you're still not improving that much. Especially if you see that, then it's even more time to recheck, actually, I would say. If you see that the growth of your company follows all your A/B test increases, then it's less of a red flag. Maybe you don't need to retest; you seem to be doing fine. I think a lot of the time we're like, "Yeah, all this..." Look at all the list of experiments with 5%, 10%. I'm like, "But wait a second, what? Why are we not growing 30% year-on-year if we have all these wins?" And I think this happens also with product, with a new feature, whatnot. Yeah, this lift, this lift. But then when you look over time, it's not really holding together. So there's also, I don't know why it happens, right? But there's this mismatch between what your A/B tests are telling you and what the long-term effects are telling you.
David Barnard:
That is such a great point. It's like you can stack all these wins. "Oh, we got 10% higher completion in the onboarding step. Ooh, we got 15%."
Sara Grana:
And it should compound. It should even compound. So it should be even way more than that and it's not. So, yeah.
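The compounding math makes that red flag easy to quantify; the individual lifts below are hypothetical, but the check is the same for any list of reported wins.

```python
# Hypothetical reported lifts from a year of "winning" experiments.
reported_lifts = [0.05, 0.10, 0.07, 0.05, 0.08]

compounded = 1.0
for lift in reported_lifts:
    compounded *= (1 + lift)

print(f"Implied compounded growth: {compounded - 1:.0%}")
# Roughly 40% implied growth. If actual top-line growth is nowhere near that,
# some of the "wins" did not hold and are worth rechecking.
```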
David Barnard:
Tracking and making sense of all this is such a challenge, but that's why you need to do it. You're not actually getting that 10% increase if you're not actually getting that 10% increase, and you're never going to know if you do these experiments in isolation and aren't tracking everything. So it was so fun to chat with you about all of this. Thank you so much for coming on the podcast. Anything else you wanted to share as we're wrapping up? I know you've just started this new role at Yousician. Any job listings you want to shout out or anything else like that?
Sara Grana:
Yeah, we are looking for people. It keeps changing, so I would just go to the careers page on yousician.com. But yeah, it's a really fun company to work at. Helping people learn to play an instrument, that's a great thing to do. Yeah.
David Barnard:
Awesome. Well, thank you so much for joining me. This was great.
Sara Grana:
Thanks a lot.
David Barnard:
Thanks so much for listening. If you have a minute, please leave a review in your favorite podcast player. You can also stop by chat.subclub.com to join our private community.

