The Subscription Growth Formula: Churn Math, Retention Wins, and Smart Product Bets — Dan Layfield, Subscription Index

On the podcast: estimating the ROI of product changes before building them, calculating your subscription app's growth ceiling, and why you shouldn’t make assumptions about what is and isn’t working in other apps.


Top Takeaways:

💸 Every sprint has a price tag – Estimating ROI helps teams avoid spending $50K on ideas that won’t pay off.

⚾ Most wins take a few swings – Big features rarely land on the first try. Stick with ideas that show early promise.

⏳ Churn sets your growth ceiling – Divide monthly new users by churn to see your max subscriber base.

🧵 Polish beats flash – UX fixes in high-traffic flows (like paywalls) can outperform new features on ROI.

🚀 Fast teams win, not secret ones – Shipping fast matters more than stealth. Validate early, iterate quickly.


About Dan Layfield: 


✍️ Founder of Subscription Index, a blog that breaks down the strategy, math, and real-world lessons behind successful subscription products.

🧠 Dan helps startups grow revenue by optimizing retention, reducing churn, and making smarter product bets rooted in ROI.

💡 “Your company will not be profitable ever if the output of your sprints doesn’t exceed the cost of your sprints.”

👋 LinkedIn

Resources:

The Hidden Math of Churn: Why You Can’t Scale Past $1M — Subscription Index blog post



Episode Highlights:

[1:03] Over/under — The importance of estimating the ROI of your product development efforts in advance.

[6:39] Making a splash: The pros and cons of building features in order to get attention on social media or in the press.

[12:49] Sweat the small stuff: Why fixing “small” issues with your user experience can lead to big payoffs.

[19:41] Hitting a ceiling: How to calculate your company’s maximum subscriber base based on your monthly new users and churn rate.

[24:03] The long game: Accounting for long-term users (“locals”) versus short-term users (“tourists”) in your growth ceiling estimates.

[32:11] Good use: How the degree of product-market fit for your app affects your churn rate.

[37:40] User activation: Mitigating churn by providing a great onboarding experience and giving users early wins.

[39:21] Money talks: Why auditing your pricing tiers and payment processing systems can significantly bolster your bottom line.

David Barnard:

Welcome to the Sub Club Podcast, a show dedicated to the best practices for building and growing app businesses. We sit down with the entrepreneurs, investors, and builders behind the most successful apps in the world to learn from their successes and failures. Sub Club is brought to you by RevenueCat. Thousands of the world's best apps trust RevenueCat to power in-app purchases, manage customers, and grow revenue across iOS, Android, and the web. You can learn more at revenuecat.com. Let's get into the show. Hello, I'm your host, David Barnard, and my guest today is Dan Layfield.

After helping Codecademy scale ARR from 10 million to 55 million and working as a product manager on Uber Eats, Dan now consults with startups looking to increase subscription revenue and blogs about his learnings at subscriptionindex.com. On the podcast I talked with Dan about estimating the ROI of product changes before building them, calculating your subscription app's growth ceiling, and why you shouldn't make assumptions about what is and isn't working in other apps. Hey Dan, thanks so much for joining me on the podcast today.

Dan Layfield:

Hey, what's up David? Thanks for having me.

David Barnard:

So you've been blogging up a storm the last couple of years and I've linked to your posts a ton in the Sub Club newsletter. I've tweeted out links to your blogs, posted about them on LinkedIn and commented on LinkedIn. So I wanted to have you on to talk more in-depth about some of these things that you've been writing about. Of course, you've had a ton of great experience at Codecademy and worked with consulting clients who you've helped with monetization, and so I want to dig into some of these writings that you've done the past couple of years.

And one of the more interesting topics that you've kind of touched on here and there is this idea of focusing on the ROI of projects. This is a really tough one. So first of all, just tell me what you mean by that high-level overview and then we can kind of dig into the specifics of how you actually determine the ROI of a project.

Dan Layfield:

Sure. I'd say it's the thing you kind of learn the hard way as a product manager. I was at Codecademy for five years. I saw us grow from about 10 million ARR to about 55 million ARR. Obviously there's a ton of factors in that, not just the growth team's work, but I think it's a lesson you learn as a young product manager: you can in theory pursue lots of ideas, but when you're responsible for moving metrics, you need some concept of the headroom your projects have. You can't always forecast it 100% accurately. You're on a team with six to eight engineers and a designer and a product manager. Your two-week sprints probably cost somewhere in the area of 40 to $50,000.

So again, I always have to keep some mental math running on the likely output of the things we're working on. Your company literally will not be profitable, ever, if the output of your sprints doesn't exceed the cost of your sprints. I think that concept makes sense to people. It's a really hard one to implement in practice. In my experience, it's not productive to try to 100% forecast the impact of every product change you're going to make, but the way I would think about it is to divide up the work you're going to do in a quarter into buckets: technical debt and cleanup, small stuff and bugs that just have to happen, and big swings.

Big swings are typically where things get expensive, where you're implementing giant new features or you're taking a big shot. I think those projects definitely need headroom analysis. You need a concept of what the upside could be for all the money you're going to put into them through development.

David Barnard:

A lot of folks go into these big projects. I'm in the middle of one right now with my side project weather app. We've spent the last five months on a big swing, but honestly I didn't put dollar numbers to that. The hardest thing on the big swings is that you also don't know how long they're actually going to take. It's really tough to estimate those things. Let's start with the big swings and we'll kind of work backward from there. When thinking about those big swings, how do you think about putting numbers to the potential for a big swing?

Dan Layfield:

Across my career, it's always helped me to do the basic math. So we have an LTV of a hundred dollars. We think this feature is going to increase early life cycle usage and therefore lower churn and therefore increase LTV. I still think it's good to chart out the basic assumptions of how many people do we sign up per month, how many people activate, on what screen do we think people would see this feature, what percent might click on it and of that what percent might adopt it? So that's pretty simplistic math. It's not going to show you necessarily if you're right, but it will show you if your assumptions are wrong.
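[Editor's note: a minimal sketch of the headroom math Dan describes here, in Python. Every input number, and the shortcut that LTV scales roughly with a churn reduction, is an illustrative assumption, not a figure from the episode.]

```python
# Hypothetical headroom estimate for a proposed feature.
# All inputs below are illustrative assumptions.

monthly_signups = 10_000   # new users per month (assumed)
activation_rate = 0.40     # share who reach the screen where the feature lives
see_feature = 0.80         # share of those who actually see the feature
click_rate = 0.25          # share of viewers who try it
adopt_rate = 0.30          # share of triers who keep using it

ltv = 100.0                # current LTV in dollars (Dan's example figure)
churn_reduction = 0.10     # assumed relative churn drop for adopters
# Since LTV is roughly ARPU / churn, a 10% relative churn drop lifts
# an adopter's LTV by roughly 10%. A simplification, but fine for headroom.
ltv_lift_per_adopter = ltv * churn_reduction

adopters = monthly_signups * activation_rate * see_feature * click_rate * adopt_rate
monthly_upside = adopters * ltv_lift_per_adopter

sprint_cost = 50_000.0     # two-week sprint, per Dan's $40-50K range
print(f"Adopters per month:  {adopters:,.0f}")         # 240
print(f"Upside per month:    ${monthly_upside:,.0f}")  # $2,400
print(f"Months of upside to pay back one sprint: {sprint_cost / monthly_upside:.1f}")
```

[Run once with your own numbers and Dan's point shows itself: the math won't prove you're right, but it will tell you quickly when the assumptions can't possibly pay for the sprint.]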

David Barnard:

That's a great way to think about it too because then you're actually forcing yourself to look at the numbers, and we'll get into this later with some of your churn-related math. But if you're only bringing in a hundred people a day or 500 people a day, you do have to understand that there's a ceiling on how much adoption any one feature can get. And then part of that equation I would imagine is also will this lead to greater conversion? Do we have a hypothesis that this feature will actually cause more people to convert, and will this feature cause more people to retain? How do you think about all those factors as well?

Dan Layfield:

I guess I think mostly about where you want to initially focus. So I think a mistake I see in a lot of companies, and one I definitely made a ton of times earlier in my career, whether you're in a company or a founder working on stuff, is that you feel really productive kicking a lot of stuff off in parallel, because your project tracker gets a ton of lines that show a bunch of stuff in progress and the team feels really busy. But the easiest thing to lose sight of when you're working on a product is projects in progress don't do anything for your business. They only start to do something for your business when they go out the door.

So if you have big projects, kind of like chunking them up into milestones so you ship things incrementally and start to slowly test assumptions, goes a long way. I've never been at a company where the first milestone going out is something people feel comfortable with. It's almost always like hackier than you want it to be or it's not the full experience. And I think where that can go wrong is, going back to my point on focus, I think you should pick points in your life cycle or monetization product to try to deliberately increase the effectiveness of, and then you need to stay focused there long enough to take a couple shots to see if you can actually improve it.

Across most of my career, the biggest wins we found were always the third or fourth swing at something, because if you ship something and realize it doesn't work, you shouldn't move on immediately unless you were completely off base. Ideally you have enough tracking set up that you can see signs of life somewhere, or you're running an A/B test and the variant didn't get totally crushed by the control, so you realize you're kind of onto something. Because when you move on too quickly, you waste the ramp-up: it takes your engineers a little bit of time to learn that area of the code base, so there's a certain knowledge debt they pay down to figure out how to work there.

Same with the designers, same with the founders, et cetera.

David Barnard:

How do you think about some of these big swings impacting top of funnel? For example, with my side project weather app, the big thing we've been stuck on is a feature where my hypothesis is that, one, I'll be able to get some press around it because it's very innovative. Two, it's the kind of feature that I think will do really well in marketing, so I think this is a thing we can finally spin up some TikTok videos around, advertise it on Meta. It's the kind of feature that I think will get people's attention. Now did I do anything to validate those assumptions? No. And this is where it's easy to get stuck, but how do you think about top of funnel in relation to these big swings?

Dan Layfield:

I think you can take big swings anywhere. I think the thing at least I've always tried to optimize for is cost of learning something versus payoff. If you think something could go viral or get picked up or be splashy, it's like how hacky are you comfortable building something that can validate that without fully releasing it? Of course, lots of different types of entrepreneurship, like can you pre-sell access to something is a classic way of validating an early stage assumption or could you just use mock-ups and test that on TikTok and not get stuck behind the whole development cost.

Everybody has a little bit of a different comfort level in terms of how growth hacky you want to be to validate something. If you go too far, the product feels spammy, the product feels wildly disconnected from the actual experience. Everybody's seen mobile gaming ads where the actual game looks nothing like the advertisements, so I think there's definitely a balance to strike there.

David Barnard:

The funniest thing in mobile gaming though is that once people saw all those fake ads getting so much attention, they went and started building the games that were actually the experience that the ads portrayed. And from what I understand, some of those have done relatively well, but I think it's a great concept. I mean the tricky thing there is how much do you tip your hand to your future product? I mean, have you ever done those kinds of hacky validations and how do you get comfortable around not getting ripped off?

I mean I think that's a big concern for a lot of folks, myself included, that if I show this publicly, people will just rip it off and beat me to the punch. I think that's relatively illogical, but maybe there is some truth to it. How do you balance those kinds of illogical fears against the chance that there's some truth to them?

Dan Layfield:

Across the companies that I've worked for or consulted with, the ones that do the best are the ones that are the fastest and know the direction they want to move in. To me, that is the most important thing. Companies that I've seen struggle more spread themselves too thin too early or take too long trying to build something and validate the concept. When I was at Codecademy, we were the leader in that little learn-to-code industry, which was big at the time. We saw companies copy stuff that we were doing that I knew didn't work, and we just never cleaned it up because we moved on to something else.

But there's a ton of other learn to code tools that duplicated features we built that we saw didn't work or we saw be net neutral or ineffective and they copied it assuming that we knew what we were doing, but in reality we were also testing something that we saw not work and then we just left it live because we got distracted and we moved on to other stuff.

David Barnard:

That's hilarious. So then the answer there is maybe you do that validation at the point at which you know you can deliver on it rapidly enough to not worry about that competition, and then that's really just a fundamental skill of a business, especially these days with AI-assisted development and everything else. The pace of development is just moving so quickly and the pace of being able to copy something is moving so quickly, so you just need to be ahead of that as a business or you're not going to stick around.

Dan Layfield:

Definitely. I think the point on what to copy is a nuanced one to me. I think you, the founder or the product team or whoever you are, should be opinionated on the types of features you should build, and you should not look to competitors unless you're woefully under the baseline on what you should build next. I think once you're building a feature, you should look at the best practices and reinvent as little of the wheel as possible.

So if you're building a document finder, look at Google Docs, look at SharePoint, look at Finder on Mac, look at all the little features those things have and those are probably the baseline. Whether you should have a document finder is kind of up to you, but it's like once you're committed to a feature set, just steal all the best practices from the people who are the best at it. Might not be your competitors, might be people who do that as the core business.

David Barnard:

It's a really good point too that you never know what is actually working and what's not working when you look at things from the outside. I see this in the app industry, of people tweeting, "Oh, this app is making 200K a month. Copy these five steps." And for some of those, I actually know the folks who built the app, and for four of the five points that this growth hacker made on Twitter, it's like, that's not why the business is being successful. Those four things are completely irrelevant and aren't what's fundamentally driving the business. It's this other thing that you just can't even see.

Even though you can look now across TikTok and Meta and other platforms to see which ads are being shown a ton, you don't always even know whether those ads that are shown a ton are actually profitable, and then even if they are actually profitable for that app, would it then actually be profitable for your app? I think people lean a little too much on copying the wrong things without really understanding the deeper context behind them.

Dan Layfield:

Exactly. I feel like that's why I kind of write my blog in the style that I do. I get the appeal of looking at other companies and there are tools now where you can observe their A/B tests. And it's like see, they ship one thing and then another thing and they went with the other thing. And it's like it's unclear from the outside if that's true, if they calculated that correctly, if they went with something that wasn't actually the winner. Also, it's really unclear if that would apply to you, the company. So to me it's more important to look at the theory behind why these things work and care less about the exact implementations.

David Barnard:

We've chased a few rabbits here, but I do want to circle back to the other couple of things you talked about in this kind of ROI equation. One of them is the small things and how do you think about paying back the technical debt, making small changes and the ROI of these smaller things along the way versus the big swings which we've talked about?

Dan Layfield:

I think about it like an investment portfolio where you need a certain percentage in each bucket. So if you're just starting out, you probably don't have a lot of technical debt. There's probably still a layer of technical cleanup that has to happen, like shuttering old features or updating libraries and scripts, et cetera. Maybe you only need 10% of your quarterly development time going to that, but you need some percent or the product starts to get stale and technically really hard to manage. If you're at a giant company like Microsoft, that could be 50% of the roadmap going to paying off technical debt items.

I think it depends a lot on the phase of life. In the smaller optimizations bucket, it's really easy for teams to get lost pursuing just the bigger things and forgetting about basic product quality, especially in certain areas like purchase flows and onboarding. There's certain parts of the application where a hundred percent of your users go through it, and they go through it at times when they're not really sure if they're staying with your product. To me, those things should be polished to a mirror finish. It's a thing I've talked about in a couple of places, but one of the highest ROI projects we ever did at Codecademy is we rewrote all of the checkout page error copy.

So if you plugged in a payment and it didn't work, making sure you got a message back that actually told you what to do and didn't just say, "Invalid payment method, please contact your bank." Tell them what's wrong, tell them what to fix; if it doesn't work, send them to PayPal. PayPal doesn't work, send them back to a card. It literally took two days of pulling the data, stack ranking the errors, rewriting the copy, and merging it back into the code base, and maybe that, let's say, very conservatively lifted checkout page conversion 1%. But a hundred percent of your new revenue goes through the checkout page, so you really lift the whole business's revenue 1%.
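[Editor's note: a quick worked version of that arithmetic, with invented traffic numbers. The point is that a relative lift on a page that 100% of new revenue flows through is a lift on all new revenue.]

```python
# Illustrative numbers only: what a 1% relative checkout lift is worth.
monthly_checkout_visitors = 50_000
baseline_conversion = 0.05     # 5% of visitors complete payment (assumed)
avg_first_payment = 20.0       # dollars (assumed)

baseline = monthly_checkout_visitors * baseline_conversion * avg_first_payment
lifted = monthly_checkout_visitors * baseline_conversion * 1.01 * avg_first_payment

print(f"Baseline new revenue/month: ${baseline:,.0f}")                 # $50,000
print(f"Incremental from 1% lift:   ${lifted - baseline:,.0f}/month")  # $500, recurring
# Two days of copywriting against a recurring $500/month is an easy ROI call.
```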

David Barnard:

So for a subscription app, that would be making sure your onboarding and paywall are just incredibly dialed in. How do you think about forming a hypothesis around which things need work? I would imagine tracking each page in the onboarding, getting good analytics around where people are dropping off, why they're dropping off, and kind of finding the rough edges. I mean this is again something I personally struggle with.

Something I see so many people struggling with is that you're trying to build this perfect product but you're perfecting areas that don't matter. So how do you find the places that actually matter so that this 25 to 50% of your bug fixes and technical debt and smaller projects are still impactful projects versus just polishing something that doesn't matter?

Dan Layfield:

I think it's always a mix of art and science, so I always want the highest level of analytics setup I can get to. It's tempting to just view funnels, especially early-stage funnels, by looking at the place with the biggest drop-off and starting to intuitively work there, but inherently some parts of the funnel will have a big drop-off. I prefer to work bottom up, so go from the highest intent people back up. It's really hard to convince people of stuff, so you should start with the people who are already convinced. In purchase flows, it's always the people who try to click payment and it doesn't work for some reason that's under your control.

So if you're selling through an app store, this matters less, but if you're selling on a web checkout page, this matters a lot more; go through it looking for bugs, confusion, and dead ends. I think a really good surveying technique is right after purchase. You get one shot at asking an open-ended question of, was there anything about the purchase experience that didn't work for you? Was it confusing? Or were there any questions you had about this product before purchase that you couldn't find an answer to? You'll never get a flood of information through that, but you get 2, 3, 4, 5 things a week that start to build your intuition around what could be wrong in the funnel.

I think another thing that helps a ton is just have people who've never seen it before go through it with fresh eyes. So these could be your friends, these can be usertesting.com, people like you, the person building this, have stared at this thing for so long. But frequently when I consult with clients, it's like I can see really obvious things because I've never seen these flows before, but the person who built it knows this is version three and this part looks weird because this was done between version two and version three and there was a really good reason for that at the time, but to a fresh customer, they don't know.

David Barnard:

If you're five people and everybody's kind of focused and working together, that looks really different, but if you're a 50 or a hundred person company... I saw a tweet I think just this morning saying if you're the CEO of a SaaS company, you should onboard with your product at least quarterly, because teams underneath you are changing things rapidly enough that you don't know what's going on. So maybe part of your practice as a larger company should be having the CEO or another team go through the onboarding and looking for that, in addition to people with zero context outside of the company looking at it on a regular basis.

Dan Layfield:

Exactly. I think a classic place to look for things to fix is the lines between teams. So if one team runs the ad and another team manages the landing pages, there's no guarantee there's any continuity between the ads and the landing pages. It's a classic place you find low-hanging fruit: if team A owns this step, team B owns this step, team C owns this step, no one's paying attention to the gaps in the user experience between those things, because each team gets focused on their own area, totally naturally.

David Barnard:

And in the app stores, it's really easy for those kinds of things to happen as well. If your screenshots haven't changed in the last six months and don't even mention the feature that you now know is one of your higher-converting features, that needs to be maybe screenshot one or two on your app store page. But it's easy to lose sight of that full funnel experience.

Or like you said, at bigger apps, maybe the ASO team is working so independently from the monetization team that they don't even realize that some new feature or change or A/B test that won is driving a ton of conversions, and that they could pull it up into the screenshots and other marketing copy and things like that.

Dan Layfield:

Exactly. And I think when you're a product person or designer or an engineer working on one feature set, whenever you do customer interviews you want to ask them about that feature set, but to the user it's all just one experience. Everything from your email marketing to your ads to your app store page, to the product itself, to the things they hear about, you're just one blob of opinion. So to them it's all one experience. Even though I'm really curious about, when I was at Uber, how did they like the ranking of the homepage, but they don't think like that. They think of, could I find food? Did it show up on time? And was it good? Even that journey is across thousands and thousands of people who work on all that stuff together.

David Barnard:

It's crazy. The next thing I want to talk about is the post you wrote most recently about this idea of having a growth ceiling: the fact that based on the number of people you have coming in and your churn rate, you can actually calculate how far you can grow. Tell me about this concept and then we'll go through how it works in practice.

Dan Layfield:

Definitely. There's a couple really good rules of thumb to know within subscription products. Probably the handiest is one divided by your monthly churn rate, as a decimal, is your average months of retention. So if you have 20% month-over-month churn, that's 0.2, and 1 divided by 0.2 is five, so your average user will be around for five months. You obviously see the power in that equation: reducing churn dramatically increases the length of retention. Another implication is that every time you cut churn in half, you double that number. So if you divide one by 0.1 you get 10. It's not easy to reduce churn that much, but reducing churn is super, super powerful.

An equation that we didn't even understand at Codecademy at the time is if you take the number of users that you're acquiring per month and divide that by your month-over-month churn rate, you get the ceiling of users that you can keep at that level. So if you have 500 users coming in per month and you're churning 10% of them per month, 500 divided by 0.1 is 5,000. So 5,000 is the number at which the inputs will equal the outputs. Unless you drop churn or pick up acquisition, that's your growth ceiling; you'll never be able to grow above 5,000 users.

This is something that I commonly see in subscription products. Before you hit that threshold, your user base numbers will just keep increasing and it's tempting to think that your product work is going really, really well, but what you should really keep an eye on is churn, the numbers that are upstream of churn, and your acquisition numbers. Because if your growth ceiling is 5,000 people and you're at 3,000 people, your subscriber numbers are just going to kind of drift up regardless of your actions, assuming churn and acquisition don't change. So it's easy to assume it's going really, really well, but you have to be very precise with what you're tracking.
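[Editor's note: a minimal sketch of both rules of thumb from this exchange, using Dan's example numbers. The short simulation shows the "drift" he warns about: the base climbs toward the ceiling even with no product changes at all.]

```python
# Dan's two rules of thumb, plus the drift toward the ceiling.
monthly_churn = 0.10          # 10% month-over-month churn
new_users_per_month = 500

avg_retention_months = 1 / monthly_churn               # 1 / 0.10 = 10 months
growth_ceiling = new_users_per_month / monthly_churn   # 500 / 0.10 = 5,000 users
print(f"Average retention: {avg_retention_months:.0f} months")
print(f"Growth ceiling:    {growth_ceiling:,.0f} users")

# With churn and acquisition held constant, the subscriber base just
# drifts up toward the ceiling regardless of what the team ships.
subscribers = 0.0
for month in range(1, 61):
    subscribers = subscribers * (1 - monthly_churn) + new_users_per_month
    if month in (6, 12, 24, 60):
        print(f"Month {month:2d}: {subscribers:,.0f} subscribers")
# Month 6: ~2,343 / Month 12: ~3,588 / Month 24: ~4,601 / Month 60: ~4,991
```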

David Barnard:

I see this all the time and we've talked about on the podcast before, there does seem to be a ceiling for a lot of subscription apps in the million to 10 million range, sometimes 10 to 20 million range where you kind of hit that and it's kind of cool that you're mathematically describing the ceiling, but it does seem to be that without some new growth lever, without some big improvement in churn, you just start churning out the same number of users you're bringing in and you just kind of hit the ceiling.

So you're growing, growing, growing, and it's looking good. You're drifting up, or maybe not even drifting up but growing really rapidly, but then you just kind of hit this ceiling. Equating it to churn and being able to calculate it I think is really powerful. We'll put a link to this blog post in the show notes, but how do you think about that for annual plans? Because it's not quite as easy to do the math on that.

Dan Layfield:

I mean the month-on-month churn rate is really a blended average of all of your churn across monthly and annual plans. The way I think about that, in most subscription businesses I guess, is you should look at this cohorted. So for every month of users that sign up, what is their month 0, 1, 2, 3, 4, 5, 6, et cetera, churn rate, and you should look at that across both of your plan types. I think the best way of getting ahead of churn is not basing retention off of payments; it's basing it off of some sort of core action in the product. So if you're Slack, for example, and someone signs up for a monthly plan.

They might send zero messages for three months and just forget you're charging them $19.99, but they're probably churning eventually. So if you really want to improve retention in a product, figure out what the core action is. In your weather app, it should be, I expect you to come in and check the weather three times a week or four times a week or one time a week or something like that. Improving core activation and the core habit will always improve churn. Once someone is in the mental space of churning, there are things you can do, but it's hard. There's a couple mitigating steps you can take, but it's not nearly as effective as fixing it upstream.
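[Editor's note: a minimal sketch of cohorted, core-action retention as Dan describes it, with made-up data. The "core action" here stands in for checking the weather or sending a Slack message; in practice these events would come from your analytics pipeline.]

```python
# Cohorted retention based on a core action, not payments. Data is invented.
from collections import Counter

# user -> months since signup in which they performed the core action
events = {
    "u1": [0, 1, 2, 3], "u2": [0],       "u3": [0, 1],
    "u4": [0, 1, 2, 3], "u5": [0, 2, 3], "u6": [0],
    "u7": [0, 1, 2],
}

cohort_size = len(events)
active_in_month = Counter(m for months in events.values() for m in set(months))

for month in range(4):
    share = active_in_month[month] / cohort_size
    print(f"Month {month}: {share:.0%} still doing the core action")
# A curve that flattens above zero (here ~43% by month 3) is the signal
# discussed later in the episode: some segment is retained, a degree of fit.
```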

David Barnard:

I want to dive a little bit more into churn mitigation, and you've written a lot about that. I think that's one of the focuses of your blog: churn and retention. But before we dive into that, I did want to ask how you factor in the kind of long tail. Because I've talked to enough apps in my role here, I've done office hours for years, and then I have my own apps, and the thing is, for most apps, retention doesn't go to zero. It goes to some percentage of that initial cohort. So if you're looking at, let's say year one, you acquire 10,000 subscribers and 60% of those churn, so you have 4,000 subscribers.

But then by year two you're not churning another 60% of subscribers, you're maybe only churning 30% of those subscribers. And so looking at my own app, I still have 10% of monthly subscribers, and I think maybe even a little higher of annual subscribers, from eight years ago. But 10%, 8%, that's not a really fat tail. A lot of the apps that I've talked to that are doing really well and building a fantastic business, that line flattens more at 20%, 25%, where you churn a higher rate the first year, a lower rate the second year, and by the third or fourth year you kind of have... Eric Crowley has this idea of tourists versus locals.

You're going to have people who come and check out your app and it's not a good fit or it's a good fit for a short amount of time, but then they move on and then you have the locals being the people who your solution becomes part of their daily, monthly, weekly life and they're just going to stick around for years, maybe decades. So how do you factor in that long-term retention in this equation of what your growth ceiling is?

Dan Layfield:

I take the month-over-month average, factoring in all the plans, to calculate your blended month-over-month churn rate. Again, month-over-month churn rate is not as helpful as knowing the cohorted and plan-based churn levels. But I think if you have a retention curve that flattens at some point, it's really good. You always want it to flatten higher, but if it never flattens, if those curves go to zero, it's really, really hard to grow your user base and you're going to have a higher monthly churn.

So you can't stack a big user base if a hundred percent of your people are gone after month seven, because you'd have to acquire so many people to offset that; you just have a bucket that takes a little while, but a hundred percent of the things you pour into it leaks out. I mean, all subscription products are leaky buckets, but you want that leak as small as possible. When I think about why that happens, to me the least discussed part of churn management is the underlying use case you solve for. So your retention length will really be dictated by how long the user has the problem you're solving for and how much they need your solution to do it.

So if you think of cell phone plans, I don't know how long I've been on Google Fi, but probably eight years, nine years. If they don't screw anything up, I'll probably be with them 25 years, because I have a daily need for their product, the workarounds and backups to them aren't compelling to me, and I realistically need a cell phone. So my guess is cell phone plans year over year retain 80%, 90% of people, maybe minus some switching; maybe I'm off, but you'd have really, really high retention. Contrast that with meditation apps, where meditation is a thing you don't really need an app for.

You'll probably either learn it and like it or realize it's not for you relatively quickly. So the only way you can scale Headspace or Calm is you need millions of people or you need a plan structure that locks them in.

David Barnard:

I think we're seeing that a lot with AI apps right now, with that Studio Ghibli thing, converting your avatar and pictures and stuff. OpenAI just blew up with that, which I don't think they even expected, but Sam Altman's been tweeting about their servers melting and adding a million users in an hour. That may be the fastest product growth ever, though they were starting from a massive base. I mean the ChatGPT app is already massive and they already have a massive subscriber base, and then to be adding a million users in an hour was just insane.

So they had this viral moment, but to your point, if that's the primary use case, people are going to churn out because they're not going to be converting their photos on a weekly, daily basis and you need other use cases for that to be a sustaining subscription for OpenAI. I mean you would hope that people get in and start seeing the utility and using it for a multitude of things, but I think there's a lot of AI apps that blow up and then there's not that kind of sustaining use case.

Dan Layfield:

I think you hit the nail on the head: if you don't have a sustaining use case, it might not be a good fit for a subscription product, or at least not a good fit for most people. So you can see AI headshot generation being great for PR people or photographers or heads of HR at a company who want everyone in their company to have the same headshot format or whatever. So there might be a subset of personas those things work for, but if you don't tackle a long-term recurring use case and provide value at a cadence that makes someone willing to keep paying for it, it's really, really tough to build a subscription product.

I think if you were to bucket the types of subscriptions that people pay for, you have the really long-term stuff like mortgages, financing products, loans, rent, cell phones, healthcare, where those problems never go away effectively and you can build giant monopolies in those spaces. You then have the short to medium term use cases of dating, fitness, dieting, language learning, gaming, where those are things people stick with for a couple of months. A subset of them will stick with them for a long time, but really that's because they like your product a lot and they don't switch, but the underlying need fades away.

David Barnard:

Although in gaming, it has been interesting to see Fortnite and some of these other companies, the way they've created this sustaining use case is that entertainment is a long-term use case, but people get bored and then go try other MMOs or other ways to get that entertainment in the way some of these big game companies have been solving for that is with season passes and things like that where the product itself is changing enough to bring people back in on a regular basis with these new characters, new game modes, new collaborations with big IP and things like that.

So even if your use case is short term, you can expand it. A perfect example with ChatGPT is a bunch of people came in to convert their photo because it looked really cute, but then there are a ton of other use cases for ChatGPT. And so then as they're able to expose users to those other use cases and get them deeper into the product, there are those sustaining use cases that will be used on a more regular basis. And those are the kind of things that, even if you start out of the gate with an explosive viral success, this is what you should be looking for: that next season of Fortnite thing where you keep building the use case or keeping it fresh enough to keep people coming back.

Dan Layfield:

Definitely. I think the recipe that works in the shorter term use cases is you need massive audience size. A surefire recipe to die as a subscription company is short life cycle, small audience, where you just won't be able to acquire enough people and this thing will never grow. I think kind of what you hit on with the Fortnite season pass is what I would call a companion subscription product. So Uber has one of these, DoorDash has one of these, Lyft has one of these, where you're in a long-term use case, but there's a high ability to switch between providers. So they end up building these past products really as a retention tool.

So if I'm paying Uber Eats, I forget what their membership product is, but we'll call it $9.99 a month in the US, I'm way less likely to use DoorDash and because Uber makes its money through the core business, you have a lot of flexibility in terms of how you price the subscription product, but really what it does is stop you from switching. I imagine Fortnite thinks of it the same.

David Barnard:

Totally. In that blog post on churn, you go through five kind of key areas of what drives churn, and so you already covered that first one of how long will users have the problem you solve? But the next one is, how strong is your product market fit?

Dan Layfield:

It's a great question. I think like you said, the first thing that will dictate overall retention is how long people want to be solving the problem that you ultimately solve. The second one is how good you are at solving that problem, which I would call product market fit. To me, there's two ways of measuring product market fit. There's the classic Sean Ellis Hacking Growth survey, where you survey a bunch of users and ask what percentage couldn't live without the product, and you want 40% or higher to say they couldn't live without it.

I think that's really good for earlier stage companies when it's tough to know your retention numbers and you can survey people relatively quickly, get data back relatively quickly. I think the better way of measuring retention is trying to chart out the percentage of a cohort that keeps doing the same action month over month. So for your app it'd be like what percent come back and check the weather X times per week? And if you graph that like you're saying, you probably see it flatten, which means for whatever segment of that user base that is, you have product market fit.

So the other thing that I keep in mind is there's degrees of product market fit, so there's kind of this works for me and I like it, but if given a better alternative, I'll switch or this thing is the best thing I've ever used for this. And I think the way of looking at that is if as best as you can, segment by persona or some concept of persona. You should ideally see good retention for one of them, which means you're kind of onto something.

David Barnard:

And using my weather app as an example, it kind of hits your earlier point about use case duration. Even though it's a ridiculously competitive space, even though it's really hard, one of the reasons I keep at it is that weather is one of those indefinite continual use cases where people will check the weather for probably the rest of their life. And then for me thinking through this as you were saying that, the fact that my long-term retention is in those high single digits, maybe low double digits of eight to 10 to 12%, that's probably a good sign for me of product market fit that I have some level of product market fit but not great product market fit.

If I had great product market fit, because the use case is indefinite, I should be landing closer to 20%, maybe even higher than 20%, in that kind of long-term retention. And that's maybe a thing for me to watch over time, and for people listening, go look at your long-term cohort retention. Actually, the RevenueCat dashboard is where I always look at this, of course; we have that cohorted retention view where you can see, across all time, what that retention curve looks like.

And then you can cohort it and look at different years, different date ranges, and ask: is that improving over time or getting worse? Did a new feature change it in any meaningful way? But I hadn't thought of that until you said it: that long-term cohort is a very direct sign of the level of product market fit that you have.

Dan Layfield:

If you're at an earlier stage company, you might be at a long-term use case and it might take 14 months to see those curves flatten, and if you're only at month six of building an app, that's tough. But I think if you look at each cohort coming in, you should see the early stage months stack up to be slightly higher. That will depend on where you get users from. It's a very common thing that companies will go viral and you'll see a flood of people come in, but they're not really your persona and that cohort retains terribly.

So even though it's good for the metrics overall, the question is always, of the people you're either paying to acquire, building SEO content for, using word of mouth funnels to acquire, are they the right type of person? You always care about the long-term LTV of the monthly cohorts that you retain.

David Barnard:

This goes back to our earlier discussion as well about not knowing what is and isn't successful for other companies. So when you see an app go insanely viral and you're super jealous because you're still just running Meta ads to get people into the app or whatever, and you're like, "Oh, if only my app could go viral." Half the time, probably more than half the time, it really isn't a great way to acquire users unless you have those other things stacked up, and we're going to keep going through them. But you don't get to see those numbers.

You only get to see the TikTok that does 10 million views. You don't get to see how many of those actually convert into paying users. How many of those paying users stay subscribed and was that actually a good cohort? And the answer is often it isn't. So another thing where you can look at the numbers but completely misunderstand what's actually going on behind the scenes.

Dan Layfield:

You care about stacking up a user base that will pay you for a long time and pay you well. As best you can, you should be guiding product development and growth off the earliest stage indicators of that: onboarding success, activation success, certain personas, segmenting audiences, and then feeding that back into whatever acquisition machine you have. So if you acquire a thousand users a month, but 15% of them are, let's say, coffee baristas, and coffee baristas want to know the weather for XYZ reason, they're the best fit.

There's nothing wrong with other types of people coming towards the application, but if you're going to spend money and time, you should focus on where you're going to hit the longest term payoff.

David Barnard:

Totally, and that actually brings us to the next point in your blog post is that how well you activate users is a huge part of retention.

Dan Layfield:

So I'd say after you have a level of product market fit, the big question in activation is how many people experience the value of the product early on. So there's the classic milestones within product development to track [inaudible 00:38:45]: sign up, how many people registered; setup, how many people take the steps necessary to receive value from the product. That could be setting up your profile or turning on location services so you can see the weather, or something like that. Then there's the first aha moment, what percentage of people see value.

"So I was going to go on a hike and your app told me it was going to rain and I didn't go on a hike," or "I thought it was going to be cloudy today and you notified me of weather change." And that's a great early happy experience for users and then first 30 days activation and then long-term retention. As best you can, especially the early stages, you want to make sure people are experiencing the win on the application as fast as possible. I think if you look at any classic benchmark reports, especially around trials, people make trial whether I will or will not start a trial experience almost in session zero, probably within the first 10 minutes on the application may be quicker.

So being really clear about what you do and guiding people to do that in the first session is really, really helpful. There's things you can do around life cycle marketing if they don't activate in the first session; there's ways of guiding them back. But going back to headroom analysis, really good email campaigns have a 30% open rate and a net 2% click-through rate. So once you lose them on the product, it's tough.

David Barnard:

The next one is payment processing and for subscription apps more heavily reliant on the app stores, maybe that's less of an issue, but that does kind of encapsulate that entire conversion event of how many people are you actually getting to convert? How do you think about that?

Dan Layfield:

Definitely less of a problem for app store-based apps. So even with the 30% fee that Apple and Google put on applications, you do get a lot out of the box for that. You get a high converting checkout page, it handles currencies for you, it's really good at payment processing, it's really good at payment retries. You do get a lot of stuff. If you build off the web, this is more your problem. So the number one thing, or really the two things, I'm always looking at in payment processing is: after someone clicks pay on your checkout page, what percentage of the time do you collect the money?

Almost no one tracks this first-attempt collection rate, but these are the highest intent people you can possibly get. A certain amount of those failures will be valid, their card won't have a balance, but there's a surprising amount that, if you look into the details, you can find ways of optimizing for. A classic one is US-based companies who charge around the world and still only use Stripe or US-based gateways. So there's regions like India where a US-based payment system is not as effective, same as China. The bigger you are, the more headroom there is in setting up secondary payment providers or getting a subscription manager.

For app-based stuff and for smaller companies this doesn't apply as much at all, but I think the philosophy is: it takes so much effort to get someone to click pay, so lose as few of those people as possible. The second one is recurring payment success rate. The classic things to do here are dunning and email notifications when payments fail. Again, the app store handles most of this stuff for you, though in the app store you still might be able to detect this and customize the emails to get people to update their payment method. I've always seen...

Again, it's not a massive uplift, but there's value in spending two days copywriting those emails, because this is going to impact a hundred percent of your people who fail payments, and the lower your churn rate gets, the more likely it is that most of your churn is payment-based churn. So if you get your churn rate all the way down to 2%, 50% of it might be payment-based churn.
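[Editor's note: a small sketch of the two payment metrics Dan names, first-attempt collection and recurring success, with invented counts. The closing arithmetic mirrors his 2% churn example.]

```python
# Illustrative payment-processing metrics; all counts are made up.
first_attempts = 2_000        # users who clicked "pay" this month
first_collected = 1_820       # payments actually captured at checkout
renewals_due = 10_000         # subscriptions up for renewal
renewals_collected = 9_900    # renewals successfully charged

print(f"First-attempt collection rate: {first_collected / first_attempts:.1%}")   # 91.0%
print(f"Recurring payment success:     {renewals_collected / renewals_due:.1%}")  # 99.0%

# Dan's point about low-churn products: if total churn is 2% a month and
# 1% of renewals silently fail, involuntary churn is half of all churn.
total_churn = 0.02
payment_churn = (renewals_due - renewals_collected) / renewals_due  # 1%
print(f"Payment-based share of churn: {payment_churn / total_churn:.0%}")  # 50%
```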

David Barnard:

And a lot of apps are getting really good at what happens when people click your CTA. This is always kind of a tricky thing with the app stores, and honestly any CTA, is that you can get more people to click a button that's more obtuse, where it just says start free trial and the actual price is hidden somewhere in the fine print. You can juice the number of people who are going to click your CTA, but then when the app store sheet pops up and tells you the actual terms and the actual price, how many people are clicking your CTA and then dismissing the payment sheet and not completing it?

And that's probably a multifactorial thing: the less clear you made the pricing on your paywall, probably the higher the rate of people who are going to go ahead and cancel the actual payment because they didn't know what they were signing up for. And then it's always a balance of how many people are clicking on it versus how many people are actually starting the free trial. I've actually been experimenting in my app, and I haven't seen many people do this, but when people click the CTA and then cancel the payment, I actually pop a little modal now that gives them a non-recurring free trial, a reverse free trial.

So I offer a reverse free trial to customers where they get seven days completely free that doesn't renew. And I haven't seen a ton of success with that, so I'm not going to say this is a magic bullet, but I think those kinds of experiments are what you're talking about is that when somebody clicks your CTA, that is high signal. Again, maybe lower signal if your CTA is confusing or not clear on the price and things like that, but it's something you want to be experimenting with and that's a really great place to experiment. A lot of apps now when you dismiss the payment sheet, they'll offer a discount immediately and kind of try and win you back that way immediately.

And that can be incredibly effective. I mean a lot of apps are doing that now, although I have heard Apple has been hassling some developers about that, that it's kind of a form of price discrimination, which is understandable if you're just randomly giving people 50% off because they dismiss a payment sheet. That's maybe not the best customer experience, but it can be incredibly effective. And so it kind of goes to what you're saying is that that payment section, whether on the web or in the app is a place to spend a lot of time and be thinking a lot about. The last one on your list is how good are you at winning people back once they leave.

So you kind of already touched on it that this is actually really hard and maybe that's why it's at the bottom of your list, but what are some of the strategies you've seen be successful here and then realistically what do you think people can expect? And then kind of back to the whole ROI discussion, maybe the ROI on this is so low that especially for an earlier stage app when the numbers are just low, it's just not a place to invest a lot into.

Dan Layfield:

So I think if you look at really big subscription companies, the New York Times is one, CLEAR, the US travel app, is another one. In their cancellation flows, they're always taking every shot they can to try to win you back, regardless of what you click on the survey. So if you click, it's too expensive, they'll give you a temporary two-month discount. Zoom does this too. If you click on, "I had a negative experience," they'll connect you with customer support. If you click on, "I'm too busy right now," they'll let you pause the subscription. I'd say this tactic set is not the cheapest to implement because typically it involves bespoke logic in your payment processor.

So if you're web-based and you use Stripe, I don't think Stripe has any out-of-the-box features that help you here. So you have to custom manage all this logic, but there's a reason all the giant companies do it. I've implemented this a couple of times across a couple companies. I'd say the big three that I've seen work are pause, discount, and connect with support. Pause works best if it's a temporary habit. We implemented this at Codecademy; people don't want to learn to code every day of the week indefinitely. They go through periods of learning.

So if you let them pause and let them come back when they're less busy, they're more likely to retain in the long-term. If my mortgage offered pause, I don't think anyone would use it because either you need your mortgage or you don't. The underlying need doesn't really go away. Same with discounts. I'd say the two discount strategies that I see work is temporary discount. So we'll keep you in the same plan structure for three months at 50% off and then we'll auto readjust you back or we'll drop you down to a lower hidden tier of the product.

At least when I went to cancel Zoom, I think I was paying 15 bucks a month, and they offered me a $7 a month plan, but it's only five calls and the calls can't be longer than 40 minutes or something like that. So instead of losing you completely, they let you shift down. I think the risk here is that you should keep an eye on the percentage of your users that go for those discounts. Like most of what I would call growth tactics, they get widely known. So years and years ago, it became known on the internet that if you added a bunch of stuff to a checkout cart and then didn't check out, most of those companies would just email you a coupon code to bring you back.

And that works until most people know to do that, and then everybody just collects the discount. I think this tactic set is going to go in the same direction. That said, when I've implemented this, I've seen between a 10 and 20% drop in churn. It depends a little bit on the use case, but it adds up to be material. There's a reason all the giant companies do this. It's not the world's cheapest tactic set to implement, but it definitely rounds the sharp edges off your churn numbers.

David Barnard:

The complexity piece is a tough one. I mean, we don't talk about RevenueCat a ton on the podcast, but it's something we are trying to solve right now, specifically with our Customer Center product, where you can drop a button into your settings or wherever you want to put it, a manage subscription button or whatever, and then when people go to cancel, you can initiate a survey. And so by building out this tooling, we are trying to remove some of that complexity.

That's maybe a pointer: look for the easy wins in those flows, and look for tooling and other things that can help you get those easy wins, versus spinning up a whole project where you're spending months on this. As we discussed earlier in the podcast, there's so much other low-hanging fruit that you should be focusing on that this is maybe not where you should do a three-month massive project to build out a really big custom internal tool, until you're further along and the number of people you win back is enough of a meaningful impact to justify those three months of building.

Dan Layfield:

Definitely. I think it's another lesson from 10-plus years in product now: the complexity tax is real. The north star of your development should be protecting future velocity. You can do things now that make things incrementally more effective, but you have to manage this code base forever. So a classic tactic that every subscription company eventually does is geo-based pricing. You realize the Nordic countries have the highest willingness to pay, along with Western Europe and the US and Canada, and there's a second tier of countries that can pay less, and then a third tier of countries that will pay the least, and you start to localize price level in addition to currency and what appears on the page.

That tactic almost always works in making your total user base produce more cash, but you then have to manage ten price packages forever. So every time you list pricing in your ads or in email or in your customer support docs, you now have to manage this complexity forever. So it's effective, but in my opinion you should squeeze the easier tactics out first. We did it at Codecademy. It's effective, but you can't unring that bell, and now when you do price changes, you have ten price tiers to think about, and currencies fluctuate against each other.
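[Editor's note: a minimal sketch of what a geo-based pricing table looks like in code, to make the "manage it forever" cost concrete. Tier assignments and prices are invented, not Codecademy's.]

```python
# Hypothetical geo-based pricing table. Every surface that shows a price
# (ads, emails, support docs, paywalls) now has to resolve through this.
PRICE_TIERS = {
    "tier_1": {"monthly_usd": 19.99, "countries": {"US", "CA", "GB", "DE", "NO", "SE", "DK"}},
    "tier_2": {"monthly_usd": 12.99, "countries": {"PL", "PT", "MX", "BR"}},
    "tier_3": {"monthly_usd": 6.99,  "countries": {"IN", "ID", "PH"}},
}

def price_for(country_code: str) -> float:
    """Localized monthly price; unlisted countries fall back to tier 1."""
    for tier in PRICE_TIERS.values():
        if country_code in tier["countries"]:
            return tier["monthly_usd"]
    return PRICE_TIERS["tier_1"]["monthly_usd"]

print(price_for("IN"))  # 6.99
print(price_for("FR"))  # 19.99 (fallback)
# The FX problem flagged above: these tiers peg exchange rates at launch,
# so someone has to own re-reviewing them as currencies move.
```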

So you're probably pegging the FX rates initially, but then how do you manage the change? It's just another thing you have to pay attention to.

David Barnard:

There's so many things. As an indie, I mean, I've been very fortunate just working on the app stores, where so much of this complexity is abstracted away, but I haven't done regional pricing yet. And part of it is exactly what you're saying; this is where it's tough to manage that complexity over time. Even price testing on the app stores, you end up with a bunch of different SKUs and you can end up with different subscription groups, and it can just become a huge mess.

And you got to factor that into the ROI calculations of how much complexity does this add to the business long term. Pricing tiers, we've talked about a lot on the podcast. It's great to introduce a new pricing tier, but that's a lot of complexity that now you have to support forever, the multiple tiers, what features get unlocked, what features don't, but it's worth doing. It's just figuring out the right time to do it in the stage of your business and everything.

Dan Layfield:

Exactly. I think Codecademy introduced a tier above our normal tier after I'd left. I think we were already north of 50 million a year before we added a second tier. You can build as many pricing tiers as you can make great products, but the distance between good and great is way longer than people think. Your monetization system as a whole is really just a measure of how much of the value your product produces you capture in money. So typically the best thing is you make your product as strong as possible, then you figure out how much you want to monetize.

But I think it's really easy when you start making money to make the monetization system the goal in itself and lose track of the... You just collect a percentage of the value people feel in your product, and there's definitely tactics and ways to make that more effective, but if the product isn't increasing in value, it's really tough to raise your prices.

David Barnard:

Well, I think that's a great place to wrap up. It all does ultimately come back to product. You have to build something that's actually delivering value for folks, and then especially as a subscription app, you have to create products that deliver value over time, or people just aren't going to stick around. We will share links to your blog and to the two specific blog posts that we referenced today. But anything else you wanted to share as we wrap up?

Dan Layfield:

I think that's the main thing. In summary, we learned a ton of hard lessons. It took us five-plus years at Codecademy to figure out all these things that we just talked about in an hour. What I'm trying to do is write as many of them down as I can on my blog at subscriptionindex.com. I'm in a place where I'm not really scaling the consulting business because I'm kind of at capacity, so I'm literally writing down everything that I see to be effective on the blog. Hopefully people going through this journey now don't have to take all the bumps and bruises that we did figuring out how to scale revenue.

David Barnard:

Awesome. Well thanks for sharing some of those learnings today, and it was fun to get the more nuanced take. I mean, I read those blog posts in the past and reread them leading up to this recording. It's great to get the kind of behind the scenes. I read those in 10 minutes, so maybe 15, 20 minutes total reading for the two blog posts, maybe less actually, but we talked about it for over an hour, so there's just so much to it. So thank you for coming on the podcast and sharing the more in-depth story behind the blog posts.

Dan Layfield:

Of course. Happy to.

David Barnard:

Thanks so much for listening. If you have a minute, please leave a review in your favorite podcast player. You can also stop by chat.subclub.com to join our private community.