On the podcast: how to effectively use benchmarks to aid decision-making, the limitations of benchmarks, and why even the best companies aren’t top quartile in every single metric.
Top Takeaways:
📏 Benchmarks are a starting point, not a roadmap
Treat benchmarks as directional indicators to uncover growth opportunities and prioritize actions, but don’t chase them blindly. They work best as tools for identifying areas to explore rather than metrics to perfect.
🏆 Focus on strengths over chasing perfection
It’s unrealistic to aim for excellence in every area. The most successful companies lean into their strengths, improve key weaknesses, and focus resources where they will make the biggest impact.
⚔️ Beware of misleading benchmarks
Not all benchmarks are helpful. Poorly sourced, overly generic, or irrelevant data can lead to wasted effort or misguided decisions. Use benchmarks that are specific to your category, geography, or growth stage.
🔍 Metrics only matter with context
Numbers on their own don’t tell the full story. A high churn rate might be fine if you acquire users cost-effectively and retain high-value customers. Metrics need to be interpreted with a deep understanding of your product and target audience.
💡 Data is powerful, but intuition seals the deal
Data highlights where to focus, but the most effective decisions come from pairing metrics with experience, intuition, and a clear understanding of your customers. This balance of analysis and instinct drives smarter, more impactful strategies.
About Phil Carter
👨‍💻 Growth Advisor at Elemental Growth, a consultancy dedicated to scaling consumer subscription companies through actionable benchmarks and strategic insights
👥 Phil Carter is committed to empowering consumer subscription companies to achieve sustainable growth by leveraging benchmarks, refining growth strategies, and identifying key opportunities for value creation, delivery, and capture.
💡 “Where people get in trouble with benchmarks is they try to make them the end-all, be-all. They try to do more with them than they really should be.”
👋 LinkedIn
Resources:
Follow us on X:
David Barnard - @drbarnard
Jacob Eiting - @jeiting
RevenueCat - @RevenueCat
SubClub - @SubClubHQ
Episode Highlights:
[1:37] The Subscription Value Loop: Phil introduces his framework for driving sustainable growth through value creation, delivery, and capture, and how it applies to subscription businesses.
[5:52] Benchmarks as tools: Phil explains how benchmarks are a directional tool to guide decision-making and identify growth opportunities rather than an end-all, be-all.
[13:07] Judging good ideas: The team discusses how great execution relies on judgment and filtering good ideas to focus on what moves the business forward.
[20:53] Using the Subscription Value Loop: Phil shares how the framework acts as a diagnostic tool for spotting bottlenecks in client businesses and setting growth priorities.
[24:47] The impact of pricing and free value: Phil describes a fitness app’s challenge with over-delivering value for free, resulting in low subscription conversion rates and pricing adjustments.
[30:26] The power of subscription retention insights: Phil explains how understanding differences in retention between annual and monthly subscribers can shape pricing and product strategy.
[36:32] Interpreting benchmarks through context: The hosts discuss how benchmarks differ based on the business model, user acquisition strategy, and market dynamics.
[42:46] Paid vs. organic growth strategies: Phil underscores the risks of being overly dependent on paid ads and the value of diversifying acquisition through organic channels.
[47:18] Value capture and monetization: Phil explores strategies for optimizing conversion rates, pricing, and paywalls to increase revenue capture from free users.
[55:45] What’s next for the Subscription Value Loop Calculator: Phil shares plans for enhancing the tool with better data, new filters, and expanded benchmarks in future versions.
David Barnard:
Welcome to the Sub Club Podcast, a show dedicated to the best practices for building and growing app businesses. We sit down with the entrepreneurs, investors, and builders behind the most successful apps in the world to learn from their successes and failures. Sub Club is brought to you by RevenueCat.
Thousands of the world's best apps trust RevenueCat to power in-app purchases, manage customers, and grow revenue across iOS, Android, and the web. You can learn more at RevenueCat.com. Let's get into the show. Hello, I'm your host, David Barnard, and with me today, RevenueCat CEO, Jacob Eiting.
Our guest today is Phil Carter, an independent growth advisor and angel investor focused on helping consumer subscription companies scale. Phil spent the last decade as a VC and product leader at companies like Faire, Quizlet, and Ibotta.
On the podcast, we talk with Phil about how to effectively use benchmarks to aid decision-making, the limitations of benchmarks, and why even the best companies aren't top quartile in every single metric. Hey, Phil, thanks so much for joining us on the podcast today.
Phil Carter:
Yeah, thanks for having me. Always a pleasure.
Jacob Eiting:
Phil's back. Oh, wait. Sorry, I interrupted, David. Do my intro.
David Barnard:
Jacob, always nice to chat with you as well.
Jacob Eiting:
I still can't follow directions, but I'm here. So I'm excited, let's go.
David Barnard:
So Phil, we are going to talk today about your Subscription Value Loop Calculator. And the reason I wanted to do this episode and wanted to get this episode out now, is that me and my colleagues are hard at work on the 2025 State of Subscription Apps Report, and we're going to share a ton of benchmarks.
And I think people always are a little, I wouldn't say confused, but we get a lot of questions like, "What does this actually mean? What do we do?" And I think what we're going to talk about today, is going to set the context for listeners of the podcast, anyway.
I hope everybody, whoever downloads the report goes back and listens to this, but it's going to set the context. What do you do with benchmarks? How do you actually make good decisions from all of this data? So I love the calculator you built and excited to talk about it today. So let's just kick off, what is the Subscription Value Loop Calculator?
Phil Carter:
Sure, yeah. Well, first of all, benchmarks, very polarizing topic, super controversial.
Jacob Eiting:
It depends if they're good or bad.
Phil Carter:
It's true, very true. Well, that's why they're controversial, because a lot of them are bad. Yeah. So maybe to set some context, so I think the last time I was on this podcast a year ago, we talked about the Subscription Value Loop.
Which is basically this framework I've developed, that posits that the best consumer subscription businesses are able to generate sustainable, compounding, long-term growth through three steps: value creation, value delivery, and value capture.
But the very next question that I get from a lot of consumer subscription leaders is, "Okay, this is a helpful framework in theory, but how do I actually apply it? And how can I measure the performance of my Subscription Value Loop, and understand how I'm performing relative to my peer set or my competitor group? And where I have the biggest opportunities to improve and grow faster?"
So the idea behind the Subscription Value Loop Calculator was, "Let's compile a set of the most important growth benchmarks within each of those three steps, value creation, value delivery, and value capture. Ideally, let's enable leaders to filter it by important variables like category and performance tier, eventually geography, so that you're making apples-to-apples comparisons."
And then let's use that information to help those growth leaders more efficiently allocate resources against the growth opportunities where they have the most upside. So that's the idea behind the tool. I will say right up front, this is version 1.0, and you all know this because I partnered with RevenueCat to build it.
But I think it's already better than pretty much everything else out there that I've seen, in terms of getting all these metrics in one place and allowing you to get a directionally accurate view of your performance.
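To make the framework concrete before diving in, here's a minimal sketch, in Python, of how the loop's metrics group into the three steps. The grouping and metric names are illustrative, pulled from the metrics discussed in this episode, not the calculator's actual schema.

```python
# Illustrative grouping of Subscription Value Loop metrics by step.
# Metric names are examples from the episode, not the calculator's schema.
VALUE_LOOP = {
    "value_creation": [            # is the product resonating?
        "signup_rate",
        "monthly_subscriber_retention",   # M1/M2/M3/M6/M12
        "annual_subscriber_retention",    # Y1/Y2
    ],
    "value_delivery": [            # are you acquiring users efficiently?
        "cost_per_install",
        "cost_per_trial",
        "cost_per_subscriber",
    ],
    "value_capture": [             # are you monetizing efficiently?
        "trial_start_rate",
        "trial_conversion_rate",
        "subscription_price",
        "gross_margin",
    ],
}

# Summary outputs that the component metrics ultimately drive:
OUTPUTS = ["ltv_to_cac", "payback_period_months"]
```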
David Barnard:
And you already mentioned it, but where did you get all the data for this, mostly RevenueCat? And to Jacob's point, good data, great data.
Jacob Eiting:
The best data.
David Barnard:
The best.
Jacob Eiting:
You wouldn't believe the data we have, it's great.
David Barnard:
But what other data sources did you use? And then how did you pull it all together into the calculator?
Phil Carter:
I had had this idea in my head for some time. Ever since I came up with the framework, the next step had always, in theory, been, "Let's go quantify it and let's help people look at their data and compare it against other companies." The problem was, where do you find the data?
You need thousands and thousands of data points if this is going to be at all reliable. And that's part of why there just aren't many good benchmarks out there. Because there aren't very many sources of data that are large enough and reliable enough that you can build accurate benchmarks.
Jacob Eiting:
And disinterested in some ways, like Apple and Google's datasets, well, one, they're not going to let them out.
And then two, they have a bias about how they think the world should be to serve their needs. And it's not necessarily always 100% aligned with people building subscription apps.
Phil Carter:
Totally, yeah. So this past summer I talked to Rick, the CMO at RevenueCat, and basically shared this idea with him of, "I've got this framework, I want to quantify it, I need data in order to do that. Would RevenueCat be interested in partnering with me?" So that's what we did. We pulled data from the same dataset as the 2024 State of Subscription Apps Report.
So more than 30,000 subscription apps that use y'all's SDK, I think 290 million subscribers and almost $7 billion in subscription revenue represented by those apps, so that's a pretty large dataset. It can always be bigger, but that's a pretty good starting point. So in partnership with Rick and a couple other members of the team at RevenueCat, we pulled together V1 of this Subscription Value Loop Calculator.
And we can get more into what the specific metrics are and what the numbers look like, but that's where the data was sourced. The one other caveat I'll make is for the most part, we were able to get all the value creation and value capture metrics we needed from RevenueCat. Obviously, RevenueCat doesn't get as many of the value delivery metrics.
Cost per install, cost per trial, cost per subscriber, blended acquisition cost: RevenueCat doesn't have that paid acquisition data yet. So to fill that gap for this first version, we ran a one-time survey where we ended up getting almost 600 responses across all sorts of different geographies and categories. So it's a relatively broad dataset, but a much smaller sample size.
I think for future versions of the product, we'll continue to improve on it. And one of the ways I'd like to do that, is by partnering with an MMP like AppsFlyer or Adjust that can fill in some of those value delivery gaps.
David Barnard:
And then we've already started talking about it, but the next thing we should talk about is why benchmarks?
We already talked about how bad they can be, how much the source of the benchmarks is important, but how do you think of benchmarks as being helpful?
Phil Carter:
Yeah. I think where people get in trouble with benchmarks, is they try to make them the end-all, be-all. They try to do more with them than they really should be. The way I think of benchmarks is they're one tool in a very big toolkit of things, that founders, CEOs, indie developers, product and marketing leaders, individual PMs and marketers can use, to get a directional, high-level view of where they're performing better or worse than other peers in their set.
And use that information to start to hone in on where some of their biggest strategic opportunity areas are, and where they should be allocating more or fewer resources. Meaning marketing dollars or engineering bandwidth, in order to more efficiently capture value. More efficiently increase their A/B test hit rate, increase the amount of impact that their teams are delivering. But there are a number of real limitations to benchmarks.
So a few of those are they can be very inaccurate and unreliable. That's the first problem to solve for, is can I actually trust this data? And if you just go and Google or now, I guess, go to ChatGPT or Claude, or whatever, and you put in what are good benchmarks for this product? You'll get something and sometimes you'll get an answer that's better than other times. But you can't necessarily rely on just whatever metrics you find on the internet.
Jacob Eiting:
The LLMs are really bad at understanding caveats and nuances on these things. They do a really... I'll sometimes just throw some of our RevenueCat's core metrics in and be like, "Hey, where's this put us?"
And sometimes it's fine. Actually, ChatGPT specifically is pretty good at citing sources sometimes where they got the data from. But yeah, it's not the most reliable way of pulling it.
Phil Carter:
Yeah. So that's the first-order problem is can I actually rely on this data? Because if you can't, it's garbage in, garbage out.
Jacob Eiting:
Well, how was it collected? And you really have to go deep if you want to really understand, at which point, you should just go ask 10 apps.
You know what I mean? The more time you spend trying to understand the qualifications of the benchmark, the less useful it is.
Phil Carter:
Yeah. So the first-order problem is you got to make sure they're accurate. Many of them are not accurate. The second-order problem is, "Okay. Even if you're able to find a reliable source of benchmark data, oftentimes it's too generic." Some of the largest and most reliable datasets out there that provide subscription app benchmarks are providing them globally, not for a specific country.
Or they're providing them across all app categories, not specific to health and fitness, or media and entertainment or productivity, all of which have very different average metrics. So that's the second-order problem is making sure that you're comparing apples to apples, and getting benchmarks that are specific enough to be actionable.
And then the third-order problem is, "Okay. Even if your benchmarks are accurate, and even if they are specific enough to be actionable, they're only a jumping off point." They're not going to actually tell you how to improve your performance. They're just going to tell you where your performance is lacking. So even in the best-case scenario, they're a jumping off point.
And then from there, you need to apply your own understanding of your product, your business, your target customer, your competitive set, in order to make intelligent bets on what initiatives to pursue in order to increase your growth.
Jacob Eiting:
One place I see people get hung up is on that third step, and I think it maybe comes from a lack of understanding of how the benchmarks are produced, which is your first two points. Which is looking at these things in isolation, just being like, "Oh, my trial rate, I need to do things to make my trial rate better."
That's not what a benchmark is telling you. It might be correlated with that being a true and correct action to take, but it can't tell you that. All it can tell you is for this measure, assuming you get the systematics of it correct, here's what this measure is and where it puts you in the distribution.
And I think sometimes what people do is they fail to think about the process, like the inputs that led to that, which can be even within a category, can be nature of your app. Who you are in that subcategory, what's your differentiator? And folks can just dive in on like, "Oh, I got to fix conversion rate or churn."
This is one where I've seen a lot of people maybe spend too much time on retention, which sounds like something you should do: "Oh, retention." People look at the model and they go, "Oh, it's one over churn. If I just get perfect retention, I'll be a trillionaire." And there are physics limitations to how far you can push that, which is where good use of metrics comes in.
If you can understand what that particular percentile means for you, it can guide you to maybe be like, "Okay, I can back off on this because I've maybe hit the low-hanging fruit." But that's often where I see people lose the forest for the trees. They see one thing and they over-obsess about it.
And then they get frustrated because they're like, "Oh, yeah, we worked on our trial start rate for two years and we went from 10% to 12%." Which maybe that's significant, but there might've been other things you could do to really push the business.
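Jacob's "one over churn" shorthand is the standard geometric approximation: expected subscriber lifetime is roughly one over monthly churn, so LTV scales like 1/churn. A quick sketch with invented numbers shows both why retention looks so tempting and why the returns are capped by real-world churn floors:

```python
# Geometric approximation: expected lifetime (months) ~ 1 / monthly churn,
# so LTV scales like one over churn. Each point of retention is worth more
# the lower churn already is, but natural churn puts a floor under it.
def ltv(arpu_monthly: float, monthly_churn: float) -> float:
    """Margin-free LTV sketch: monthly revenue times expected lifetime."""
    return arpu_monthly / monthly_churn

for churn in (0.20, 0.10, 0.05, 0.02):
    print(f"churn {churn:4.0%} -> lifetime {1 / churn:5.1f} mo, "
          f"LTV ${ltv(9.99, churn):7.2f}")
# churn  20% -> lifetime   5.0 mo, LTV $  49.95
# churn  10% -> lifetime  10.0 mo, LTV $  99.90
# churn   5% -> lifetime  20.0 mo, LTV $ 199.80
# churn   2% -> lifetime  50.0 mo, LTV $ 499.50
```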
Phil Carter:
There's almost the midwit meme here of on the one end of the spectrum, it's like just build a good product. And then on the other end of the spectrum, you've got the Jedi master and it's just build a really good product. And then in the middle it's like, "I'm going to look at 100 different metrics and I'm going to optimize every last thing."
So this gets to the double-edged sword of benchmarks. I think they can be a very useful tool in your toolkit. Ultimately, you have to pair the science of the data and the metrics that you're using for the benchmarks with the art of applying your own intuition to understand why you're seeing variances in certain places.
And then the next question is, "Okay, I'm seeing these variances. Is this a gap that I can fill based off of improved product experience or improved marketing?"
Jacob Eiting:
Yeah, or do I actually know how to fix it?
Phil Carter:
Yeah, or is it a gap that's exogenous? Because for example, I'm a dating app and so I'm going to have high-churn rates.
Because if I have a good product, people are naturally churning off the platform.
David Barnard:
Or is it inherent to the business model too? If you're a freemium app, by definition, your trial start rate is likely to be lower. If you're freemium, you probably shouldn't even do a free trial.
So you're looking at some of these benchmarks and you have to analyze it through the lens of the business you've created, and how you want to operate your business. You can go hard paywall, that'll juice your trial start, but is that what you're optimizing for?
It's like you need to understand them through how you run your business and how you're trying to grow your business, and what you want your business to become.
Jacob Eiting:
I'm sure we've talked about this probably, Phil, in the past, but the Dutch dam analogy of you stick your finger in one hole, and then another hole is going to pop open. And it's not so binary, but there is this also this interlinked nature that often by pushing on one of these benchmarks, you're going to affect the others and you have to keep that in mind as well.
The best thing you can learn, at least speaking for me, looking at B2B benchmarks and stuff like that, is that they just give you directional insight, which is the point you were making before, Phil. It's like, feed it with your intuition also. And maybe this is, I'll say, moving progressively to the left on the midwit meme.
Over time, I've just more and more, let me just do what feels good and feels right, and then if it tracks to a metric, that's great.
Phil Carter:
I'll draw from my own experience here. So when I was leading product growth at Quizlet, there were basically two ways I went about trying to find benchmarks. The first was the one we talked about earlier, which is go on Google.
At the time there was no ChatGPT or Claude, but go on Google or find some other publicly available source of information, and just ask what's a good benchmark? That was very fast and easy and broad, but unreliable.
Jacob Eiting:
For me, it was the Evernote Series B deck that everybody used. That was the only consumer subscriptions silver lining in 2014 or whatever.
Phil Carter:
Yeah. But it's like bucket one: find broad, overly generic, publicly available resources. Fast, easy, not particularly reliable. The second bucket was on the complete other end of the spectrum, which is I had my network of other product leaders, PMs, and marketers at, given Quizlet was an education company, Chegg, Course Hero, Photomath, Duolingo.
So we would periodically jump on a call or just compare notes at a high level on, "Hey, roughly what's the band of what we can reasonably expect for something like signup rate, trial start rate, trial conversion rate, average revenue per user, LTV?" But then you get into another problem, which is that even within education, you have lots of different flavors of products. So it still can be pretty apples and oranges.
The other problem is if you're too apples to apples, you're obviously not going to share sensitive data and information, because then you're sharing data with direct competitors.
Jacob Eiting:
I think this is something, I don't want to keep drawing this off the trail here, but unless you're in a dead heat with somebody, fighting over individual user acquisition spend, it's like, what's somebody going to do if they know your trial start rate's lower than theirs?
How is that possibly going to change what they do? Which is one of the cool things in consumer subscriptions, I think: when you are competing, there are some secret sauce things around this stuff. But a lot of the time, nine times out of 10, you can just go look and copy what your competitors are doing.
So it's not like there's anything secret that can be hidden, unless you're using some crazy modeling or something like that to acquire users. It's good app, good creative, good strategy. There's no super secret sauce. Anyway, sorry, sidebar.
David Barnard:
Now that we've totally thrown benchmarks under the bus, we've got a whole rest of the podcast to talk about what it is-
Jacob Eiting:
Should we just not do SOSA? We'll figure something else out.
David Barnard:
Let me take a stab at framing the rest of our conversation. I was actually writing this morning. I don't know if this will turn into a tweet or a blog post, but the whole ideas versus execution has always really bugged me, and I haven't ever been able to fully put my finger on it. I think this morning, I came to peace with the idea, and here's the thing.
What we hand-wavingly call execution is really judgment. It's like there's a ton of ideas. Ideas are a dime a dozen, but good ideas inherently aren't a dime a dozen. And so what great execution really is isn't just doing work. It isn't being good at programming or good at marketing, it's filtering ideas down to the good ones and then executing on those.
So I think what we can talk about with the Subscription Value Loop Calculator and where benchmarks can add value, is as an input to your judgment, not as a decision maker. Not as the end-all, be-all, but as one of many inputs into your judgment of which idea is good, which idea is bad. If you listen to the Sub Club Podcast, you're going to get a million ideas of like, "Oh, we should do this on our paywall. We should do that on our onboarding."
And you can't execute on all of those, and so the great products and the great product leaders, are the ones who have the judgment to actually pick which ideas are good and then go execute on that. And I think the calculator and the benchmarks can be a good input into those judgment decisions, those millions of small decisions you make along a product journey.
So that's why the Subscription Value Loop Calculator. Go look at it and bring it up while you listen to the rest of this podcast, and make better judgments along the way.
Phil Carter:
I think that's exactly right. It's both working on good ideas, but also working on good ideas that are targeted at the right problems. And that's where, I think, a lot of companies get in trouble: they're startups, they have limited resources.
So in some cases, they're executing on good ideas, but they're good ideas focused on the wrong problems, which aren't where they have the biggest upside, and so they don't see a lot of impact from it.
David Barnard:
Or they're focusing on good ideas that are good ideas two years from now, that aren't good ideas today as a startup.
Jacob Eiting:
Yeah. I guess that you could put that under judgment, but that's also just entrepreneurial wisdom or just focus and good at resource allocation. Resource allocation is really the game, and that can be other people that could be at the beginning, you, individual time, and attention and dollars. But I'll pose or throw a counterpoint here to picking good ideas, and that I think this feeds into the topic.
But I think even the best idea pickers in the world have a 51% hit rate. Do you know what I mean? They barely exceed the median of what everybody else does. In terms of that execution bit, it's like you need to have that edge of judgment, but then you just need to have consistent application. Because that edge will compound if your competitors or your counterparts are only making correct choices 50% of the time.
And you're doing 51%, each cycle is a 1% compounding advantage. And then also if you can shorten your cycle time, that's another linear increase in your growth rate in terms of discovery. And this sounds very esoteric, I promise this is related to how to use benchmarks and things like this, and determining each day, what is the action we're going to take today to actually move the business forward?
I think we keep selling and anti-selling this concept, because the trap that we're trying to tell people to avoid is that you can very easily bury the advantages of these benchmarks and good data-driven decisions by delaying decision-making. Often, if you spend too much time trying to pick the perfect decision, you've quickly eliminated any advantage of a great decision by not doing anything.
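Jacob's compounding point is easy to check with a toy calculation: a 1% per-decision edge is invisible in any one cycle but material over a year, and shortening cycle time multiplies how many cycles you get. Illustrative numbers only:

```python
# Toy model of a compounding decision edge: a 51% vs. 50% hit rate is
# roughly a 1% advantage per decision cycle. It compounds with the
# number of cycles, so faster iteration amplifies the same small edge.
edge_per_cycle = 1.01

for cycles_per_year in (12, 26, 52):   # monthly, biweekly, weekly cycles
    advantage = edge_per_cycle ** cycles_per_year
    print(f"{cycles_per_year:2d} cycles/yr -> "
          f"{advantage - 1:.1%} cumulative edge after a year")
# 12 cycles/yr -> 12.7% cumulative edge after a year
# 26 cycles/yr -> 29.5% cumulative edge after a year
# 52 cycles/yr -> 67.8% cumulative edge after a year
```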
David Barnard:
Next thing I wanted to talk through is how you apply the Subscription Value Loop. You run a consulting business, you talk to a ton of apps about their business.
You help them make decisions via data, via judgment, via shooting from the hip, but you help make decisions. So that's the next place I wanted to go with this, is how do you use this tool? How do you use benchmarks to help make better decisions in the companies that you work with?
Phil Carter:
Yeah, it's a great question. Usually, this is a tool that I use right at the outset in the first week or two when I'm engaging with a new client, and onboarding and getting to understand their business better. And there's a few advantages to that. One is it's just a good forcing function to get all of the most important growth metrics for the business in one place.
And some clients I work with are very sophisticated and they've already got all of this organized. They've got a growth model, they've got their Mixpanel or their Amplitude dashboard, and everything's dialed in. Other clients, especially some of the earlier stage clients I work with or indie developers, this may be the first time that they're pulling some of these metrics together and seeing how all the dots connect.
So that's one advantage of doing it right away. The other advantage for me, is it's just a really efficient way for me to get it up and running quickly. And get the quick diagnosis of where are the biggest bottlenecks in the company's growth engine? And where might there be some quick-win opportunities to start putting points on the board? It's almost like X-ray vision into where is the biggest problem in the loop?
Is it value creation? Are they not actually creating a product that is resonating with users? Is it value delivery? Are they not being efficient about acquiring those users or is it value capture? They've got a great product and they're acquiring users efficiently, but they're not actually converting anybody into subscribers.
Or their price is too low, they're not getting enough revenue back per subscriber, and so it's usually a diagnostic tool that I use in those first two weeks. We don't spend a ton of time on it. Honestly, it's like I share the tool with the client. And in a one-hour call, we pull a lot of the metrics into the dashboard and we see the heat map of where the biggest gaps and opportunities are.
And then I go off and do some additional analysis at a more granular level, to figure out what some of the highest priority growth initiatives might be for us to work on together.
David Barnard:
Having done this with a bunch of clients now, and having worked on the calculator for months and months, and written a blog post about it and all that other stuff.
Do you have any key takeaways or insights that are examples of what you might want to learn from putting your data in the Subscription Value Loop?
Phil Carter:
Yeah, sure. One relatively recent example: there's an EdTech client that I've been working with where, as I said, in the first week we dug into the tool and put a bunch of the metrics in. And one thing that became very apparent was subscriber retention was really strong, so they had a product that was resonating with users.
They were able to retain their subscribers over a long period of time, better than even the 75th percentile of education apps, let alone the average. But their biggest issues were, number one, their subscriber conversion rates were quite low. And number two, their pricing was higher than the typical EdTech app in their category.
And then the third problem they were facing was their paid advertising efficiency was lower than you would hope to see. And some of that was being driven by a price that was a little too high, and an onboarding flow and a paywall that wasn't fully optimized. So those metrics didn't tell the whole story, but they gave me a map of where we might begin to explore opportunities.
How do we improve new user onboarding to convert more free users into trialers? How do we improve the paywall to maximize the excitement of those new trialers, and ultimately convert them into subscribers? And then how do we optimize pricing in order to make sure that people are more willing to pay for the subscription once their trial is over?
So that basically laid out the roadmap for the first three months of our engagement, and there were a number of wins that came out of new user onboarding optimization, paywall optimization. And then we ran a subscription survey that confirmed a lot of the same stuff that we had seen in the Subscription Value Loop Calculator.
But got into a lot more detail on the specifics of what needed to change in order to shore up some of those metrics.
David Barnard:
Yeah. Any other examples before we move on to talking through a spreadsheet on a podcast?
Phil Carter:
Yeah, a couple more examples. So there is a fitness product that I've worked with in the past, that ran into a very common challenge that consumer subscription apps run into. Particularly in the fitness category, I know Strava has run into this issue, which is giving way too much value for free.
So the way that that was showing up in the metrics, was their price was relatively low compared to their peer set, and their subscriber conversion rates were relatively low. And you can interpret that a lot of different ways on its face, but again, we ran a subscription survey.
We got into the weeds around what both free users and subscribers were saying about their free value promise, their premium value promise. And one of the insights that came out of it was the free product is so good, do I actually need to pay for the premium product? And what is my willingness to pay for that premium product?
So again, it didn't give us the full picture of exactly what we needed to go do, but it told us where we should be focusing our resources.
David Barnard:
Awesome. Well, let's dive into the calculator then. Some of you may be listening in the car, or on a walk or whatever, pulling up the calculator at some point or listening and then going to pull it up would be super helpful to help all of this make sense.
But the first thing I wanted to go through was in the calculator, you have several things that the person needs to input. It's like you have to put in some metrics. And I want to go through those and why you included those, and the importance of them in the calculator kind of helping to build that map of where opportunities might lie.
Phil Carter:
I'll lay out the 30,000-foot view of how the tool works, and then we can drill into whatever details you guys are most excited to talk about. But at a high level, one of the requirements for me with a tool like this is it needed to be simple. So it's a single spreadsheet, you go in, there's a column where you enter a couple dozen metrics. Usually, I recommend that companies enter the average across their last 12 months worth of data.
Just because if you have any seasonality in your business or if there are other peaks and valleys, a year's worth of data can help to smooth that out. Obviously, the caveat would be if you had a major change to your paywall or your pricing, or some other big variable that could affect the metrics within the last 12 months, just be careful about how that could influence the metrics.
But otherwise, look at the last 12 months, take the average performance over the last 12 months. Input that into the column, that's your company's data, and then you select the category of app you're in. So health and fitness, media and entertainment, photo and video, productivity; there are 11 different categories, or you can just look at all categories combined. And the second filter is performance tier.
So you can look at the 25th percentile, the 50th percentile, the 75th percentile, or the 95th percentile of apps in terms of how they perform. And that can be helpful as a proxy for your stage. If you're an indie developer that's just getting started, I would say focus on P50. You want to be aiming to be better than average.
If you're already a venture-funded startup and you feel like you've got a pretty strong product already, you probably want to focus on P75 or even P95. So that's the second filter you can use. And then once you've put in your app category, your performance tier and your business metrics, the tool immediately outputs your performance delta relative to your peer set.
And it shows you a heat map in red versus green in terms of where you're over or underperforming. So pretty quickly, within a few hours of doing this exercise, you've got a very basic roadmap of, "Hey, where are we over and underperforming? And where might it make sense for us to make strategic bets?"
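As a sketch of the comparison the spreadsheet performs, here's the delta-and-flag logic in Python. The benchmark numbers, metric names, and structure below are invented for illustration; the real calculator is a spreadsheet backed by RevenueCat data:

```python
# Hypothetical benchmark table: {category: {tier: {metric: value}}}.
# All values are invented for illustration, not real RevenueCat data.
BENCHMARKS = {
    "health_fitness": {
        "P50": {"trial_start_rate": 0.05, "trial_conversion_rate": 0.35},
        "P75": {"trial_start_rate": 0.08, "trial_conversion_rate": 0.45},
    },
}

def performance_deltas(my_metrics, category, tier):
    """Relative over/underperformance vs. the chosen peer-set tier.

    Assumes higher is better; for cost metrics (CPI, CAC) the flag
    would be inverted.
    """
    rows = []
    for metric, mine in my_metrics.items():
        bench = BENCHMARKS[category][tier][metric]
        delta = (mine - bench) / bench
        rows.append((metric, delta, "green" if delta >= 0 else "red"))
    return rows

my_app = {"trial_start_rate": 0.06, "trial_conversion_rate": 0.30}
for metric, delta, flag in performance_deltas(my_app, "health_fitness", "P50"):
    print(f"{metric:22} {delta:+7.1%} {flag}")
# trial_start_rate        +20.0% green
# trial_conversion_rate   -14.3% red
```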
David Barnard:
Once people have done all that and they're looking at the heat map, there's a bunch of different sections that you're going to get this heat map delta.
So I wanted to dive into those and it's so tricky, blended versus paid, how much are you spending? Are you more organic driven? All of these things are going to factor in.
So how do you think about that first section around payback period, LTV to CAC? Is that blended CAC? Is that only paid CAC? How do you figure all that out?
Phil Carter:
So the way I look at LTV over CAC ratio and payback period is they're good barometers of the overall health of the business, the strength of the unit economics, and the efficiency of the company's growth engine.
So you're not going to be able to prioritize a roadmap for your product or marketing team based on these metrics, but you'll get a good overall sense for the company's performance. So to me, LTV over CAC and payback period are the outputs of this tool. They're what's telling you, okay, if you want a target LTV over CAC of 3X, which is the gold standard for consumer apps.
And you want a payback period of ideally fewer than three months or even within the first month, how are you performing relative to those targets and relative to the targets of your peer set in your category? And then underneath that, you have the three sections in the Subscription Value Loop. So you have value creation, value delivery, and value capture.
And each of those steps have their own component metrics that drive those steps in the loop, that are ultimately leading to the output of LTV over CAC and payback period. So when I look at the tool, I'm looking first at LTV over CAC and payback period to say, "On the whole, how is this company performing versus its peer set?"
And then if they're over or underperforming, the next question is why? Where are the bottlenecks? What are the specific metrics where the company is most underperforming that we can go focus on first? So then I drill down into value creation, which is measuring how effectively the company is creating value for users.
Value delivery, which measures how efficiently they're acquiring users and subscribers. And then value capture, which looks at how efficiently they're monetizing those users.
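For the arithmetic behind those two output metrics, here's a back-of-the-envelope version under common definitions (margin-adjusted LTV; payback as the months of margin needed to recoup blended CAC). The inputs are invented, and the calculator's exact formulas may differ:

```python
# Rough LTV/CAC and payback-period math under common definitions.
# All inputs are illustrative.
arpu_monthly = 8.00        # average revenue per subscriber per month ($)
gross_margin = 0.70        # share kept after app store fees, etc.
avg_lifetime_months = 14   # derived from the retention curve
cac = 25.00                # blended cost to acquire one subscriber ($)

ltv = arpu_monthly * gross_margin * avg_lifetime_months
ltv_to_cac = ltv / cac
payback_months = cac / (arpu_monthly * gross_margin)

print(f"LTV ${ltv:.2f}, LTV/CAC {ltv_to_cac:.1f}x, "
      f"payback {payback_months:.1f} months")
# LTV $78.40, LTV/CAC 3.1x, payback 4.5 months
```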
David Barnard:
I do want to talk more about LTV to CAC and blended versus not and all that, but we can get into that in the value delivery section of the data. But the first section is the value creation, and there's a few metrics there. There's signup rate. You discussed this in the blog post, and we were discussing this before starting the podcast.
What you mean by that is registration rate: how many people actually register? Because most apps these days are going to have some kind of registration wall where you want to collect an email, you want them to create an account. I have not done that in my dinky side project apps and have regretted it for a decade now.
And it's not something I've fixed yet, but for most apps, if you're trying to build a real business, you need one.
Jacob Eiting:
That's like the most, that's what I tell people too, but it's like the meanest nag to be like, "Yeah, if you're trying to build a real business. If we're just playing around here, then don't have a signup."
But I was asking Phil about it because I didn't exactly know what it is. And this is one of the pieces of data that doesn't come from us, doesn't come from RevenueCat, because we have some distinctions around signed up versus not, but this is from the survey data, correct?
Phil Carter:
Yeah, this is one of the metrics from the survey data. It's basically account registration rate, so what percentage of installs are converting into registered users?
And it's not directly feeding LTV over CAC ratio and payback period, but it's an important proxy metric that's further upstream.
Because in most cases, if you haven't registered an account, it's very unlikely that you are going to pay for a subscription, which is ultimately what's driving revenue.
David Barnard:
Yeah. And then if you don't have their email, you can't win them back. If they don't sign up, you can't do so many of the other parts of value delivery and value capture if you don't have their attention.
And I think you had mentioned that activation rate is really the metric you would want to have in this report, because you want an activated user.
Whether they sign up for an account or not, as we just said, is important, but what you really want is somebody who's experienced the value, somebody who's activated. But that's so fuzzy.
Jacob Eiting:
A signup could be an activation, it just depends on how you define it, right?
Phil Carter:
Well, and this goes back to what we were talking about as far as the reliability of data, activation rate would be a great metric for this tool. The reason we haven't included it so far, is because the activation metric across different products looks so different, and so it's really hard to compare apples to apples.
So for that reason, signup rate or account registration rate is a good enough proxy for whether installs are taking that first step of creating an account. And then you start to get into the other value creation metrics, which are mostly around retention. So both for monthly and annual subscribers, what is month one, month two, month three, month six, month 12 retention rate?
What is year one and year two annual retention rate? And then that data can be used to calculate an average lifetime for both monthly and annual subscribers, which is obviously critical for driving those LTV over CAC and payback period metrics.
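One common way to turn those retention points into an average lifetime is to treat the curve as a survival curve: sum the observed monthly survival rates, then extend the tail geometrically at the last observed month-over-month rate. A sketch under that simplifying assumption (not necessarily the calculator's exact method):

```python
# Estimate average subscriber lifetime (months) from a retention curve.
# survival[m] = share of a cohort still subscribed after m+1 months.
# Expected lifetime = sum of survival probabilities; the unobserved tail
# is extended geometrically (a simplifying assumption), so this needs at
# least two points and a declining curve.
def avg_lifetime_months(survival: list[float]) -> float:
    observed = 1.0 + sum(survival)                  # month 0: all active
    tail_retention = survival[-1] / survival[-2]    # last m/m retention
    tail = survival[-1] * tail_retention / (1.0 - tail_retention)
    return observed + tail

# Illustrative dense monthly curve; in practice you'd interpolate the
# M1/M2/M3/M6/M12 points discussed above to monthly granularity.
curve = [0.60, 0.48, 0.42, 0.38, 0.35, 0.33]
print(f"average lifetime: ~{avg_lifetime_months(curve):.1f} months")
# average lifetime: ~9.0 months
```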
Jacob Eiting:
Yeah. Which is an interesting way of collecting the data and actually looking at it, not just looking at my blended monthly, because that has a lot of effects from cohort composition. But then also stretching that out a little bit to 1, 2, 3, 6, 12 helps, because some apps have really fast drop-off and some apps have later drop-off.
And that's probably a super good example of a case where you need to really apply context: whether a six-month drop-off or a one-month drop-off that's higher or lower than benchmark probably has more to do with the nature of your app and its cyclical nature, or seasonalities and things like that.
But it's good to bring that in, rather than just the metric later on that feeds in, which is the average monthly periods. Which, as we were talking about earlier, is a tricky one, because that number tends to float and have biases based on the amount of time you've been collecting data.
But you can put in a rough idea that is helpful in terms of estimating LTV. But I guess, Phil, when you pull in that curve, what are some of the conditions you might look at there to be like, "Oh, maybe this customer needs to do X or Y"? If they have short-term drop-off or maybe long-term drop-off, are there examples you can think of where there's some interesting variance?
Phil Carter:
Well, one very interesting example I'll give you is I had a client I looked at somewhat recently where annual subscriber retention rate was much better than monthly subscriber retention rate.
And obviously in general, you're going to get longer retention from annual versus monthly subscribers.
Jacob Eiting:
Mostly because if you've got 50 bucks to drop, you're probably just a stickier customer, period. That's always been my assumption for that.
Phil Carter:
Yeah, they're higher-intent users and they've committed more upfront. But even relative to the benchmarks for their peer set, their annual subscriber retention outperformed, while their monthly subscriber retention underperformed.
And then combine that with an insight from the value capture component of the tool, which was that their annual subscription price was higher than their peer set. The insight was, "We should look at our annual subscription price relative to monthly, and potentially offer a more generous annual subscription discount."
And we should run additional optimizations on our paywall to try to nudge more users from monthly into annual subscription plans. Because we know that if we get a user to convert into an annual plan, we are significantly outperforming our peer set on annual subscriber retention, but the opposite is true for monthly.
So there is a lot of upside, even more so than the normal upside to converting more users into annual subscriptions. So you can start to see how some of these pieces fit together. It is hard talking about a spreadsheet on a podcast, but you can see how the pieces fit together and how it leads to actionable next steps a company can take based off of the numbers you're seeing.
David Barnard:
This is one I wanted to dive into some of the caveats though. I was talking to an app recently that has below median retention rates. I think their annual retention rate was something like 30%, but they're driving most of their traffic through paid ads, and they're getting day 45 return on ad spend. I think this is like what business are you in? What business are you building? How are you operating your business?
And when you see that and you think, "Okay, that's where we need to focus. We got to get retention higher." Yes, but the way you're running the business and being able to get that 30-day return on ad spend, 45-day return on ad spend, you're inherently bringing in a lot more people, and lower-intent users. And man, if you're retaining 30% and you're getting 45-day ROAS, you can stack those cohorts over time.
Jacob Eiting:
Stop worrying.
Phil Carter:
This brings up another great point, which is this is oversimplifying things, but there is a little bit of a dichotomy I've seen between earlier stage consumer subscription businesses, in some cases indie developers, that have built really efficient engines at rapidly converting free users into subscribers.
Making sure their D45 ROAS, or even, in Opal's case, their D8 ROAS, is really, really efficient. So they are printing money in the short term and that's great, that's a great small business. But then on the other end of the spectrum, you have the larger consumer subscription businesses that are trying to get to hundreds of millions, if not billions of dollars in valuation.
It's a much smaller ecosystem, but that's where what got you to greatness in the first category won't get you to greatness in the second category. If you're shooting for one of those really big grand slams, then just having really efficient D8 or D45 ROAS won't get you there. Because you need to find ways to continue to grow organically, so that you're not overrelying on paid ad spend to begin with.
You need to find ways to increase subscriber retention, because that's the foundation for everything else. So this is where one of the other filters we want to add to this tool is company stage. Because if you're a seed stage or Series A startup, by all means focus on just being really efficient at acquiring users early on, and making sure you're getting fast payback periods.
But if you're a Series C startup and you're shooting for a billion dollar valuation and eventually going public, you have to look at the metrics in totally different ways.
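For reference, the D8 and D45 ROAS being discussed is just the revenue a cohort has returned within N days of install, divided by the spend that acquired that cohort. A tiny sketch with invented numbers:

```python
# Day-N ROAS: cumulative revenue from an acquisition cohort within N days
# of install, divided by the ad spend that acquired the cohort.
# All numbers are invented for illustration.
def day_n_roas(revenue_by_day: dict[int, float], spend: float, day: int) -> float:
    revenue = sum(v for d, v in revenue_by_day.items() if d <= day)
    return revenue / spend

cohort_revenue = {7: 4_000.0, 30: 5_500.0, 45: 3_000.0}  # incremental $
ad_spend = 10_000.0

print(f"D45 ROAS: {day_n_roas(cohort_revenue, ad_spend, 45):.0%}")
# D45 ROAS: 125%  (above 100% means the cohort has paid back its spend)
```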
David Barnard:
You're not going to get there with 30%. It takes too many years.
Jacob Eiting:
I don't even think it's possible. Because what happens is, as you scale, you're going to expire those users, because you didn't build a reliable engine. And the corollary to what you were just saying, Phil, seed-wise, is like, "What game are you playing? What's your game?"
I've seen this, and it's not necessarily a bad thing to be like, "Yeah. Let's rush to get some sort of engine of something here." Because often, that can be the chip or the lifeline you need to then worry about the next stage.
Double-edged sword, again: sometimes you can get in there and end up stuck in a suboptimal local maximum. And now you've maybe traded away some brand that's hard to recover, and things like that. But again, it depends on the game you're playing.
David Barnard:
But if you're spinning off cash, to Phil's point earlier, then it's easier to make those next investments.
Jacob Eiting:
Right, exactly. Yeah. You earn the right to be like, "Okay, how do we actually reinvest?" That's how businesses historically grew without using outside capital: you generate some free cash flow through some positive ROAS, and then you can reinvest that in R&D and other things like that.
I made the comment about gears, but it's like you start in first gear, and it's maybe just this rapid "get some money back." And then, "Okay. Great, we've expired, we've topped out in first gear." And now we have to think about, "Okay, what's second gear? How do we increase our leverage and actually go a little bit further?"
And that will change. This is why this is not a do-once-and-forget sort of thing. I want to keep us moving, so maybe we should jump onto value delivery, the inputs to this, because I think this is interesting. You talk about cost per install, and it made me think about how RevenueCat's system is different.
But when I think about CPI or cost of acquisition or whatever, I just take the number of installs, which for us would be like a signup, and divide my entire sales and marketing budget by it. So I'm curious, how do you suggest people put together a rough number for that? Is it very inclusive, or is it just literally like, "How much did I spend on Facebook?"
Phil Carter:
The way I think about this is there's aggregate cost per install, and so that can be across all of your different paid acquisition channels and can also include organic acquisition. So you're taking an overall blended cost per install, and that's most helpful for looking at the summary metrics of LTV over CAC and payback period as an entire business.
But then if I'm a performance marketer and I want to evaluate the efficiency of my individual performance marketing channels, Facebook, Instagram, TikTok, Google, whatever the case might be. Then I'm looking at a very different set of metrics, which is my paid cost per install through each of those individual channels, so that I can compare the efficiency across all of them.
Jacob Eiting:
But you have to look at them like they're really only useful relative to each other, and this is where benchmarks break down as you zoom in. Because what other people consider a cost per install, or a cost per X, may or may not include your own salary if you're a performance marketer. Do you know what I mean?
And that might be a bigger question for founders and stuff to determine how they do stuff. My two cents would be, and I think, Phil, the vision of this is to be very high level: if I were you, I would probably include whoever's working on day-to-day ad creative, all that stuff. That probably all should go in.
Maybe not if you're just trying to evaluate a model where, as you said, as a performance marketer, you rip a big ad spend and make sure you're making that money back and it stays above the line or whatever. But if you're actually thinking about running a business, you probably should be considering those costs as well.
David Barnard:
I think that gives for a better blended metric too, because the whole idea of the blended metric is that you're accounting for organic.
So if somebody's creating TikTok videos and you're not putting money behind them, it's not part of your paid spend, but you're paying for those installs. So if you have 20 people creating organic TikToks, like that's marketing spend, that's paid.
Jacob Eiting:
So what do you consider a cost of organic?
Phil Carter:
I think there's two different ways of looking at this depending on what your goal is. If you are the VP of finance and you want to understand exactly where all the dollars and cents are going. Then absolutely you want to find a way to load in the cost of your marketing team or the cost of your sales team in the case of a B2B business.
But if you're looking at the health of the business in terms of the unit economics, then generally you're not going to layer in SG&A costs. Because you want to focus on per dollar of marketing spend, how many users are you getting back?
And the reason for that is, as you scale into a larger and larger company, then the percentage of your cost basis that's coming from things like a marketer's salary, gets to be a smaller and smaller and smaller percentage. So the way I've built this tool, it's very much focused on the unit economics.
So I'm not looking at the cost of the sales and marketing team in these value delivery metrics. I'm looking at the blended average cost per install, cost per trial and cost per subscriber. When you look at users you're acquiring organically, that could be through viral word of mouth, it could be through posts on social media.
But those are installs, trials, and subscribers you aren't directly spending marketing dollars on, so they get averaged against your paid marketing spend. Yeah.
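A sketch of the distinction Phil is drawing, with invented numbers: blended CPI spreads all paid dollars across all installs, organic included, while paid CPI is judged per channel against only the installs that channel drove:

```python
# Blended vs. paid cost per install. All numbers are illustrative.
channels = {                   # paid channels: (spend $, installs)
    "facebook": (12_000.0, 4_000),
    "tiktok":   (6_000.0, 3_000),
}
organic_installs = 13_000      # installs with no direct marketing dollars

total_spend = sum(spend for spend, _ in channels.values())
paid_installs = sum(installs for _, installs in channels.values())

# Blended CPI: all marketing dollars over ALL installs, organic included.
blended_cpi = total_spend / (paid_installs + organic_installs)
print(f"blended CPI: ${blended_cpi:.2f}")           # $0.90

# Paid CPI per channel: for judging each channel's efficiency.
for name, (spend, installs) in channels.items():
    print(f"{name} CPI: ${spend / installs:.2f}")   # facebook $3.00, tiktok $2.00
```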
Jacob Eiting:
Which I guess actually is maybe the inverse of the question of using these benchmarks correctly again, and something you want to be careful of at a smaller scale. What I mean there is you have to think about what your actual motion is, and where your efforts and energies are actually going.
Because that's what ultimately you're trying to do, is get some insight about how you should invest that incremental energy, and where's that falling now on these inputs? And they might actually be correlated, but you're right, at scale, these systems are leveraged. So one performance marketer can do 10K a month in ad spend, 100K, a million probably with a similar amount of headcount potentially.
Phil Carter:
Exactly right. Yeah. As you get larger and larger, what really matters more and more is your marketing budget versus the cost of your marketing team.
And so the tool is meant to look at it from a unit economic standpoint.
David Barnard:
How do you think about, bringing it back to the earlier section, how do you think about payback period and customer acquisition costs in relation to all these things, and then especially the blend between paid and organic as well?
Because I think for some companies who have amazing organic and start spending on paid, it's easy to look at the blended numbers and then let those payback periods extend further and further out in time. Because the money's just rolling in, because you have such a great organic base.
But how do you think about balancing all those things of how much you spend? What you expect of your spend versus what you expect of your organic? And how do you balance all those things?
Phil Carter:
I think the most efficient way to answer that is it goes back to the science and the art of interpreting benchmarks like this. So if you look at your value delivery metrics and you see that you're very, very efficient on cost per install, cost per trial, and cost per subscriber, there are a few different reasons that could be driving that.
One is you're getting the vast majority of your acquisition through organic. You might be really, really inefficient at paid marketing, but that's being offset by a huge percentage of your users and subscribers coming through organic channels, so that's one possibility. A second possibility is the opposite.
You're getting a good number of your users and subscribers from paid channels, but you've got a really good performance marketing team and you're really, really efficient at acquiring those users. And there are a few other flavors of what could be driving numbers like that.
So you start with the science, which is like, "Let me look at the numbers and let me look at where I'm over and underperforming." But then you got to get into interpreting why is this the case? So another example I'll give you of a client I've worked with, where we specifically looked at these value delivery metrics, was they were acquiring a significant percentage of their users through paid channels.
So they were very, very efficient. If you looked at their paid cost per install, cost per trial, cost per subscriber, they would've been among the best in class at acquiring users through Facebook. But they were overreliant on paid acquisition spend overall in terms of acquiring users relative to organic.
So one of the first things that I worked on with that client was figuring out how to wean them off of the drug of paid acquisition, and how to acquire more users through word of mouth or SEO, or other non-paid channels.
David Barnard:
That's a great example. And even my own app, I have this side project that I talk about on the podcast a bit. I haven't spent any money on paid acquisition this year, and I made like 100K.
So if I plug those numbers in, my LTV to CAC is incredible. It's amazing.
Jacob Eiting:
Again, it goes back to my point, nothing else went into making that 100K happen at all.
David Barnard:
To the moon.
Jacob Eiting:
It was just free money.
David Barnard:
But yeah, everybody has to figure out what business they're building, how they're going to build it.
Why the blend that they're seeing, why the numbers make sense, and how they want to move forward from that. So great context for all of that.
Phil Carter:
When you're an early-stage startup, number one, hopefully you're able to get more user acquisition organically than you will once you start to get beyond your early adopters.
Then you have to start paying more and more money to acquire users, because they're just lower intent, outside of your ideal customer profile. And then the second thing is, across all of the other metrics, value creation and value capture, early adopters tend to outperform the late majority.
So what that means is you have to interpret these metrics through the lens of is this sustainable? So to use your example, David, if you're an indie developer, an early-stage startup, and the vast majority of your acquisition is organic, and so your LTV over CAC is to the moon. That's great in the here and now, but it won't last.
Eventually, if you scale to a certain size, history tells us every consumer subscription business at some point is going to hit a point where they have to spend more and more dollars on performance marketing in order to sustain growth if that's what they want to do.
So it's another great example of where you have to apply your own intuition in looking at this data, and projecting ahead to how the numbers are likely to change as you scale.
David Barnard:
Yeah, so let's talk about that next section, value capture.
You've gotten the person into the app; now you need to monetize them. So let's talk through some of the metrics there.
Phil Carter:
Yeah. I've said this before: of the three steps in the Subscription Value Loop, I think value capture is the one that most often gets overlooked, because especially at early-stage companies you have a lot of product-driven founders. They want to over-deliver value for their customers, and so, rightfully, they're investing a lot of their resources into just building a great product and getting people to talk about it.
But at some point, you have to capture resources back from your best users, who are generally your subscribers, in order to reinvest in the business and keep growing. Value capture is all about that: converting free users into subscribers.
So it looks at how efficient you are at converting users through the subscription funnel: trial start rate, trial conversion rate, and install-to-paid conversion rate. It also looks at your pricing: what are your annual and monthly subscription prices?
Right now, I don't have weekly subscription plans in the tool; that's another thing we could add in the future. But it looks at your price for each of your different subscription plans and tiers. And then it also looks at things like your subscription plan mix: what percentage of your subscriptions are annual versus monthly?
And your gross margins: how much of the top-line revenue you're getting per subscriber are you actually retaining as a business after you've paid things like app store fees?
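As a rough illustration of how these value capture metrics fit together, here's a short sketch with hypothetical inputs: install-to-paid as the product of trial start rate and trial conversion rate, plus a plan-mix-weighted revenue per subscriber net of an assumed 30% app store fee:

```python
# Hypothetical funnel and pricing inputs, for illustration only.
installs = 10_000
trial_starts = 800
paid_conversions = 320

annual_price = 59.99
monthly_price = 9.99
annual_share = 0.60   # plan mix: 60% of subscriptions are annual
app_store_fee = 0.30  # assumed fee; many apps qualify for 15%

trial_start_rate = trial_starts / installs                  # 8.0%
trial_conversion_rate = paid_conversions / trial_starts     # 40.0%
install_to_paid = trial_start_rate * trial_conversion_rate  # 3.2%

# First-year revenue per subscriber, weighted by plan mix.
# Optimistic simplification: assumes monthly subscribers retain all 12 months.
gross_arpu = annual_share * annual_price + (1 - annual_share) * monthly_price * 12
net_arpu = gross_arpu * (1 - app_store_fee)  # what the business actually retains

print(f"Install-to-paid: {install_to_paid:.1%}")
print(f"Gross ARPU: ${gross_arpu:.2f} -> retained after fees: ${net_arpu:.2f}")
```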
David Barnard:
One of the things I really want to do at some point, maybe for a State of Subscription Apps Report 2027, when we've got 10 data scientists and a whole team around this and version five of the Subscription Value Loop Calculator, is a full-funnel benchmark.
Because when you get to this stage, you start to say, "Well, my trial start rate's really low, but my trial conversion rate is amazing, so my overall install-to-paid is really high." I think it would be powerful if we could do full-funnel benchmarks and look at the apps that perform well across the whole funnel.
And then going all the way through retention: not just looking at subscriber acquisition, but going two or three years down the road. That's where you start to see that very few apps are going to be P95 on every single metric along the entire chain.
Usually, you're going to have a strength somewhere in the chain where you're the real outlier, and that makes the business work even if you're below benchmark on another metric.
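A quick sketch of David's point, with invented numbers: two apps with very different funnel shapes can land on an identical overall conversion rate, so single-metric benchmarks would flag each of them as weak somewhere.

```python
# Two hypothetical apps with very different funnel shapes.
apps = {
    "A (hard paywall, high-intent trials)": {"trial_start": 0.05, "trial_conv": 0.60},
    "B (generous free tier, low-intent trials)": {"trial_start": 0.15, "trial_conv": 0.20},
}

for name, f in apps.items():
    install_to_paid = f["trial_start"] * f["trial_conv"]
    print(f"App {name}: install-to-paid = {install_to_paid:.1%}")

# Both land at 3.0% install-to-paid, yet against single-metric benchmarks
# App A looks weak on trial starts and App B looks weak on trial conversion.
```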
Phil Carter:
That's a great callout, and you see it anecdotally with a lot of the top consumer subscription apps in different categories. It's not like they're P95 across every single metric; in fact, it'd be really hard to do that.
Because by definition, if you're really, really efficient at converting free users into subscribers, you're probably going to have a harder time retaining them, because you're getting some lower-intent users into your subscription funnel.
Similarly, if your prices are really, really high, so you're in the highest percentile on revenue per subscriber, that might negatively impact your ability to convert subscribers in the first place. So there are natural checks and balances on some of these metrics.
I think the important thing, and what I generally tell my clients, is that ideally you find a way to be good enough across the majority of these metrics. You don't want too many areas with glaring deficiencies. Again, there are outliers, like dating apps, where you're naturally going to have higher churn rates.
That's part of why you have the category filter. But you want to do your best to be within striking distance of at least average, if not P75, across as many of these metrics as you can. And then, more often than not, there are one or two metrics where you're just outstanding. That's where you really outperform.
Tinder is a great example. I teach a case study on Tinder in my course with Ravi Mehta, who's the former CPO there. Tinder and some of these other top dating apps have gotten really, really good at both converting new users into subscribers and offering different subscription tiers that maximize the amount of consumer surplus they capture.
Because they know they're only going to be able to retain subscribers for so long, their subscription retention rates are never going to be as good as those of category-leading apps in other categories. But they can make up for that by being really, really efficient at capturing value from users in their first three to six months.
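To see why front-loaded capture can offset weaker retention, here's a toy LTV model with invented prices and retention curves. Under a simple geometric-retention assumption, a high-price/high-churn profile can roughly match a low-price/high-retention one, with most of its value landing in the first six months:

```python
def ltv(monthly_price, monthly_retention, horizon_months=24):
    """Expected revenue per subscriber: price * P(still subscribed) each month."""
    survival, total = 1.0, 0.0
    for _ in range(horizon_months):
        total += monthly_price * survival
        survival *= monthly_retention
    return total

# Invented profiles: dating-style (high price, high churn) vs.
# utility-style (lower price, strong retention).
dating = ltv(monthly_price=30.0, monthly_retention=0.70)
utility = ltv(monthly_price=8.0, monthly_retention=0.95)
dating_first_6 = ltv(monthly_price=30.0, monthly_retention=0.70, horizon_months=6)

print(f"Dating-style 24-month LTV:  ${dating:.0f}")   # ~$100
print(f"Utility-style 24-month LTV: ${utility:.0f}")  # ~$113
print(f"Share of dating LTV captured in months 1-6: {dating_first_6 / dating:.0%}")  # ~88%
```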
David Barnard:
The other lens to think through this section with is stage, which you brought up earlier. I think Duolingo is a good example. I haven't gone through their 10-Q and their publicly shared stats to verify this, but I would assume their signup rate, or their activation-to-subscriber rate, has historically been on the lower side.
Because they're driving a ton of usage in that freemium tier, they've built up this massive freemium base. Well, what are they doing now as a public company worth $10 billion? That massive free user base is now their opportunity: they can start monetizing those users better through ads, through converting them to subscriptions, through other methods.
But they can also get better and better at converting the new free users coming in into paid subscribers, so there's a time component to this. Duolingo is obviously P95 in several key metrics, and in the areas where they're not, at a $10 billion valuation there are a lot of levers for them to pull to get better.
Phil Carter:
I think that's exactly right. And if you look at how Duolingo's product has evolved over the last few years, a lot of people don't know this: Duolingo spent its first five years as a company focused exclusively on the free user experience.
They launched in, I think, 2011 or 2012, and didn't introduce their Duolingo Plus subscription until 2017, so for years they didn't really monetize their user base at all.
Jacob Eiting:
You know the Elevate story: we were doing brain-training apps that were paid upfront, and then Duolingo came out.
And it was like, "Okay, we can't win that." They did a really good job of essentially turning venture capital into a monopoly.
Phil Carter:
Right. Yeah, it becomes a real competitive moat at some point. But at this point, Duolingo is a really large company. They're worth over $10 billion now, so the cost of acquiring incremental users gets higher and higher. They fight against that by creating a better and better product, and now they're expanding into math and music.
That's unlocking early adopters in new categories. But on the whole, they're having to expend more resources to acquire the marginal customer, which means they have to get more efficient at capturing value from their subscribers and their free users. So if you look at Duolingo now, they're adding more ads to the free user experience.
That increases the LTV of non-subscribers. It also lets them nudge more free users into becoming subscribers, because people get annoyed at the advertisements and say, "Fine, enough is enough. I'll pay for Duolingo Plus or Duolingo Max." You can see that strategy playing out in their product evolution.
David Barnard:
So I did want to give you a chance to talk briefly about the future of the Value Loop Calculator. I know you've got some stuff in the works and the 2025 report will be out soon.
And you're going to incorporate that data into the calculator, so it'll be updated in the relatively near future. Any other updates top of mind?
Phil Carter:
Yeah. Well, first of all, I know this is a RevenueCat podcast, but I do want to thank Rick and the rest of the team at RevenueCat who helped pull this first version of the tool together. Without the dataset, the tool wouldn't exist. That was a great experience, and I'm really excited to partner with you all again next year, once the 2025 State of Subscription Apps Report is ready, to do V2.
A number of things will happen with V2. The first and most obvious is that you guys are growing quickly, so the sample size of apps is just going to keep getting larger. I think it was 30,000 apps for this first version; I assume it will be larger for V2. The second thing we want to do is add more filters.
I mentioned before how critical it is to compare apples to apples and make sure you're looking at apps within your category and within your performance tier, but ideally you also want to be able to look at apps within your geography. We know that US users tend to monetize at higher rates than international users, but they're also more expensive to acquire.
So being able to cut by US versus Europe versus Japan, South Korea, and a couple of other geographies would be a great filter. Being able to filter by iOS versus Android, or even web, is another one I'm really excited to add eventually. And then, though these will probably be more difficult, being able to filter by company stage and a couple of other variables would be great as well.
And the last thing, which we talked about a little earlier, is that there are a couple of weaknesses in this first MVP version of the tool, the biggest being that the value delivery metrics come from a survey; they're not coming from the data in RevenueCat's SDK. So it's either finding ways to get that data from RevenueCat, or finding another partner like AppsFlyer or Adjust that can pull in the value delivery metric data.
David Barnard:
Awesome. I'm really looking forward to the future of this calculator. And internally at RevenueCat, we've been talking about more and more ways we can surface aspects of this in our own dashboard and help developers spot areas of weakness.
So I'm looking forward to your work and to what we can do to productize aspects of these ideas. But anything else you wanted to share as we're wrapping up? I know you actually have a new live course launching in January, right?
Phil Carter:
Yeah, just a few things. Number one, if you're interested in learning more about the work I do, I have a website, philgcarter.com, and I'm @philgcarter across Substack, LinkedIn, and X. So if you're interested in the blog post I wrote on the Subscription Value Loop or the Subscription Value Loop Calculator, you can check those out @philgcarter on Substack.
You mentioned the course: I do have a live course that I teach on Maven, and the next one actually launches in mid-January. You can check out the Consumer Subscription Growth course at maven.com, and if you use the promo code Sub Club, I'm offering a 10% discount, so feel free to use that. And lastly, my core business at this point is being a full-time growth advisor and angel investor.
So I work with about half a dozen clients at a time. I'm at full capacity right now, but I'll likely have capacity opening up in Q1 next year. So if you're interested in potentially working together on a more intensive basis, I'd love to hear from you.
David Barnard:
Awesome. Phil, thanks so much for joining us. This was super insightful and thanks for the work you're doing in the community too.
We provided the data, but you built the tool and I think it's going to be super helpful to a lot of folks.
And then thanks for being so generous, sharing how you use it and how to apply it in building better businesses.
Phil Carter:
Yeah, well, likewise. I said this when I was at the RevenueCat annual event, but you guys have really become thought leaders in the community. You've aggregated this wonderful group of people.
And it can be lonely building apps, and so having this community of other people to compare notes with and share data with is really helpful. And it's always a pleasure to partner with you guys.
Jacob Eiting:
Well, we'll have you back for Christmas next year, I'm sure.
Phil Carter:
Sounds great. That seems to be the timeline for us.
Jacob Eiting:
Yeah. All right. Thanks, Phil.
David Barnard:
Thanks, Phil. Thanks so much for listening. If you have a minute, please leave a review in your favorite podcast player.
You can also stop by Chat.SubClub.com to join our private community.