On the podcast: How to profitably scale performance marketing, hard vs soft activation, and why you should keep an extra close eye on your marketing spend in November.
⬇️ To effectively scale your performance marketing, grasp your app's funnel from the top down. Start by honing in on app installs, enabling both you and the algorithms to learn. Progress down the funnel — optimizing for different app events as you go — but be prepared for corresponding budget hikes. Throughout this journey, continually test and iterate.
⏲️ Test your ATT prompt timing, starting with first app launch. While the ideal placement for an ATT prompt may vary per app, consider starting your tests with the prompt at first app launch. Surprisingly, this timing has shown minimal impact on sign-ups, trials, and conversions in some cases. Use this as a baseline for your own tests to find the most effective timing for your app.
🎨 Feed your always-on campaigns with rigorously tested ad creatives. Start with a control and multiple variants differing in one element. After identifying the best message, test it across various design formats. This iterative process builds a portfolio of effective creatives for your always-on campaigns, which you can then, ideally, leave untouched while testing continues elsewhere.
⏯️ For a clear view of product-market fit, track hard activations. These are meaningful actions — such as listening to multiple stories — that reveal user commitment. Don’t make the mistake of thinking a trial-start makes a user engaged. Use these insights to refine the user journey. Make it as easy as possible for users to reach the desired level of engagement.
🍂 Master the seasonal ad cycle: November is a tough month for ad spend, so pivot to awareness campaigns to navigate the rising costs. Come December, particularly after the 15th, you’ll find a rebound in favor of digital products as e-commerce spending drops. This period, often referred to as “Q5,” is also an excellent time to leverage gifting strategies and New Year messaging.
About Hannah Parvaz
👩‍💻 Founder of Aperture, a full-service growth partner making good companies better by helping them to change the world in a positive way.
💪 Hannah has helped hundreds of apps grow, and was previously recognized as a 5-star mentor at GrowthMentor, taking home both App Marketer of the Year and Consultant of the Year awards. She previously worked with learning app Uptime, narrated journalism app Curio, drink app DUSK, and music app DICE.
💡 “Every app is different: Everyone needs different levels of success, but also everyone has a slightly different strategy.”
Links & Resources
[2:01] Personal growth marketing: Former IBM CEO Ginni Rometty said, “Growth and comfort do not coexist.” Hannah feels this is true of her professional trajectory — an encouraging reminder for everyone in the app space.
[5:02] Scaling performance marketing: The first question you need to ask is, How are you measuring what you’re scaling?
[8:25] The measure of success: Hannah recommends A/B testing and analytics to build out funnels.
[11:05] Post-ATT ad performance: Experimenting with creatives relies on specific goals and tests based on one control and multiple variants. Lots of experimentation and analysis are the keys.
[16:02] Conversations with customers: Android’s Google Play store isn’t quite as stringent as Apple’s App Store, but app teams still need to know how many trials and installs they’re aiming for in order to hit their growth targets.
[18:17] Subscription focus: Hannah takes a blended perspective to subscription apps, looking at funnel steps and where the biggest opportunities are, then moving into product.
[20:52] Action hierarchy: Developers need to figure out how many first meaningful actions and core actions must take place for users to truly activate.
[25:31] Finger in the air: Tracking ROI in the early stages is a guessing game, but once reality matches expectations, the activation average always becomes clear.
[31:26] The blended perspective: Optimizing performance starts with looking at each channel in isolation and closely monitoring performance.
[34:31] All the leaves are brown: Halloween brings a curse that lasts throughout November, a tough season for marketers and advertisers. Costs skyrocket, then drop in mid-December for “Q5,” which lasts until the beginning of January.
Welcome to the Sub Club Podcast, a show dedicated to the best practices for building and growing app businesses. We sit down with the entrepreneurs, investors, and builders behind the most successful apps in the world to learn from their successes and failures.
Sub Club is brought to you by RevenueCat. Thousands of the world's best apps trust RevenueCat to power in-app purchases, manage customers, and grow revenue across iOS, Android, and the web. You can learn more at revenuecat.com. Let's get into the show.
Hello. I'm your host, David Barnard, and my guest today is Hannah Parvaz, founder of Aperture, a full-service growth partner making good companies better. With more than a decade of experience in the app industry, Hannah has mentored and helped grow hundreds of companies. On the podcast, I talk with Hannah about how to profitably scale performance marketing, hard versus soft activation, and why you should keep an extra close eye on your marketing spend in November.
Hey Hannah, thanks so much for joining me on the podcast today.
Hi, it's so nice to be on here. Thank you for having me.
Well, you are one of the most requested guests. I've had several people say "We need to get Hannah on," and I think the reason is, and I've talked about this with Thomas Petit, that when you work with a lot of different companies over time, you just build a lot of experience and get to see a lot of things that the average person who just works with one app for most of their career or for the last five years or whatever, they just don't get the variety of experience that someone like you gets working with so many different companies.
So yeah, I'm excited to really share some of those learnings that you've had working with so many different companies over the years. How did you get into growth consulting and product and all these things you do in order to end up working with so many companies over time?
Well, it was never my plan, you know. Growing up, I was like a six-year-old going, "I'm going to be a lawyer," and then all of a sudden I found myself in a little basement working with a load of people on a little app. It was kind of a bit shocking, but it started with me being in the music industry. So I was in the music industry and live music, artist management, record labels and so on, and then I got into music tech and joined a music app, and then the rest is history.
But for me, I'm someone who has always been very much a multitasker, so I learn by being challenged, and that's always been the case. I was just thinking of this quote that I was talking about last week, which is "Growth and comfort do not coexist," which Ginni Rometty, who was the CEO of IBM, said. But I just always wanted to find more challenges, and I felt like I was always kind of learning by doing.
So around 2016, '17, I started doing a lot of bits on the side, coaching, mentoring, and I think that started when I went through 500 Startups, for the most part. So I learned a lot and I met a lot of really interesting people. And then soon after that, I got asked to mentor for Google, and then one thing led to another, I started doing GrowthMentor, which is this fantastic platform for mentoring founders or C-suite or anyone really who needs support, and I just caught the bug, juggling lots of balls.
And then how did that turn into Aperture? In the last 18 months or so, you've founded an app agency, a growth partner where you've now hired a bunch of people, and you're working with a lot of companies. How did you transition from that in-house work, mentoring, and side hustles to running your own agency?
Yeah, it was a year old last week, actually, so really...
I know, happy birthday. Yeah, for sure. After I left my last full-time in-house gig, which I was at for a year, amazing company, we got some incredible results. Of course, I'd been working with a fair few products on the side and doing freelancing with them or running their ads on the side or doing consulting and product strategy with them, and it just felt like the immediate and organic next step. Like, "Oh, well, I may as well formalize this. I'm loving so much working on all of these separate challenges and separate projects, so why don't I just do that full time?" And then it just kind of snowballed and snowballed.
So I was working on it on my own, pretty much, until January when we brought in our first team member, and now we're six since January, so that's really awesome, and we're still growing, so it's amazing. Aperture has been a fantastic year, you know?
Even before Aperture, like you said, you've been working with so many different companies and worked on app growth yourself, and that's what we wanted to talk about today. I hear from a lot of folks who start dipping their toes in marketing and see some potential there, run a few Facebook ads, do some app store search ads, whatever those kind of first steps look like for them... But then scaling is where it really starts getting challenging. I know that's something you spend a lot of time thinking about. So what would you say is the first step in scaling performance marketing?
Whenever we're thinking about scaling for performance marketing, let's say, we have to have some targets in place, that's the very first thing I think about. How are we actually measuring what we're scaling? So let's say we're going to be starting working with a new app. The very first thing we'll always do is start running app install campaigns, first of all. So very top of funnel, get people to download, figure out what messages are working, what creatives are working with that, so that people can start to come through. The algorithms can start to learn. We can start to learn as well. Our human intelligence can start to learn which kind of things are performing well.
And then as we go further and we're able to spend more and more, we start moving farther down the funnel. So from install to sign-up, from sign-up to trial, and then to some custom events. And as we start to unlock more budget by bringing more money back in with our subscription product, we're able to get better results. Especially since the iOS changes, which we're all very familiar with, I'm sure, you need to have a bit of a different strategy. You have to get a certain number of installs per day to see post-install events. So you need to be spending at a certain level to be able to optimize for that and so on.
So the very first thing I'd be looking for is, what are my targets? What are my optimization events I'm aiming for? If I'm already optimizing for trial, is there something else that can be a custom event like trial and two content views or trial and something that we can start looking at that will be a really clear indicator for success and for someone going on to purchase and succeed. I actually recently downloaded Duolingo, I mentioned to you before, and I actually got my ATT popup after I'd completed a few lessons, and I was like, "That's really interesting that they're giving me an ATT popup at this point. Like, you don't care about anything before that, then. Okay, I see. I see."
I turned off the ability for apps to even ask. I hadn't thought about that, kind of being in the industry. I need to allow apps to ask me so I can see the prompts at that stage.
Yeah, it's really fascinating.
Because we've done a lot of testing around it, of course, and we were originally going to put the ATT popup at a place where we thought, "Oh, this will interfere with signup or this will interfere with trial. Fewer people will do those actions," but actually, the place where it had the least impact on anything, including conversion to paying, including hard activation rate, was on first launch. So as soon as someone opens the app for the first time, putting it straight there and just having them press and move on from there, which was shocking, but now we're rolling it out everywhere.
The Duolingo thing is interesting, because in a way, it's those users who do finish that second or third lesson that are the ones you most want to find audiences like. And so maybe that's the kind of theory behind that. It's like, "We don't care about tracking the users who fall off before that second lesson. We care about tracking the users who stick around." Yeah, that's fascinating. But then it's also fascinating that on launch wasn't most successful.
But speaking of tracking that, and you've talked already about custom events that you might want to feed back and measuring success and all that stuff... What are your favorite tools for actually measuring those things, and how do you set up those flows to understand what is being successful? When should you ask? Is the ask deterring from signup and onboarding completion and things like that? What do you actually use to do that?
Firstly, we'd be thinking about what is our A/B testing tool and how we're doing our tests, to see how we're implementing that infrastructure. So we often build that ourselves with our developers, build in our own systems, and then measure in Firebase or measure in Mixpanel or something like that. But there are many other tools, and actually, tools like Mixpanel have their own A/B testing solutions now, which is really cool to see that grow over the last few years.
But then on top of that, if we're thinking about where are people coming in from and so on, and we're trying to measure all the way up to the top, I'm usually using one of three MMPs. So I'm usually using AppsFlyer, Adjust, or Singular, and then tracking down into exactly what people are doing, how those flows are working with different cohorts as well. Interestingly, you see very different results. Not surprising, of course, with different cohorts, but yeah.
And then we build funnels, of course. So within any kind of analytics tool, if you're using Amplitude or Mixpanel or something like that, you can build funnels. So you can build step funnels. So this many people signed up, this many people went through Journey A or Journey B, and then these are the actions that happened afterwards, so that we can see what is the incremental uplift of having or not having this feature. So really cool.
When I talk to individual app operators or people working in bigger apps, they have one specific tool, but working with so many people, you end up working with all the tools, so you have to rely on building out the principles and the frameworks, and then just figuring out how to actually make it work in any individual tool. So like, the ATT prompt, you would set that up as just an A/B test, whether it's Firebase or Mixpanel or Optimizely or in-house testing tool. We want to know when is the optimal place to put in the ATT prompt, and then just run a bunch of tests and see how it performs.
Exactly. Yeah, run either A/B/C tests. They don't always need to be A/B, just a control and one option at a time. As long as you're getting enough traffic through, you can run your control against a few different options, and it's something that we do in the product and something we also do from an advertising perspective. We always like to have the control with multiple options, see which ones are getting the best uplift and then start optimizing around that specific one.
We kind of started talking about scaling ad performance and digressed into a lot of different things, but specifically on scaling, how are you experimenting with creatives and trying different things these days, especially in the post-ATT world? What's working? Like, spray and pray and are you getting good measurement, and then what kind of volume do you need to really understand what's going on, versus just low-volume stuff where you can't really understand what's working, what's not?
I think whenever we're working with companies, we say that we only really start with a minimum budget of around 6K a month at the moment. That's the very, very minimum, based on what an average cost per install would be. And this is only to optimize on installs right from the very beginning. And the reason I say that is because to optimize any campaign or any ad set, you need to get 50 conversions in your first seven days of launching. So that's your first hurdle: can I get my 50 conversions in a week? Am I able to hit that point? And if you need 50 conversions in a week, divided by seven, you have to get around seven per day.
So imagine, to begin with, an install is going to cost you around five pounds, or $5, whilst you're getting started. So that's what we say is the minimum entry point that we then try and come down from. That then allows you to run a few different test variants and one prospecting campaign. So the way that we're doing all of our experiments, really, is having tests with one control and then multiple variants, each of which is one element different.
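As a rough sanity check, the arithmetic above can be sketched out; the 50-conversions-a-week learning threshold is the figure cited here, while the $5 CPI and the 30-day month are illustrative assumptions:

```python
# Back-of-envelope minimum-spend check for install-optimized campaigns.
# 50 conversions/week is the learning-phase threshold cited above;
# the $5 CPI is an assumed early-stage cost, not a fixed platform rule.

CONVERSIONS_PER_WEEK = 50
ASSUMED_CPI = 5.00  # USD per install, rough starting estimate

installs_per_day = CONVERSIONS_PER_WEEK / 7     # roughly 7 installs/day
daily_budget = installs_per_day * ASSUMED_CPI   # per ad set
monthly_budget = daily_budget * 30              # per ad set

print(f"{installs_per_day:.1f} installs/day, "
      f"${daily_budget:.0f}/day, ${monthly_budget:.0f}/month per ad set")
```

Running a control plus several variants and a prospecting campaign in parallel multiplies that per-ad-set floor, which is roughly how you arrive at a minimum in the region of the 6K-a-month figure mentioned above.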
So if we're testing a messaging test, we will have the exact same design but with four different messages on there against a control. Which one of these messages will beat our control? Once we find out which message works best, we'll take that one message and put it into four different concepts, like four different design concepts. So that could be an SMS conversation, an iMessage conversation, an iOS Notes fake conversation, a UGC style, or many other different things with this one message, to see: does this message actually hold up on its own in different formats?
We then see which format works best, and start exploring that a bit more with different iterations, with messaging iterations within that as well. So this is really how we get started. And as we're starting to scale, we're repeating this process over and over and over and over and over again until we start finding a lot of different creative concepts that are working and we're upgrading, we're putting these into our always-on campaigns, and then leaving them, trying to touch them as little as possible. We're trying to keep our hands off those ones, and that's half of the job, not trying to interfere too much with those.
So that's really the baseline, and as we start to see success there and we're starting to see success on the app install format, optimizing only for app installs, then we would think about, "Okay, now we've got this portfolio, we have creatives that's working, now we're going to start optimizing for something else." So we would duplicate that campaign and then optimize it for registration, optimize it for trial, optimize it for our custom event that we spoke about earlier, see which one of those performs better.
Are we even able to get to the point where we're able to optimize for post-install events? Because you then need to be getting a certain number of installs per day. So the threshold you need to hit changes from 50 in a week to 120 in a day for your one campaign. It's a big step increase there, 50 in a week to 120 in a day. So when you're doing that, you need to be ready to increase your budgets dramatically without knowing the outcome. So we try and do this very, very gradually with a lot of control, with a lot of experimentation, and a lot of analysis going into it, because we need to make sure that this budget increase is sustainable as well. It's fine to put it up for a test, but can we sustain this budget level?
Because the way it works with SKAdNetwork is you might have this one campaign set up in Meta, and it's just got one campaign ID in Meta, but actually in SKAdNetwork, it has like 10 campaign IDs. And so inside SKAdNetwork, you need each of your campaign IDs to get a certain number of installs to get the postbacks. So then if you go even underneath that average, some of the postbacks will come back empty, and you'll start to miss data and so on.
So even though I'm saying you have to reach 120 or so, it's actually making sure that that is the very, very minimum to be able to get postbacks. So unless you're hitting that, you know you're not going to be seeing data visibility. And so that's when we start making sure that we're also looking in our MMP, our mobile measurement partner, so AppsFlyer or Adjust or Singular or something like this. Because at the moment with some of our companies, we're seeing 60% visibility or so in Meta versus what we see in our MMP. So we actually look at the MMP as our source of truth. Are we seeing the results that we need to see in our MMP? We're judging our success there and then scaling based on that too.
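The MMP-as-source-of-truth check can be sketched roughly like this; `platform_visibility` is a made-up helper and the counts are hypothetical, purely to illustrate the comparison:

```python
# Compare conversions the ad platform reports against what the MMP
# attributes to it; judge success and scaling on the MMP side.
# All numbers are hypothetical.

def platform_visibility(platform_reported: int, mmp_reported: int) -> float:
    """Share of MMP-attributed conversions the ad platform actually shows."""
    return platform_reported / mmp_reported

meta_trials = 60   # trials visible inside Meta (some postbacks came back empty)
mmp_trials = 100   # trials the MMP attributes to Meta campaigns

vis = platform_visibility(meta_trials, mmp_trials)
print(f"Platform visibility: {vis:.0%}")
```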
Yeah. How do you talk to your customers about this scaling process? So early on I imagine, especially with that $6,000 a month, I would imagine a lot of that's pretty investigatory. And are they seeing a return on that, or are you expecting that to be an investment where they're not seeing much of a return, or maybe a 50% return or something? But the idea is, "Okay, you're going to invest, you're going to get some of it back, but we're going to learn and we're going to get closer and closer to that return on ad spend that you're looking for." How do you frame that conversation? And I guess it's different for each app and different apps have different levels of success during this exploration phase.
Absolutely, yeah. I mean, firstly, every app is different. As you say, everyone needs different levels of success, but also everyone has a slightly different strategy. So for some of our companies, at the moment, they're completely free products. So we're not running on iOS, we're running on Android, because we're not trying to make money from our customers. It's completely fine to run on Android, and since we're still safe there, for the time being, from the Privacy Sandbox, we're still running there. We're getting our learnings there, because when you're running on Android at the moment, there aren't these kinds of rules that I've just been talking about like we have for iOS. So there isn't a minimum number of installs that you need to get a day. You still need to get 50 a week to optimize if you're on Meta, or 10 per day if you're on Google, for example, but you don't have to get this 120-plus installs per day.
So if you have a lower budget, it can actually be good to test on a platform like that. But when I'm thinking about how are we talking to our partners about this, we definitely see a big part of what we're doing as education. So we spend a lot of time going over and over this with our partners and making sure that they are as clued-up as we are on it, because we want them to have a full understanding when we're talking about this stuff with them.
Because we're all very technical people, for non-developers. For us, it's really important for them to have an understanding of SKAdNetwork, of all of the IDFA changes, of what the rules are around this, so that when they're looking at the numbers, they are informed. We want them to look at this and say, "Okay, this number of trials might not be accurate in Meta because we actually didn't get enough installs here," and so on. So it's really important for them to know as well. It's their business after all.
And when you do have a subscription, I mean, that's what we mainly focus on here on the Sub Club podcast, and so there is some element of hoping to get a return. Do you see just wildly different performance? And then what would you advise on the lower performance? So let's say you start scaling up your install ads, so you're optimizing for install because you don't have enough volume yet to optimize for deeper funnel, and you spend that 6,000 that month, and you make 3,000 or a couple of thousand or something. Do you immediately jump back into product, or do you look for ways to get that cost down, or other things?
Well, the very first thing that I'd be looking at is everything from a blended perspective. So did I get 3,000 from my paid, or did I get 3,000 overall? If I'm getting 3,000 overall from a blended perspective, I would first be looking at all of my funnel steps. Where are my biggest areas for opportunity? This then goes into product.
So firstly, I'd be looking at everything from CPMs to clicks to how people are installing, how people are registering. Are people actually opening the app after the install? Are they registering? Are they taking out the trial? And then the trial-to-paid conversion. So I would have internal benchmarks of what these can be, what we should be aiming for here for subscription apps with trials and so on.
So then from there we would look at, okay, which area is letting us down? Are CPMs really high? Is our click-through rate really low? If that were the case, or even if our click-to-install was low, I'd definitely be looking at our ads. Where can we improve here? Which ads are letting us down? Has something started to saturate? Are there any new directions that we should be testing? How are people engaging with these ads? And so on. And if everything's looking fine there, then I would be looking into the product.
So whenever we're thinking about installs, we have this kind of true install, which is the number of people that went to the App Store and clicked "Install". But then there's also your first-app-open install, which is what we track in most of our business intelligence tools or in our analytics tools. So actually, I was working with one product and we saw that only 70% of people were doing the first app open after the true install. So we were like, "Okay, this is interesting." Obviously then we have to zoom out: is this something on the App Store page? Is this something on the ads? Is this something in how we're communicating?
What can we do here to push people further? Should we be retargeting people who've installed but not opened, and so on? So there are a few different things that we can do there. But then if people are actually opening your app, I'd be looking at, what's our registration rate? Are people taking out the trial? And then going into the product itself to see, again, where our areas for improvement and biggest opportunities are, and focusing there. And actually, that can sometimes mean pausing your ads till you fix problems.
It does seem like ultimately to get to profitability as a subscription app, you need some pain point you're solving, you need some core product market fit. And so I guess that's what you're saying is, if you start scaling up your ads and it is just not working at all, then maybe the first place to look is that core value prop, and that, do you have some level of product market fit? How do you think about that and how would you measure that? And then how would you start improving that?
So from a product perspective, I would be looking at how people are performing in terms of, when they're coming in, are they doing their first meaningful action? That could be a signup, that could be some kind of order. Every product will be different, but a first meaningful action shows that they're interested in the product. And then we'll have a core action. Imagine you're a content product, you've got lots of different stories on there. So maybe a core action could be clicking on a story and reading it, or listening to a song on Spotify, or booking a night on Airbnb, or something like this. So that's your core action. And there will be this set number of core actions that someone needs to do to go from being soft activated (just starting their activation journey, still seeing what's what, not yet fully trusting the product or having it embedded in their lives) to being hard activated.
So we have to figure out how many of these a user should be doing. Imagine, I was working at one content product and we knew that someone had to listen to six stories to become hard activated. What we did, then, was give everybody 10 free stories to make sure that they would get to that point. So you don't just give six, just in case. Some people might need a bit more or less, because six is the average, and the average is never the truth. So we gave everyone 10 free stories and then waited to see what happened. Obviously we were offering a trial before that, but then we would see that people who actually completed six stories without having taken out a trial would then take out a trial and convert 2X better to paid from that trial than people who hadn't, for example.
So these are the sort of things that we'd be looking at. We'd be looking at, "Okay, where is this activation point?" Think about that six stories. How many times does someone need to do that core action? How many people are getting to that core action? Is 1% of people getting to that core action? Okay, then something is blocking them. What can we do to remove friction there?
So that probably isn't going to be removing trial from your onboarding, because we know that a lot of these trial events come right in their very first journey, but after someone dismisses the trial, what are we doing to make people move along that journey? How are we making it simple? Are we doing autoplay? Are we getting people to just tap in? Are we doing tool tips? Are we giving people rewards? How are we gamifying this initial experience, even if we're not a game, to get people to move along there?
We'd be looking really closely into that area, how many people are getting there, but then also looking at what are all of the steps between that? Because let's say for the six stories, lots of people listen to one, so getting people to listen to one wasn't a problem. It was getting people to listen to the second one. So how do we get people to go onto that second one? And so on. So this is what we'd be doing.
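The step-drop analysis described here can be sketched roughly like this; the counts of users reaching each story are invented for illustration:

```python
# For a repeated core action (here, "stories listened"), find the step
# where the biggest share of users falls off. Counts are invented.

users_reaching = [10_000, 7_000, 3_000, 2_400, 2_200, 2_100, 2_050]
# index i = users who listened to at least i stories (0 through 6)

drops = []
for i in range(1, len(users_reaching)):
    retained = users_reaching[i] / users_reaching[i - 1]
    drops.append((1 - retained, i))
    print(f"story {i}: {retained:.0%} of story-{i-1} listeners continue")

biggest_drop, at_story = max(drops)
print(f"Biggest falloff is going into story {at_story}")
```

With these invented counts the pattern matches the anecdote above: getting people to listen to one story isn't the problem; the big drop is going into the second.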
I think a lot of subscription app folks too often think of the trial start as the core action, but it's interesting how you were so much more focused on that core value prop, is that they need to interact with something that's going to give them value. Starting a free trial is not giving them value. So whether they start the free trial and listen to those six stories, or whether they listen to the six stories and then start the free trial, the goal is to get them to listen to those six stories so they're experiencing the value that you're delivering, so then they stick around through the free trial.
And that was an interesting stat that, in that case, it was 2X performance on the free trial by getting them to listen to the stories before they even start the free trial. That's fascinating.
Absolutely. I would definitely caveat that with, we tried experimenting with moving the placement of the trial, and there was no way we would've removed it after the results we saw from the initial onboarding, because the number of people that go through it on that initial onboarding, even if they're canceling, is so much higher than waiting beyond that, really. So we always give the opportunity to take out a trial earlier on. We don't force it. It's not like a hard, hard sell the whole way through, but we give the opportunity for trial, and then give them more opportunities later after they're a bit warmer.
Once you've gotten folks experiencing that core value and started to understand some level of product market fit and moving forward with people getting value out of the app, how do you then track your ROI and build financial models around this to actually then profitably do it?
Because it's one thing to connect people with the value. We're all in this to make money, to a certain extent. I mean, even if there's a greater purpose in the app, you can't spend $100,000 on ads and make $20,000 and continue doing that. So yeah, how do you build out the ROI and the financial models to actually figure out what's going to work?
Whenever you're starting out with an early product, a lot of what you're doing at first is finger-in-the-air estimation until your initial results come in. So you're going to be looking at industry benchmarks. Actually, in App Store Connect now, you can see the benchmarks for your industry, which is really cool. So I'd be looking at those if I'm building my product right from the very beginning: here are my benchmarks, and these are my expected results.
And then I start seeing what the truth is. And when we start seeing what the truth is, then we go and we start adjusting. And this is also how we prioritize the areas that we need to focus on, because we might be seeing that there's this big drop from install to signup, and by having a slight increase here, we will have a much bigger increase in our bottom of funnel events than something that's a bit further down the funnel.
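The trickle-down idea can be sketched with some made-up funnel numbers (none of these rates come from the episode; they just illustrate why a one-point lift at the top of the funnel moves more absolute users than the same lift at the bottom):

```python
# Illustrative funnel: a one-point lift at the top compounds through
# every later stage, so it produces more absolute paid users than the
# same lift at the bottom. All rates are hypothetical.

clicks = 100_000
rates = {
    "click_to_install": 0.30,
    "install_to_signup": 0.50,
    "signup_to_trial": 0.20,
    "trial_to_paid": 0.40,
}

def paid_users(rates, clicks):
    n = clicks
    for r in rates.values():
        n *= r
    return n

baseline = paid_users(rates, clicks)  # ~1,200 paid users

# One percentage point lift at the top vs. at the bottom of the funnel
top = dict(rates, click_to_install=rates["click_to_install"] + 0.01)
bottom = dict(rates, trial_to_paid=rates["trial_to_paid"] + 0.01)

print(paid_users(top, clicks) - baseline)     # top-of-funnel lift wins
print(paid_users(bottom, clicks) - baseline)
```

With these numbers the top-of-funnel lift adds about 40 paid users versus about 30 for the bottom-of-funnel lift, simply because everyone downstream inherits the bigger pool.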
If we increase our click to install by one percentage point, that will then have a much larger trickle-down effect in the later stage funnel than increasing something like trial to paid or something like that, because there's going to be so many more people coming in right at the top of the funnel. So that's something to bear in mind. But when we're thinking about how our hard activation rates and so on are factoring into our larger financial models, people who are hard active and the people who are reaching, let's say, these six tracks, or continuing on from that, the reason that we are talking about this hard activation rate is that people who get to the six have a much lower drop-off. So people who get to six, like 98% of people who listen to six, listen to seven. 98% of people who listen to seven, listen to eight.
And it was like this, very small drop-offs from event to event. So people who are actually active are going to stick around much, much longer. And when we're thinking about activation, everyone has an average. I worked with one other product where people would go to a bar and then they'd get a free drink. That was how the app worked. That was it. And the average was 21 days between visits to the bar. But you, David, might go to the bar every 35 days, and I might go every 18 days. So for me, if I haven't been to the bar by day 19, I've started to churn. Everyone has their own individual frequency lifecycle. And for you, if you haven't been by day 29, you're still active, because your average is 35 days.
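The bar-app example, each user judged against their own average interval rather than a global one, can be sketched like this (the dates, visit histories, and function names are all my own invention):

```python
from datetime import date, timedelta

# Per-user frequency lifecycle (all visit histories invented):
# a user counts as churning once the gap since their last visit
# exceeds their own average interval, not a global average.

def avg_interval_days(visits):
    """Average days between consecutive visits (visits sorted oldest first)."""
    gaps = [(b - a).days for a, b in zip(visits, visits[1:])]
    return sum(gaps) / len(gaps)

def is_churning(visits, today):
    return (today - visits[-1]).days > avg_interval_days(visits)

today = date(2023, 10, 1)
david = [today - timedelta(days=d) for d in (99, 64, 29)]  # visits every 35 days
me = [today - timedelta(days=d) for d in (55, 37, 19)]     # visits every 18 days

print(is_churning(david, today))  # False: 29 days since last visit, average is 35
print(is_churning(me, today))     # True: 19 days since last visit, average is 18
```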
So when we're thinking about this, this is how our hard activation pool grows. We're wanting to obviously grow the number of people who are becoming soft activated, so the number of installs, the number of true acquisitions that we're getting, but then we also want to grow this hard activation rate, so keeping people active within their lifecycle, within their user frequency.
So this is true of free products as much as subscription products. With subscription products, when we're thinking about our North Star metric, for example, we want some kind of revenue-associated event in there. So our North Star metric wouldn't just be revenue; it would be something like weekly active subscribers, because there's a cadence in there: someone is coming back and doing this event weekly, or maybe monthly, daily, or fortnightly, every two weeks.
So cadence, action, revenue. The cadence is weekly active; the action could be listening or reading or something like that; and then subscribers gives you the revenue event. The reason we do this is so that we're representing our customers as well as our company's success. Then, once we have our North Star metric and all of our metrics in place and we know exactly how we're measuring success, we start thinking about our ROI, our ROAS: what is the lifetime value of the people coming in?
So everything that we're always doing should be with these things in mind. And this is why when we're thinking about what are the areas for the biggest impact, these are the areas for the biggest impact on our North Star metric, and therefore on our revenue, on our investment for the ads that we're doing.
So for that, we're going to be looking at things again from a blended perspective, because your ads are also going to have this halo effect. So once you're running an ad, you see my ad, David, eventually, maybe if you like the product, you're going to tell someone. And so even though that person didn't see the ad, you saw the ad, and so they actually came through the ad indirectly. So there's this halo effect that we get by running ads.
Or also, another example is you've seen an ad a few times and then you've got some kind of message in your head, and then something happens externally. You saw an ad on Monday, and then the following Monday your friend mentioned something and you're like, "Oh, I've remembered this product." Okay, then you go to the app store, you search it, so there's this halo effect on your ads, which might not necessarily be tracked.
So this is why it's very important to look at things from a blended perspective. And when I say blended, this means looking at the spend that you're doing on all of your paid channels. So this is the way to start looking at this. So how much are you spending on Meta? How much are you spending on TikTok, Google? Add all of that together, and then look at how many people have done these kind of actions overall.
So how many installs have you got overall from every channel, including organic? How many trials did you get overall from every channel, including organic? And then how many of these converted to paying? So we take that total sum of money that we've spent and then divide it by the blended actions that we've got, so total number of trials, total number of purchases, so that we can see how these numbers are increasing over time, but how our costs are decreasing, of course. So this will help us measure our halo as well, and look at the impact on blended.
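The blended calculation described here is just total paid spend divided by total actions across every channel, organic included. A minimal sketch with invented figures:

```python
# Blended cost per action: total paid spend across channels divided by
# total actions from every source, organic included. Figures invented.

spend = {"meta": 50_000, "tiktok": 20_000, "google": 30_000}
actions = {
    "install": 40_000,   # paid + organic combined
    "trial": 6_000,
    "purchase": 2_500,
}

total_spend = sum(spend.values())
blended = {f"cost_per_{name}": total_spend / n for name, n in actions.items()}

print(blended)
```

Tracked month over month, the action counts should rise while the blended costs fall, and any halo effect from paid ads shows up here even though it was never attributed to a channel.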
How do you think about optimizing that performance thing? Because if you're looking at blended, are you simultaneously still looking at TikTok specifically in the MMP, looking at Facebook specifically, and are you running incrementality testing where, "Okay, let's spend less on Facebook this week and see how it impacts all of this, the blended numbers. Let's spend more on TikTok this week and see how those blended numbers shift." Is that how you're thinking about it?
Absolutely, yeah. So we start every channel in isolation, really. So we're looking at one channel at a time to begin with. We wouldn't launch five channels at once. We'd launch one, see how this performs, get baselines, and then start introducing other channels once we've started to master one of them and we're getting good results with one.
So with each of the channels, Meta, TikTok, Google, Pinterest, you can use them as direct response channels or as awareness channels. We do incrementality testing with all of the channels, on both the direct response level and the brand awareness level. We don't spend a lot on brand awareness, but what we do is run engagement campaigns: boosting some of the Instagram posts, boosting some of the TikTok posts. The reason we do this is that we're getting such cheap CPMs, cost per thousand impressions, for these.
So a direct response ad, I might be paying somewhere between $7 and $12 CPM, maybe a bit lower, and for the same, similar people to see my awareness ad, I'll be paying maybe $1 to $2 for 1,000 people to see them. So what we try and do is we hit people with our awareness ads, hit them with some of our boosts from our Instagram or Facebook or TikTok, get them to see things a few times as well, and then we're hitting them also with our direct response at the same time. So our more expensive impressions, but these are the ones that are then converting and getting the results.
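Taking the rough CPM figures quoted (around $10 for direct response, $1 to $2 for awareness), the layered approach is cheap to sketch: several awareness touches can still cost less than a single direct-response pass. The audience size and touch counts below are illustrative:

```python
# Layering cheap awareness impressions under more expensive direct
# response ads. CPMs are the rough ranges quoted; everything else invented.

def impression_cost(impressions, cpm):
    """Cost of serving `impressions` at a given cost per 1,000 (CPM)."""
    return impressions / 1_000 * cpm

audience = 100_000
awareness = impression_cost(audience * 3, cpm=1.5)  # three cheap touches each
direct = impression_cost(audience, cpm=10)          # one direct-response pass

print(awareness, direct)  # three awareness touches cost less than one DR pass
```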
So we are looking at our spend holistically, we're grouping our awareness and our direct response spend and looking at how our overall costs from that perspective have come down. And then we're looking at it on a channel by channel basis. And as we start to see some channels' costs going up, we're constantly routing money to other channels. So at the moment, with one of our companies, for example, our Meta costs have gone up dramatically over the summer, but we're still more than 2X profitable on Google, so just funneling money over there at the moment. And then at some point there is just standard variance throughout the year, so we might then end up putting some money back.
But ideally what we're going to do is reach an equilibrium across lots of different channels so that we don't have all our eggs in one basket. We don't want 95% of the budget in one place and 5% somewhere else. It's good to have a meaningful percentage everywhere, so that if things go up and down on one channel, everything else balances it out. And this has been performing pretty well.
Yeah, that's interesting. And you talked about seasonality there. I was just thinking, as we record, it's early October, this is going to come out mid-October. How do you think about Black Friday, the Christmas bubble? It seems like pretty much all advertising starts getting more expensive from, say, Halloween through the maybe second week of the year, once all the New Year's resolutions have been made and all the fitness apps have gone through their budget. How do you think about that seasonality? Do you sometimes pull back? How do you wade through the more expensive campaigns during that time?
Yeah, it's an interesting one to think about at this time of the year, because every year is the same. I've been advertising for 12 years now, and every year it's been the exact same pattern, which is from literally November 1st until literally November 30th, the performance is just terrible.
And I tell all of our companies every year, even when I've been full-time in-house: November will be very expensive, and we should think more about brand awareness and put a bit more budget into it that month. And then, without a doubt, everyone always freaks out in November. Except me, because I know it's going to happen. Not that I'm psychic; I'm just looking at the data. Costs go up in November, so we should be putting more money into boosts or awareness campaigns where we're getting lots of impressions, because we know we're not going to get the sales that month. Everyone's excited to buy physical gifts for themselves, their family members, their partners. This is where e-comm explodes.
And then literally, November 30th, the costs start to go down again. And this has happened every single year, and it's crazy. December 1st will always be a really good day for us, and December will continue that way. And actually, when I've spoken to the Meta teams about this, they call this period, and we call this period, Q5. So December 15th until the beginning of January is like Q5 for spend, because especially from December 15th, that's around when the post stops and the last shipping deadlines pass, so e-comm spending reduces dramatically. So it's a really good opportunity around then for digital product advertising.
And then going into the New Year, if your product can be beneficial for people in any way, you go into this New Year messaging and see really great results there. If you can do a sale, that's amazing. With RevenueCat, you can implement a sale in your product, or you can even run it through your website; there are many different ways. And another thing to leverage at that time of year is gifting, which a lot of people make a lot of money from.
I know, you're actually working with one of my colleagues on something around gifting, so we'll tease that: look out for a collaboration with Hannah, I think a blog post coming up on gifting. But I wanted to get back to something you said there.
So you've actually seen costs decrease December 1st, and then even on the 15th they decrease even more. So starting December 15th is when, as a digital product, because e-commerce spend has dropped dramatically, once people can't get it shipped for Christmas, that's actually a great opportunity, is what you're saying?
Absolutely it is, yeah. All the way until the end of January.
Gotcha. I had it a little backward in my mind, not having spent myself... This is why I like talking to people like you. You've worked with 250 companies, you've been buying ads for 12 years. I've talked to a lot of folks over the years, and I get certain impressions in my mind, and that was one of them. But it's good to know that November is the worst, December starts improving, and then huge opportunity around New Year's.
It kind of makes sense. I mean, you have quarterly budgets that run out, and monthly budgets. It was Ariel, actually. I was having dinner with him in New York recently, and he saw this one app where they would exhaust their spend by noon. They had a daily spend cap, "We're going to spend 10,000" or whatever it was, and the keywords on App Store search ads would be super expensive, and by about noon they would have maxed out all their spending. So with that company, they figured out, "Okay, we don't spend until after noon, and then we ramp up our spend and get way better value." So it's kind of interesting. Seasonality-wise, be really careful in November and don't try to push it, and then in December, expect things to recover.
Yeah, for subscription products. Of course, with different apps, it's a different story. If you've got an e-comm app, it's going to be a bit different. But yeah, for subscription, digital-only product, this is what I've seen every single year. Without a doubt it's been the same.
Well, I think it's a great place to wrap up, seasonality, and coming out in October is perfect timing to have discussed that. Get ready for November being challenging.
But as we wrap up, anything else you wanted to share? I know you're ramping up at Aperture and taking on new clients, so anything you want to share there?
Yeah, firstly, thank you so much for having me. It's been such a pleasure talking to you about this. At Aperture, just so everyone knows a little bit about it, we're a full-service growth agency, and we work with companies that are making positive change in the world. At the moment we're working with apps that, for example, help women and nonbinary people with financial education; apps that help parents build better, more secure relationships with their children or deal with postpartum depression; apps that help reduce food waste; apps saving trees around the world; and therapy products that have been growing really well.
So we love working with any kind of products like this, and we're also growing the team. So of course, if anyone's interested in working on a selection of products that are helping the world and you're a bit sick of selling your soul to the dark side sometimes, then you can reach out to me on LinkedIn or just check us out generally. But yeah. Thank you, David.
Yeah, awesome. Thanks so much for joining me. This was a fascinating conversation. I love being able to go so deep. We don't often go quite so deep on the techniques and tactics, and this is how you actually look at your ad spend and stuff like that. So thanks for being willing to go super deep and nerd out on this stuff.
Of course. I love it. Thank you, and have a beautiful day.
Thanks so much for listening. If you have a minute, please leave a review in your favorite podcast player. You can also stop by chat.subclub.com to join our private community.