On the podcast: using signal engineering to optimize ad spend, how AI is changing creative testing, and why most people should avoid app-to-web… for now.
Top Takeaways:
🧠 AI’s hidden edge is in analysis — Most teams use AI to crank out creatives but ignore its potential for testing and pattern detection.
🔍 Clean signals, better ads — Ad platforms can only optimize to the quality of the events you send them.
⚖️ Keep exploring while you exploit — Don’t rely on one winning ad concept for too long. Keep scaling what works, but test new creative ideas in parallel to avoid fatigue.
🌐 App-to-web isn’t for everyone — Without brand trust, resources, and deep flow optimization, it’s more likely to hurt than help.
💳 Hybrid monetization is a late-stage lever — Mixing models can boost LTV, but complexity means it’s best after subscription optimization slows.
About Thomas Petit:
👨‍💻 Independent app growth consultant helping subscription apps like Lingokids, Deezer, and Mojo.
📈 Thomas is passionate about helping subscription apps optimize their ad spend and increase ROI through smarter testing.
💡 “The whole idea of signal engineering and optimization of the data that you're sending back is: send the network something better, and they're gonna do a better job. They are doing a better job; it's you who are not doing yours.”
Follow us on X:
David Barnard - @drbarnard
Jacob Eiting - @jeiting
RevenueCat - @RevenueCat
SubClub - @SubClubHQ
Episode Highlights:
[1:21] Testing smarter: How AI may be changing the game for testing ads.
[13:09] Untangling the web: App-to-web can work for some, but it’s not a slam dunk.
[21:19] Hedge your bets: The benefits of moving away from subscription-only and embracing hybrid monetization strategies.
[26:50] Going global: When and why to consider experimenting with hybrid monetization outside the US.
[31:15] Signal vs. noise: The signal engineering framework for sending the most valuable user interaction data to ad platforms.
[44:47] Multi-platform: Optimizing your data and event mapping for multiple ad networks.
[53:01] Low-hanging fruit: Scoring easy wins with signal engineering.
[1:08:04] Hands-off: Why ad networks likely won’t (and maybe shouldn’t?) implement built-in signal engineering tools for app marketers.
[1:14:05] Going deep: Advanced signal engineering techniques.
[1:26:09] Volume vs. quality: Why sending fewer events to ad networks may actually yield better results.
David Barnard:
Welcome to the Sub Club Podcast, a show dedicated to the best practices for building and growing app businesses. We sit down with the entrepreneurs, investors, and builders behind the most successful apps in the world to learn from their successes and failures. Sub Club is brought to you by RevenueCat. Thousands of the world's best apps trust RevenueCat to power in-app purchases, manage customers, and grow revenue across iOS, Android, and the web. You can learn more at revenuecat.com. Let's get into the show.
Hello. I'm your host, David Barnard, and my guest today is Thomas Petit, an independent consultant focused on subscription app growth. Over the past decade, Thomas has worked with hundreds of clients and helped manage 9 digits in ad spend with a positive return. On the podcast, I talked with Thomas about using signal engineering to optimize ad spend, how AI is changing creative testing, and why most people should avoid app to web for now. Hey, Thomas, thanks so much for joining me on the podcast again.
Thomas Petit:
Hi, David. Always happy to come back. It's one of those times of the year I look forward to, chatting with you and browsing some new topics around apps. So yeah, I'm happy to be here.
David Barnard:
Yeah. I think we're going to kind of make this an annual thing. Just the last 3 years in a row, I think we've done it in the late summer, early fall, so we'll just make this every summer kind of thing.
Thomas Petit:
All right, that's a date.
David Barnard:
The main thing I wanted to talk to you about today was signal engineering. We've done a webinar on it, you've talked about it quite a bit, people have been blogging about it. But before we get there, there are a few kind of hot topics that I thought people would really enjoy hearing from you about. So let's do those first, and then we'll dig into signal engineering. And maybe this is going to be a 2 or 3-hour podcast, so let's see how it goes. One of the things I wanted to talk to you about is AI and AI creative generation. There's been a lot of noise lately about, "Oh, Meta is just going to take all your assets and figure everything out and just 100% automate all your ads. At some point in the next 18 months, it's going to generate human-looking people, it's going to write your taglines for you, it's going to just do everything. It's going to customize it to every individual, and it's just going to be this utopia of ads."
That's the take I've been hearing from a lot of folks. But I saw a tweet from you recently with a bit of a contrarian spin on that, and so I'd love to hear how you think about using AI creative generation just in the entire ad creation process, and what to watch out for and where you think things are headed.
Thomas Petit:
There's a lot of focus on using AI specifically for this, for producing creatives, and I think that's actually a mistake in the sense that it's hiding the fact that we can use AI for a number of other things that are not creative production. Of course, it's natural because of the cost of production, but also of ideation, and how we can make multiple variants much more easily than before. It's so obvious that 95% of the discussion is there, but my first answer would be maybe we haven't done that much elsewhere. And I think creative analysis in particular is where there's a bit of a gap. I'm seeing a couple of tools coming up and some people using them, but it's still a rare conversation. A lot of the focus is on creative production, and that's kind of expected in a way.
I think if we look back since GPT came out early on, obviously, it was mostly text. And then we got to Midjourney and DALL-E. But static pictures, at least for apps, are a relatively small share of the ads that are actually thrown out there. And text is important, especially on Google and elsewhere. But for paid social, for Meta, for TikTok and for the gaming networks, text is a very small to insignificant part. Obviously, the acceleration has mostly been over the last 6 to 12 months, which is when video creation became really possible. Until a year ago (we're in summer '25, so until last summer), the quality of AI production for video was not good enough for ad creation. And then, as with everything AI, it accelerated so massively that I don't think I know a single team that is not using any AI tool for ad production, which doesn't mean that 100% of ads are AI generated yet. And I don't think we'll ever reach 100% unless one platform decides that there's no other choice.
But I don't think it's going to be everything. I think it's going to be a bit of a mix, and it's probably going to be the majority, but it doesn't mean it's going to be the entirety, especially around creation. You can use it not just to generate the picture itself, but also to generate the briefings, multiple variants, and other uses. Within creative production, there are many layers of how you use AI. It could be to create variants, it could be to find new ideas. And then sometimes you'd use several tools in a row, one to generate ideas, one to classify them and try them, and then later one to produce them. There's one that's very exciting. I saw the message last week and I thought, "Oh, crazy." I remember writing something about it 6 years ago, so pre-GPT, so that was kind of funny, which was like, "Oh."
It's the first time ever there's a tool that enables you to actually test a creative without spending a single dollar. There's basically this pool of artificial humans (they're not humans, they're just AI generated) that are used to classify whether an ad has a chance to become a winner without spending any money, which is the dream. Because on one side, we need to produce a lot of stuff to find out what works, and the stats out there say, roughly, you have to produce 50 creatives to have one that is actually a winner, and this winner is going to take 90-plus percent of the ad spend. That's the normal winner-takes-all mechanic that happens in ads. So obviously, because you need 50 to find one, and sometimes it's more, teams are accelerating, creating 100 a week, thousands every month, to find a couple of winners.
That's the obvious use. But having something that tells you, without having to do creative testing, which one has the most chance to win, is something that is still very nascent and very pioneering. And I don't think it's going to fully replace proper creative testing where you put a little bit of money behind it, but it's definitely one area where I'm looking forward to acceleration in the next year. We're not there yet, but it's something I can see coming. In analysis as well, I think it's going to accelerate, especially around patterns: which hooks have always worked and which haven't. And it's sort of all entangled into accelerating because of these different layers; one accelerates the next one and amplifies, let's say, the next one. It's all very exciting. As for Meta, and Mark Zuckerberg in particular, making the declaration that his goal is that there are no more assets at all, that it's all generated, I think that was deliberately provocative about the direction it's taking.
I don't think we're going to get there in the next 1 to 3 years, and certainly not 18 months, but it's certainly the direction it's taking. Google has done a lot of it early on with UAC and is still doing a lot of it. Meta has done some of it and is clearly now accelerating. And I can see why this declaration... Zuckerberg said it in a way to prepare people that it's coming. Not necessarily that the ultimate form they talk about is what's going to be there in 18 months, but it's pretty logical in terms of personalizing parts of the creative. You could find, for example, a script for a creative, something that fairly resonates with a lot of your audience. But then if you imagine that the text can be customized, that the character in the ad can be customized, and somebody would see somebody looking more like them or a little bit older, then you can cater to even more of the audience. That makes a lot of sense.
I'm seeing this happening today in a half-manual way, which is like, "Oh, I've got something that wins. Let's produce 50 of them." One with an older guy and one with a Black woman and one with a 20-year-old and one with whatever, and you just try them. But I can see how, right now, it's more a marketer's idea of, "Oh, I've got something that wins. Let's make variants out of it," turning into the platform automating more of this.
David Barnard:
Yeah.
Thomas Petit:
In the grand scheme of things, it does make sense that we take that direction. I don't think it's going to be ultimately the only way the creatives are produced either.
David Barnard:
Where do you see folks failing with this right now, or what do you think are the things to watch out for? You kind of alluded to it, that these tools that pick the winners ahead of time maybe shouldn't be relied on 100%. But what other things are you seeing teams do, or customers and clients you work with suggest, where you're like, "I don't think that's the right direction yet"?
Thomas Petit:
It's hard to pinpoint one in particular. One thing that is always a tricky balance, a tricky trade-off, is when you find winners. Obviously they're so much better than the rest that you want to double down on them, so you create a lot of variants. Ultimately, sometimes I look at some accounts and I'm like, "Okay, but all you've got there is one concept." You've got a few ads, but it's more of the same. And sooner or later, this is going to go down and you have to prepare a lot more. So instead of exploring many more concepts, where the majority will fail, the trade-off is between "How much do I double down on winners?" and "How much do I prepare what's next?", doing something that has more likelihood of failing in the short term but more likelihood of being required in the long term. That's very hard to balance.
Obviously, everybody is short-term focused. Especially in UA, we look at the results from yesterday, and last quarter already doesn't exist anymore. So yeah, long-term preparation of what's next, and audience expansion, especially when you spend at scale. If the account is relatively small, you can tap the same concept over and over, and with a few variants you can push it for months. But everything is accelerating, even in ads. And I think this is especially visible on TikTok, where the trends go so much faster, around music, around the specific hooks, around the first seconds, even around the ad concepts themselves. It's really accelerating. So you can't just double down on stuff, you've got to keep creating. I think AI obviously enables us to go faster, like, "Oh, this works. I'm going to produce a hundred versions of it. And even if in 2 weeks this concept is dead, I'll have used it in the meantime."
So it's even more useful for cases like TikTok, where fatigue is much faster than, for example, on YouTube, where you probably still want to craft fewer ads but go deeper. The other one, because your question was, "Where do you see teams failing?": there's something that I don't want to accept, but I still have to, which is that we don't understand why winning ads win. So we have this tendency... Eric Seufert is the extremist about it: "Don't even try to understand why it works. This is just a waste of time." I'm not that far, but in a way, I've seen teams fail a lot on... I don't know. You'd use an AI tool and say, "Oh look, those are my last 200 creatives, those three are the best. Everything else fails, tell me why." And here, you will get a result and you will get some patterns, but I don't think those patterns push you to the next one where you can actually replicate it so easily.
The very specific details of the psychology of how people react, the kind of audience that is there, are really hard to explain. So I'm still doing it, and I still understand why people would do it, because for iteration it brings you to the next one; you want to learn from what you've done to keep progressing, and it's natural, and sometimes maybe it helps. But honestly, it's a little bit overstated, in the sense that it is very hard to understand why winning ads win. Our human criteria are not very good, but actually the LLMs, they're not very good either. They can spot a couple of patterns and all, but it's not going to make the next winning ad that much easier either. It's sad, but in a way, it's more like, "Okay, let's mass produce and see what sticks," rather than, "Let's try to really explain why it works." Even with ads from competitors, you'd be trying to understand why they work, whether you're doing it with a team and a brainstorm or through AI, or probably best doing both. Still, the outcome is slim and pretty bad.
David Barnard:
Well, the next question I want to ask is what tools you're currently seeing success with. But that'll turn this into a 5-hour podcast. So I was thinking, as you were responding to that last question, that we should just do a webinar: you, maybe Nathan Hudson, maybe Marcus Burke, get a few people who are really into this and do a whole webinar specifically on what tools they're working with. I think that's going to be changing so quickly that maybe doing it on the podcast isn't as good as just doing a webinar and some blog posts that we can update over time. But the next big topic I wanted to bring up was app to web, the Epic lawsuit, the injunction. If you're listening to this podcast, surely you know, so we're not going to go over it all. But what are you seeing apps do currently, and are you seeing things work in sending people from the app to the web?
And then as a precursor, you've been on vacation so you probably haven't had a chance to listen, but in the podcast episode just prior to this one, I talked to Zumba, and they are seeing success sending people to the web, and they've been doing a lot of experimentation. And then as I've shared on other episodes, I'm also hearing from a ton of folks that they're scared to do it because they don't want Apple to stop featuring them. They don't want to get in a bad position with Apple. So there seems to be a lot of variance going on right now with whether you should even try sending people out of the app to the web, who you should send out of the app to the web, what experiments you should do. So give us a quick take: August 2025, where are we at with app to web, and where do you see things going?
Thomas Petit:
Sure. If I were to super simplify: if you've got a big brand, or if you've got something that people already want from the get-go, it's much more likely that app to web is going to work. So if you're Spotify, it's obvious. Even Zumba: I think the app is pretty recent, but it's the official app of Zumba, and people already know what it is. There is a trust factor in there. If you're an indie developer and you launched a brandless app last month, it's very likely that app to web is a fool's errand, in the sense that, one, you can't do it all because you've got a small team or it's just you. Two, it's less likely to work because you don't have this recognition, this brand, and people are less likely to go. And three, the analysis is actually quite tricky, so you need minimum resources to execute this. Minimum resources not just in designing a great flow, which is a requirement, because a poor flow will make it not work where it could have, but in actually analyzing properly.
And comparing the two is not as easy as it looks: things work differently, renewals work differently, free trials work very differently, and you need to deal with fraud. The amount of small detail... And one thing that is really interesting, from the folks for whom it does work to send people outside of the app, is that typically they don't sell the same plans. What works on an app-to-web flow is much closer to what you see in a web-to-app flow, where the first touch point is on the web and then you charge and only then do they go to the app, than what you see through IAPs. So if you port the exact same flow and your analysis isn't deep enough to understand all the small details that are going to change, you could actually be misfiring and believe you're doing something great when it's not, or vice versa, or actually missing out on a... It's not as easy as it looks.
So my big advice is that if you are big and you have the resources and you've got the brand, sure, go try it right now. If you're very small and it's just you, don't even try, it's not worth it. You're only paying 15%, and with the complexity, you're probably better off investing your efforts elsewhere. But obviously, the majority of people will land somewhere in between. At which point do we make the choice to actually experiment with this? Curiously, the teams I work closest with decided not to even test. I was like, "Why?" They were like, "Because prioritization is saying no to great ideas, not saying no to stupid ideas." We can't do it all, and we've got the roadmap for the next 3 to 6 months where we've got very big projects going on. We're going to wait for the dust to settle, because then there was the other appeal, and we didn't know how Apple would react. Now, I think the dust is starting to settle and we see more or less other examples. And Apple might change things later, but for now, they haven't taken massive retaliation steps.
Sometimes there's a first-mover advantage. For example, the ones who were advertising on TikTok early, obviously it was a huge advantage. I'd rather have people from teams that experiment a lot do it, and then listen to what they do and shortcut to what they think is working, because there's a lot to test there. I mean, the different flows, the pricing, how you prioritize the Apple payment versus not, and so on. There are a lot of factors. So the teams I'm working with, they're like, "Yeah, we're looking at it. We're talking to people about it, but we're going to push it into the next stage of the roadmap, where we think we can execute faster, and where we also have a little bit more certainty about what Apple wants or doesn't want." And I had a couple of folks testing, and actually, in one case, it's not a big app and it does make a difference for them. So even against my own advice, they went for it and it does work for them.
They just crossed out of the small business program, so now they're paying 30%. So for them, obviously, it was a little bit below 1 million... When you're already paying 15%, I don't think anybody should really try it unless you are very web-based from the beginning, like you had a business on the web and you already have a lot of tooling and a lot of experience and you can go. But otherwise, it's for later phases. All in all, it's great that there is the option. I am very happy that we can do more. Some people are going to go IAP only, some people are going to go web payment only, and the most sophisticated folks will be able to determine which user, which flow, which moment, which price is going to work, which obviously requires a lot more work and is not as easy as it looks.
I think the only lie here is to tell people, "Oh, it's easy. Send your people over there and you save 30% in fees," which is a complete lie. But for all the rest, I'm happy the option exists and I'm happy to see different folks trying it. And I've seen experiments go both ways, so I was really curious, trying to understand, "Hmm, why did it work in this case and not in that one?" Even regardless of the brand power that you have, the brand awareness, the vertical also plays a big role. In some verticals the purchase is a lot more instinctive, and... yeah, the IAP is great in that case. So there are verticals and verticals, there are brands and brands, there's your... Yeah, I think it's great to have the options. But for an indie it's probably a bad idea, and for Spotify it's the best news ever.
David Barnard:
Yeah. No, I think that's a great summary and a good kind of take for folks right now, if you have the resources, you have the team. I thought that point was really great that if you're going to do it, you can't really half-ass it. You really need to think about it as an initiative that, "We're going to send people to the web, but then we're going to iterate on that flow. We're going to experiment, we're going to add steps, we're going to remove steps." And that was a conversation with Zumba that we had. So for folks listening to podcasts who hadn't listened to that episode, if you're curious to get a deep dive, they really put a lot of resources in it. And to Thomas's point, they've been selling on the web for decades. They started 20-something, 24 years ago selling VHS tapes on infomercials.
They already had all the infrastructure, they had the teams, they had everything, and they had the ability as a team to dedicate those resources. And then they did a lot of testing, and one of the things that worked for them was actually doing package selection inside the app and only sending to the web just for the payment piece. So the person had already selected the package, made a commitment, and then the web part is only payment, it's only checkout and it's defaulted to Apple Pay. And that's where they saw the big wins start to happen. So if you are going to try, that's probably one of the first experiments to do is to do package selection in the app and then make the web just the checkout and then make that as quick, seamless, least confusing, no options, like you just pay and then get back to the app. But I think that was a great summary, so appreciate your thoughts there.
The last thing before we get to signal engineering is hybrid monetization. This is something we've been talking about for years, but I feel like it's still, for non-game apps and then for... There are certain categories where hybrid monetization has worked and there are some best practices starting to be established, but I feel like it's going to be one of those trends that's going to continue to accelerate. And AI is a good example. To be honest, I'm so frustrated with Claude. I use Claude a ton in my day-to-day work now, mostly as a brainstorming partner and to get summaries of podcasts I listen to and things like that.
Lately, I've been hitting their usage limits, and there's no... I don't just get a freaking button: give me 20 more usages for 5 bucks or whatever. Those kinds of things, especially for AI apps or where you are resource-constrained, having some kind of way to get past those limitations, I think it's just an obvious thing for a lot of these companies to start doing. But I think there are so many other applications. So what are you seeing work, and where do you see things continuing to go with figuring out hybrid monetization in apps?
Thomas Petit:
I agree, it's been pretty slow. I've been advocating for not doing only subscription for quite a while. And most of the apps I work with, it's only subscription and that's it. I could see a few successes here and there for very different cases. Some with ads for the people who don't subscribe, but that only works when you retain, and a lot of subscription apps just don't retain their free users at all. Their fake freemium is: you pay or you're going to churn anyway. So obviously, that's for specific cases. IAPs, mostly for app sales and so on. And then a lot of e-com and affiliate sales and so on, which makes sense for very specific cases as well, but not for everybody. So I think you have to pick the model. But overall, I'm looking back at the last 2 years and I'm a little bit surprised that it's been so slow.
And I think one of the reasons has been the complexity around it. Same as the app-to-web stuff, it can't really be done easily. There's no obvious way of, oh, I'm going to put ads in the app and it's going to increase my conversion, or I'm going to put IAPs on top of the subscription. No, it's really tricky, and I think that's the reason it's been so slow. And actually, all the apps that have some kind of AI usage are finally accelerating this trend because they've got no other choice. And it's true that the big models like Claude and ChatGPT and so on, they mostly decided to go for tiered subscriptions. So, I don't know, ChatGPT, you pay $20 a month, and if you did [inaudible 00:24:46], well, you start paying 200 a month or whatever.
But for a lot of apps that are usage-based, this is a model, but actually the IAP makes a lot of sense, where you can buy credits on top. I'm working on a project right now where they have both tiered subscriptions and credits on top, so that no user faces the frustration you just described: maybe you didn't want to fully upgrade to the super pro package, but then for one use case, something you're working on today, you would just add five or 10 bucks of credits and finish it. And they've got a very smart team of several people that have been thinking it through for months. And it's really hard to design well: to avoid cannibalization, to not frustrate users, to make something that is fair in terms of pricing for users, but also for the developer. It's very tricky.
So I think it's mostly down to this complexity that adoption has been fairly small. It's a natural case for AI just because there are costs, so obviously you need to find a way not to lose money. But the rest has been slow because of complexity, and also because it's sort of the last percent of grind that you get. There's so much lower-hanging fruit around your subscription, and I'm glad that today it's a normal conversation to talk about pricing and paywall design and how you package the free trial and this and that. Three, four years ago, this was still a little bit niche, and it surprised me how people didn't realize they could increase their revenue massively by iterating a lot.
Because there's been such an acceleration there and the overall level has gone up so fast, I think hybrid monetization is eventually still going to come. It's just taking a very long time because of complexity. And for many people, that's okay; not everybody should jump into it. At some point you'll see the improvement curve of those experiments on pricing slowing down, and that's probably an indicator that maybe it's time to try something different. But if you're still making big wins on pricing and packaging just with subscription, keep going at it until it slows down.
David Barnard:
And funny enough, I also use ChatGPT quite a bit, but I'm on the Pro plan with ChatGPT because I use deep research so much. So ChatGPT I actually pay for out of my personal business, because I use it so much more for personal stuff and for my side project business, even though I do use it a ton for RevenueCat work. But Claude is actually billed through RevenueCat as an employee expense, so I probably should just ask for the higher tier, and that's kind of what they're counting on. To your point, if you're not already on a tiered subscription model, that's the first step before going to hybrid. But one place I do think it makes sense, and I've heard this from Tammy Tau at Google (she spoke last year at App Growth Annual and did a podcast mini-sode for the State of Subscription Apps report): one of the things Google has been pushing really heavily, because they see the numbers and they see it succeeding so well, is working on hybrid monetization outside the US.
And so maybe that's another angle: if things are going really well in the US and you still have low-hanging fruit there, but you have the bandwidth, then experimenting with hybrid monetization outside the US may be what it takes to start seeing more wins there, offering people non-recurring purchases, non-recurring subscriptions, these one-time purchases that are a little easier to bite off, especially in countries where subscriptions aren't as widely accepted and used. So that'd be another vector to think about when deciding whether or not you should be experimenting with hybrid monetization.
Thomas Petit:
That made me think differently about it. And once you hear it, yeah, it's pretty natural: where there's a very high, not purchasing power, but habit of paying for apps, obviously the reward is probably further away. But on Android, where it's a little bit harder to convert, and outside of the US, and when you combine the two, it's much harder. Obviously the conversion rates on international Android versus US iOS are extremely different. So it doesn't make sense to start there. But then it joins my point that it increases complexity, in the sense of: okay, on US iOS I need to do the tiered subscription, and then for Latin America I need to add these payment options, because that's what works there since they've got fewer credit cards, especially in Brazil and Argentina, for example, where international credit cards are a bit limited.
And then for another region, it's going to be something else that works. So obviously, for a big company where you can dedicate people, you can zoom in on an area and find out what works for them, that's great. But I have a lot of apps I work with that operate in 200 countries. And even though there is uplift to be found, the amount of effort that needs to be deployed to find all these individual uplifts left and right is huge. So it really depends on the type of organization you're in and the focus you have. I know a lot of smaller folks that are like, yeah, I operate everywhere, but 95% of my effort is going to the US. I understand why, for them, hybrid monetization is something that's on the roadmap for 2031 or something.
David Barnard:
Yeah, no, it makes a ton of sense. Funny enough, all three topics we've covered so far come back to prioritization: they're all good ideas, but you need to make sure you've got the resources in place and can put in the effort to actually make it work, because it's not just going to be this automatic big win. And that may be another key too: if your core product isn't working in the US, you need to make it work first. It's not some magic unlock, unless for some reason the app has taken off in some other country or there's a use case outside the US where it makes a ton of sense.
But other than that, it's not some magic unlock where, if it's not working in the US, if you're not able to charge a subscription on iOS and people aren't willing to pay, you're going to do this outside the US and suddenly see all the success. You've got to have the core product, the value, the activation, the onboarding flows and everything else pretty well dialed in, and it should hopefully be working in the US, before you even think about trying this around the world.
Thomas Petit:
It's very hard to pioneer anything, really. It doesn't mean we shouldn't try, but with these models, sometimes it's natural. Look at dating, for example: it's a vertical where there was always a lot more hybrid monetization than elsewhere, because there were ads and there were specific IAPs and so on. And probably, if you're in this vertical, you should try earlier than others, because there's a lot of success that has been found and users are more used to it, plus-
David Barnard:
And you can follow the patterns that have already been established.
Thomas Petit:
Exactly. You can follow some patterns. If nobody's doing it in your vertical, there might be a reason if you're the first one. I mean, you could create something new and find success, but it's going to cost so much more to find out that yeah, sometimes it's just not natural.
David Barnard:
Yeah. All right. Well, let's jump into Signal Engineering next. So you emailed me in the spring and you were like, hey David, I've been toying with this concept, and I don't even remember if you named it in that first email or if you came up with a name later, but I feel like I'm hearing the term signal engineering all over the place now. But I guess take me back to what is signal engineering and then... I mean, I know people have been doing versions of this for a long time, but what's the origin of you in the spring thinking that this was kind of the next hot topic or the next place that a user acquisition team should be putting some effort into?
Thomas Petit:
So what we're talking about here is the optimization of what we call the signal: the data we send back to an ad network, to an advertising platform, for it to optimize the ads from. So when you tell Facebook, I want installs, they're going to send installs, but that's probably not going to get you very far. And beyond the install, it can be many things. So some people optimize for free trials, some people optimize for retention or whatever. And this has very much always existed, but in my opinion it's been overlooked. So it's nothing new, but I kind of felt that the discussion over the last couple of years was 90% creative, and it's obvious that creative is by far one of the biggest levers, if not the biggest lever, in what's going to make ad spend or paid media work or not. But it doesn't mean it's the only one.
So my intention here was: I want to remind people that there are several factors at play, and that if you're sending terrible data back to Facebook and Google, you can be great at creative and you're still going to have a problem. And the other realization was, let's say there are three states of signal engineering. One is that the data you're sending back is completely messed up, and that's actually a quite frequent case, in small and in bigger organizations. And honestly, if that's the case, you're probably better off fixing it before iterating on anything else. If there's something wrong with the signals that you're sending back, it's your top priority to at least fix it, to understand what you're passing. You don't have to get super sophisticated, but at least you need to be sure of what you're sending back and have events that are passing correctly and so on.
It's a must-have. It's not a nice-to-have to experiment on. No, if it's broken, you've got to fix it. And then there's the majority of cases, people who don't have broken data but haven't thought much through it. Like, okay, let's optimize for free trials and we'll see. Or they have something set up. I have one particular case where they actually had a somewhat, not super sophisticated, but a little bit more elaborate scheme, where they were sending back different signals when people were taking different types of subscriptions. So they have a mix of user type and weekly/yearly, and they just have a multiplier: okay, if it's the weekly subscription from a normal user, send 1, and if it's a yearly subscription from a business user, send 5x back to the platform.
But then they hard-coded it, and they iterated for a year and a half on the pricing and so on. And when I came in, I realized that this hard coding, which did make a lot of sense back when they did it, was actually no longer representative of the value of the users coming in. And then it became, again, a priority to fix, in the sense of getting closer to the real value. And the last phase is when you get really sophisticated. So I went around a little bit, but: if it's broken, you really need to fix it, find a place where you're comfortable, and dedicate your attention to creative and diversifying and other things.
But at some point, as you grow, becoming a little bit more sophisticated about it and sending something better than just all your free trials combined can probably unlock value. And most people look to unlock value everywhere, so it doesn't mean you need to be super big with a super big brand to do it. I've spoken to a few people, and I have one case that will be on the RevenueCat blog at some point when I finish wrapping it up, where a fairly young app found tremendous uplift by changing the events they were sending. They were working on creative in parallel, but this accelerated what they could do. So, three states: broken, normal, and sophisticated.
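To make the multiplier scheme Thomas describes concrete, here is a minimal sketch of weighting the conversion value reported to an ad platform by plan and user type. The event name, multipliers, and the send_event wrapper are illustrative assumptions, not the client's actual setup.

```python
# Illustrative sketch only: plan/user-type multipliers and event names are
# invented for this example, not taken from the client setup described above.

# Baseline value assigned to a "normal user on a weekly plan" conversion.
BASE_VALUE_USD = 10.0

# Hard-coded multipliers of the kind Thomas describes. The risk he points out:
# these numbers go stale as pricing and user mix change, so they need revisiting.
VALUE_MULTIPLIERS = {
    ("normal", "weekly"): 1.0,
    ("normal", "yearly"): 2.5,
    ("business", "weekly"): 3.0,
    ("business", "yearly"): 5.0,   # "send 5x back to the platform"
}

def engineered_value(user_type: str, plan: str) -> float:
    """Return the value to report to the ad platform for this conversion."""
    multiplier = VALUE_MULTIPLIERS.get((user_type, plan), 1.0)
    return BASE_VALUE_USD * multiplier

def report_purchase(send_event, user_type: str, plan: str) -> None:
    """send_event is whatever wrapper you use around the Meta/Google/TikTok SDK
    or server-to-server API; it is a placeholder here."""
    send_event(
        name="purchase",
        value=engineered_value(user_type, plan),
        currency="USD",
    )

if __name__ == "__main__":
    # Example: a yearly business subscriber is reported at 5x the baseline.
    report_purchase(lambda **kw: print(kw), user_type="business", plan="yearly")
```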
David Barnard:
Yeah. I wanted to step back to the broken part, because, I don't know, maybe this is just my vantage point inside RevenueCat: we end up talking to so many apps where the data is just completely broken, so my estimate is much higher, just because that's who we talk to, I guess. Tell me a little bit more, because I'm sure there are going to be people listening to the podcast who hear you say that and think, I don't feel like I can trust my data. So what do you mean by that? Where do you see data being broken, and you already kind of shared one example, but what are the steps, or the tools, or the basics you should have in place for monitoring, to make sure things don't break even when you think they're working? How do you figure out whether your data is good or not?
Thomas Petit:
It's always hard to prevent future breaks. But when somebody tells me, hey, go check my ad account, tell me what you think, or I do an audit (I don't do so many of them, but sometimes, maybe it's a friend, maybe it's a portfolio company, so I'd still do it), and they say, oh, here's the ads manager, I say, I don't care about the ads manager. Show me the events manager. Facebook calls it the events manager; in Google, it's called goals. That's the first place I look. I don't even look at the ad account, and the first thing I'm going to check is: does the number of conversions, whatever those conversions are (let's say install, free trial, and paid subscription, to simplify), that the platform reports receiving in the events manager match, more or less, what we have internally? And very often, it's not the case.
I really don't care about a 5% discrepancy here; it's never going to match a hundred percent, and that's okay. There are a number of reasons why it's not a hundred percent; it doesn't matter. But in so many cases I'm seeing 30% or 50%, and that is going to be a major problem that I need to fix before looking at anything else. So I take Facebook as an example because it's a big one: I'm going to go to the events manager and say, okay, show me the last 30 days or the last seven days. How many installs did you see? How many free trials did you see in total, without attribution? Not the ones Facebook thinks were coming from the campaigns. Not the ones Google attributes to this particular paid activity, but what we're actually sending them, because they report it there. And I'm going to compare it with something like [inaudible 00:38:58] or Amplitude or Mixpanel or whatever they have internally, before we send it.
If I'm seeing more than a 10% discrepancy here, it's going to be a problem. Those tools are not particularly sophisticated, but they're not particularly easy to understand either. So it's sometimes tricky to match the dates, the geographies, the platforms properly, and to debug whether the data I'm seeing there is actually the data that's supposed to be there. I remember making mistakes myself sometimes. In one case I was like, why am I seeing double the events on Facebook? How is it even possible that there are twice as many users as we actually have? It was just that there were two sources that Meta managed to deduplicate, but that doesn't show up like that in the events manager. So the first step is: what the platform is receiving, is it matching, more or less, what we think is happening? Because if that's not the case, well, after that they're going to try attributing it to a campaign, and there, there's a whole lot of other things that can happen.
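A rough illustration of that first check, comparing what the platform's events manager reports receiving against internal analytics totals. The counts are invented, and the 10% threshold is just the rule of thumb mentioned above.

```python
# Illustrative sanity check: compare what the ad platform's events manager
# reports receiving with what your internal analytics (Amplitude, Mixpanel,
# your own warehouse, etc.) counted over the same window and geography.
# The figures below are made up.

DISCREPANCY_THRESHOLD = 0.10  # a gap above ~10% is worth debugging

internal_counts = {"install": 12_400, "start_trial": 1_870, "purchase": 610}
events_manager_counts = {"install": 11_950, "start_trial": 1_310, "purchase": 598}

def check_event_totals(internal: dict, platform: dict, threshold: float) -> None:
    """Print the gap per event and flag anything above the threshold."""
    for event, internal_total in internal.items():
        platform_total = platform.get(event, 0)
        gap = abs(internal_total - platform_total) / max(internal_total, 1)
        status = "OK" if gap <= threshold else "INVESTIGATE"
        print(f"{event:12s} internal={internal_total:6d} platform={platform_total:6d} "
              f"gap={gap:5.1%} -> {status}")

check_event_totals(internal_counts, events_manager_counts, DISCREPANCY_THRESHOLD)
```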
Among those other things: one is that you don't trust attribution, which I would understand. Another is that among the signals you've sent, maybe some didn't have the right parameters. I remember making a change and going, oh, only 20% of the conversions are passing. A hundred percent is passing to the events manager, but only 20% of what we think is happening is being reported at the ads level. And actually, Meta couldn't attribute it under AEM, and it was only reporting the double opt-in, like the IDFA people, until we fixed it. So yeah, step one is: is the total actually matching? And I'm saying it because if it's not, nothing else can work, and it's kind of an easy check to do. There can be many other problems, it's just one, but it's so common and so blocking of anything else that it's always the first thing I check. What drives me nuts is that when it doesn't work, there's such a long list of reasons it might not be working. But that's something you then have to fix, yes or yes, one way or another.
That could be changing the source, that could be sending different parameters, or changing things entirely, whatever. It's also because of the default configuration of, for example, the Meta SDK or... Basically, you can put in the Meta SDK and tell Meta, okay, just map my events on your own, and you have... The install, for example, is the hardest to debug because it's not an event, it's something that comes prepackaged. So when it doesn't work, you've got fewer levers to pull. But the paid conversion, for example, can be mapped. Something that happens very often is that a direct purchase without a free trial, a free trial that converts to paid, and a renewal would all be mapped to purchase together.
How can you actually read anything when you've got this mixed bag of events? I sometimes do use a mixed bag of events for optimization, but only as long as I understand what's actually passing through. I'm going to map all of those individually to make sure, okay, I've [inaudible 00:41:52] that they work properly; now I can start getting a little bit smarter about what I'm sending. And the default configuration is not as easy as it looks. I think it's one of those cases where making a little bit of effort pays off, because it's not something you have to work on every other day. It's something you need to fix, but then you can run on it for six, 12 months. You can never completely forget about it, for the reasons I mentioned at the beginning, but it's something you fix once in a while. And also because it gives you options.
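As a sketch of that individual mapping, here is one way to declare direct purchases, trial starts, trial conversions, and renewals as distinct events rather than one lumped "purchase". The event names and the send_event wrapper are assumptions for illustration.

```python
# Illustrative event mapping: instead of letting direct purchases, trial
# conversions, and renewals all collapse into one "purchase" event, declare
# them separately so each can be verified and optimized on its own.
# Event names and the send_event wrapper are assumptions for this sketch.

EVENT_MAP = {
    "direct_purchase": "purchase",        # paid immediately, no trial
    "trial_started": "start_trial",
    "trial_converted": "subscribe",       # trial converted to paid
    "renewal": "recurring_payment",       # usually not what you bid on
}

def report(send_event, internal_event: str, **params) -> None:
    """Map an internal event to the name the ad platform will see."""
    platform_event = EVENT_MAP.get(internal_event)
    if platform_event is None:
        return  # deliberately not forwarded
    send_event(name=platform_event, **params)

if __name__ == "__main__":
    report(lambda **kw: print(kw), "trial_converted", value=59.99, currency="USD")
    report(lambda **kw: print(kw), "renewal", value=59.99, currency="USD")
```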
I was working with an account where they had massive success scaling on a couple of different platforms, a really impressive trajectory, went from zero to eight digits in a couple of years, a beautiful trajectory. And then at some point things stopped working as well, and we tried everything we could, creative, fixing events and so on. One of the problems was that they had put all their eggs in one basket, and that basket worked great for a long time, but when it broke they had no alternative. And one of the reasons to have your signals properly handled is also to have optionality, in the sense that if what works best for you is optimizing for free trials, so be it. Or optimizing for value, for revenue optimization, so be it. That's great, double down on it and spend 90% there. But you probably also want to have the other options available, because it's audience expansion: I would run an event optimization campaign towards free trials in parallel with a revenue optimization campaign, and Facebook would bring different users to the app.
So for scaling, but also for the many, many cases where things start to break, which is much more often than people think. I'd rather have different options in my toolbox, basically. Maybe not for today; if something works, sure, let's go all in on it, but have this for later. Also to prep, to experiment a little bit on the side. So one of the reasons I started talking about this topic was, one, I had two cases in my portfolio where, in one, clearly the events were broken, so we needed to fix them. And then we thought, if we're going to fix it, let's fix it and make it smarter at the same time. And then I had another client who worked with a third party; they were already doing some kind of optimization on their own, actually already a fairly sophisticated system. And I thought, what if there's something I missed? Let's go and ask some of the best out there about what they're doing, and we actually handed it over to them.
We're running two tests; one is looking good, the other one not. So it's not completely conclusive, let's say, but they definitely did stuff that we wouldn't have done on our own and went one step beyond. So these three things, fixing, getting smarter, and going a step beyond, all came at a moment where, at the same time, I felt that 95% of the debate on paid media was about creative production. I was like, sure, it should be [inaudible 00:44:45], but that's not the only topic. So I like to put on my contrarian hat and say, I'm going to talk about something else. Suddenly I also noticed a lot more talk about it. So either really good timing, or people just got inspired and worked on it. I think it was a little bit of both, but I'm happy the topic has picked up a little bit. It doesn't mean that if you've got amazing signal engineering but only one ad in your account, it's going to work. That's not going to work.
David Barnard:
Yeah. Before we move on to more specifics around how to do the signal engineering, I did want to ask one more question on the data, again, just because these are kind of questions I think people would be shouting into their car as they're driving around listening to the podcast. How do you think about this when you're scaling on multiple networks? So you're doing Google and Meta and TikTok. Should each of those events managers be seeing 100% of your conversions, 100% of your free trials, 100% of everything? Is that what you're looking for in each individual event manager or are you looking for Google to be seeing just a percentage of the events and Meta to be seeing a percentage of the events?
Thomas Petit:
If you're early in this process, you probably want them all to receive a hundred percent. You don't filter anything, you send a hundred percent of these different events, yes. It's almost normal that the same signal is not going to be the best for every platform. And it's possible that you end up optimizing for revenue on Meta, for a qualified trial on Google, and for a filter of direct purchases and trial conversions on TikTok, to say something. What's going to work is not going to be universal. Every network has different types of users, but also different contexts, and different ways of optimizing, of valuing the signals you send them. And on top of that, they don't have the same source. You might be using an MMP and sending to all of them from there, but it's also very possible that you're sending to Google via Firebase and to Facebook via the Facebook SDK and so on.
So the easy answer is yes, send a hundred percent of the events to all the different platforms. In reality, one, they're going to be different sources, so they're different by nature, and different things are going to work. So, for example, you might have to filter out younger-audience trials for TikTok, but you don't necessarily need to do it for Meta, because it works differently. What I do is map these different events. So let's say I've got my free trials; I send them all, but maybe I'm never using them in any campaign. And then, as soon as I can verify the quality of the data that's being passed, I've got my qualified free trials, or filtered free trials, let's say, being sent to the different networks. But I map them; when I say map, I mean I declare it as a different event. So maybe all the free trials are going to be called start trial, but then the qualified trial is going to be called whatever it's going to be. Maybe purchase, maybe subscribe, maybe something else.
Actually, I wish the platforms offered more options in the number of events they propose. It's probably because it's not so common that the list is limited. And if we remember, two or three years ago, Facebook didn't even have start trial and subscribe. It was all purchase, from gaming and e-com.
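A minimal sketch of the per-network mapping idea: every network still receives the raw trial event, while a separately named "qualified trial" event applies network-specific filters. The filters, names, and thresholds are illustrative (the under-25 TikTok filter is just the example mentioned above).

```python
# Illustrative per-network mapping: every network receives the raw start_trial
# event, and a separate "qualified trial" event is defined with
# network-specific filters. Names and rules are assumptions for this sketch.

from typing import Callable, Dict

NetworkFilter = Callable[[dict], bool]

QUALIFIED_TRIAL_FILTERS: Dict[str, NetworkFilter] = {
    "meta":   lambda user: True,                      # no extra filter needed here
    "google": lambda user: user["engaged_onboarding"],
    "tiktok": lambda user: user["age"] >= 25,         # filter out under-25 trials
}

def report_trial(send_event, network: str, user: dict) -> None:
    # 1) Every network gets the raw trial event.
    send_event(network=network, name="start_trial")
    # 2) The qualified trial is mapped to a distinct event name, per network.
    is_qualified = QUALIFIED_TRIAL_FILTERS[network]
    if is_qualified(user):
        send_event(network=network, name="qualified_trial")

if __name__ == "__main__":
    user = {"age": 22, "engaged_onboarding": True}
    for network in ("meta", "google", "tiktok"):
        report_trial(lambda **kw: print(kw), network, user)
```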
David Barnard:
So the baseline, then, as you start to think about signal engineering: the first step is that 100% of your data is going to all the different networks, and you just verify that you're within that 5 to 10% threshold across all of them. Then you move on to starting to filter the events, and by doing so, you have a better sense of what numbers should be showing up on each platform, because you can determine, I'm not sending free trials from users under 25 to TikTok. And so you can do the math of, okay, we know what it looked like when I was sending 100%, and now I'm only sending 75%, so I have an idea of what should be arriving at that particular ad network. And that's how you do your data validation once you start doing the signal engineering, right?
Thomas Petit:
Yeah, correct. All of this... well, I remember doing this in 2017, and it didn't come from a brilliant idea in my mind. It just came from the observation that my free trial rates on Google Ads were terrible. We would have, let's say, 40% trial-to-paid conversion on organic, on Meta small differences, and then we'd have 15% on Google. It was like, "What are we doing wrong here?" Okay, for Google, we're going to identify some of the reasons those trials are not converting, and we're going to send them a different signal to try to fix it. It's not that we were trying to be smarter just to do signal engineering; we started from a problem initially. Maybe if your free trial quality coming from paid is great, you don't need to go further for now. Maybe you could do even better, but I started from a problem. And then, seeing that this was moving the needle, I said, "Okay, now I can get a little bit smarter."
One of the next steps: historically, all these platforms offer install optimization, which honestly you should never use. There are a couple of edge cases where you might, but in most cases it's a very bad idea. There's event optimization, which can be different events, they can be filtered and so on, and we can talk a little bit more about that, and there's revenue optimization. And the problem with revenue optimization is that, coming from the gaming world and the e-commerce world, this was just revenue. It was just the revenue generated by the sale. But in subscription, because we've got the free trial and because we've got the renewals, the revenue that is actually happening that day is not what we need to send back to the platform. So I started getting smarter and said, "Okay, I'm going to engineer the revenue that I'm sending."
So it's not a predicted LTV, it's a predicted value at day X. Maybe it's month two; personally, I like to use month 13, because I've got the first yearly renewal and I think it's a better comparison between monthly and yearly. But whatever, it doesn't matter what I like. The real revenue, the one that passes by default in the SDK, is not the one I want to be sending, because the free trial conversion is going to come too late, because it's going to overvalue the yearly over the monthly, but maybe... Or let's take the weekly: sometimes the LTV of a weekly is great because the price is so much higher. If somebody renews for six months on a weekly plan, the revenue we want to send to the platform is not the real day-one one. Typically, platforms are going to over-index on yearly because all the revenue comes on day one, but maybe your weekly plan has a higher LTV when people renew a lot.
So you're just telling the platform something that is wrong about what you're looking for. And the idea here is not to trick anyone. I remember talking to Andre about signal engineering, and they summarized it as, "Oh, Thomas is manipulating the ad platform." I'm like, "No, I'm manipulating data to send the ad platform the closest value of the users they're sending me. I'm not trying to lie here. On the contrary, I'm trying to fix something that is broken, in the sense that the platform is receiving a value that is not representative of my business value. The default configuration is not representative." One thing I'm working on right now, for example, is that users who are not converting within 24 hours, but are demonstrating a very high likelihood of retaining for a long time as a freemium user or converting later, we're going to assign a small value to them.
So not the 10 per month or 90 per year or whatever, but maybe one... because I'm like, "Hmm, I like this user, and if I send zero, the zero event, the zero value, the platform is going to conclude that this is the kind of user I don't want." And maybe because they create the network effect, they create DAU, they create late conversions that are very hard for the network to assimilate, I'm going to have to send something that shows them: hey, this user really is zero, but that one is actually worth a little something. The whole scheme is that I want to tell Meta, I want to tell Google, and I want to tell TikTok what's valuable for me, because it's not natural for them... They just receive a very primitive signal. I heard from Meta last week that they were working on optimizing towards different events at the same time, for example different tiers of subscription and things like that.
And I was very happy, because it's something I always wanted to do, and something you can almost only do by engineering the signal and engineering the revenue behind it. So we're getting there. I just want to send something that is representative of value so that the platform can do the work they do, the best way they can. And they're really, really strong at predicting which users are going to complete the action I'm telling them to optimize for. But if the action I'm telling them is wrong, you're running on one leg, basically. And that's where the whole idea of signal engineering and optimization of the data that you're sending back comes from: send the network something better and they're going to do a better job, because they are doing a better job. It's you who are not doing yours.
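A small sketch of the engineered-revenue idea: reporting a predicted value at a fixed horizon (month 13, in the preference described above) instead of day-one revenue, plus a small non-zero value for promising non-converters. All the numbers are invented for the example.

```python
# Illustrative "engineered revenue": instead of the revenue booked on day one,
# send a predicted value at a fixed horizon so weekly, monthly, and yearly plans
# are compared fairly, and give promising non-converters a small non-zero value.
# Every number below is made up for the sketch.

from typing import Optional

PREDICTED_M13_VALUE = {
    "weekly":  95.0,   # higher churn but higher price; renewals add up
    "monthly": 70.0,
    "yearly":  80.0,   # big day-one revenue, but not proportionally more valuable
}

PROMISING_FREE_USER_VALUE = 1.0  # "a little something" instead of zero

def value_to_report(plan: Optional[str], likely_to_retain: bool) -> float:
    """Value to send to the ad platform for this user."""
    if plan is not None:
        return PREDICTED_M13_VALUE[plan]
    # No purchase within the window, but strong engagement signals.
    return PROMISING_FREE_USER_VALUE if likely_to_retain else 0.0

if __name__ == "__main__":
    print(value_to_report("yearly", likely_to_retain=True))   # 80.0
    print(value_to_report(None, likely_to_retain=True))       # 1.0
    print(value_to_report(None, likely_to_retain=False))      # 0.0
```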
David Barnard:
Nice. So you've given a lot of what I would guess are more advanced signal engineering practices, but let's step back. Once you get your data right, or believe you have your data right, what's the low-hanging fruit in signal engineering? What are the first few steps folks should take to start changing the default mappings of how these SDKs operate? What's the first step?
Thomas Petit:
So we already mentioned the first step, which is monitoring that the events are passing properly, which is not a given. The second step for me would be going to what we call qualified trials or filtered trials. Typically that's when you see that your free trial conversion rates are either not as good as on other networks or not as good as they used to be, and also because you don't need huge engineering around revenue prediction and so on to do this. So basically make a regression: okay, let's analyze these free trials that are not converting. Sometimes it's a combination of what I call hard factors, which are things like the device itself, maybe the language, not something the user has told me, but something that is in the hardware itself. It's well known that the most recent devices convert much better, and so maybe I'm going to filter out all these very old iPhones.
Actually, it's funny: when you look at specific devices, the very latest devices on Android, the Google Pixel and the latest Samsung Galaxy and a couple of others, convert just as well as an iPhone. They only represent 5 to 10% of the market, and the rest have a terrible conversion rate. So you could try to weight those, and Facebook has been working on this side of signal engineering, because there are very few other levers left. Facebook has finally taken out of beta what we called the bid multiplier for a very long time, and I can tell Facebook: when you see one of those Google Pixels, assign 20% more than any other user, because I know they're going to convert more easily, renew longer and so on. And when you see one of those old Android 7 devices or whatever, don't even show them the ad. But very often your choice is binary. Let's take age, because in subscription, age is a very common factor for people entering a free trial and not converting.
For a very long time, the solution was: okay, let's stop advertising to people below 25, and then I don't have this problem anymore. I was like, but I'm actually missing out on something: there's a lot of inventory there. Part of what I'm doing... my app fits 15 to 25 year olds; it's just that I have this free trial issue in that group. I just want to factor it in. I don't want to exclude everybody below 25, because I'm limiting my options, so this is just a way to do it differently. Somebody asked me recently, "Oh, but now that I have the bid multiplier, I can tell Facebook that the sub-25s are 30% less valuable. I don't need to do all this filtering of trials." I prefer to hammer from both sides personally, but sure, it's a potential replacement; it's just not available everywhere. The low-hanging fruit is probably going to be qualified trials, because it's such a common factor, but it doesn't mean it's for everybody. And that was the hard factor; the other one is the self factor, and the self factor is something the user does.
The most common is a question at onboarding that somebody answers and that filters: maybe they declare this interest and not that one. I have a mental health app where, when people say they have a high level of anxiety, that's so much more valuable than when they say something else, and so we factored in this question. The next level, the very sophisticated level, would be to actually add questions at onboarding that create this variance, because that's what you're looking for. So the low-hanging fruit is looking at a regression of the value based on questions like this. And on revenue, for example, I was working with somebody who was super committed: "Yeah, I need to do something very sophisticated with the revenue I send to the ad platform. I've got a whole team of expert data analysts and engineers so we can send predictive LTV back to Facebook and so on."
Yeah, let's look at your revenue. Actually, you've got zero variance. You've got a binary case where almost everybody ends up generating value in the same range. In this case, there's no need to overkill it. But if you notice that you've got extreme variance, then it's probably a case of skipping the qualified trial and going straight to making these revenue buckets, because you've got a bunch of users who are going to consume a $1 IAP, a bunch of users who are going to take the $20 subscription per month, and then you've got the pro user with the $200 subscription, I don't even know what it was, but a [inaudible 00:58:33] subscription. If you've got extreme variance in revenue, I think the low-hanging fruit is to deploy the resources [inaudible 00:58:40] because the qualified trials are not going to be enough; that's just step one. So it depends on the problem you have, or the problems you don't have. If you don't have variance, don't bother with it. If you don't have a free trial conversion problem, don't bother filtering your trials.
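As a rough illustration of that diagnostic, here is a small Python sketch that checks how spread out per-user value is before deciding whether revenue buckets are worth the effort. The cohorts, the coefficient-of-variation metric, and the 1.0 threshold are illustrative assumptions, not a rule from the episode.

```python
# Quick diagnostic before investing in revenue-level signal engineering:
# how much variance is there in realized (or predicted) user value?
from statistics import mean, pstdev


def value_spread(user_values: list[float]) -> float:
    """Coefficient of variation of per-user value (0 = everyone identical)."""
    values = [v for v in user_values if v > 0]
    if not values:
        return 0.0
    return pstdev(values) / mean(values)


narrow = [9.99] * 80 + [12.99] * 20              # nearly everyone pays the same
wide = [1.0] * 60 + [20.0] * 30 + [200.0] * 10   # $1 IAP buyers vs. pro subscribers

for name, cohort in [("narrow", narrow), ("wide", wide)]:
    cv = value_spread(cohort)
    plan = "revenue buckets worth it" if cv > 1.0 else "qualified trials are enough"
    print(f"{name}: spread={cv:.2f} -> {plan}")
```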
David Barnard:
Let's dig into exactly how you do this. I liked your example: if you're a mental health app and you have an onboarding question that says, "What are your goals in using this mental health app?", and one of the goals is "calm my anxiety" while another is "I just want to be healthier," some really broad, generic goal, you see that the people with that broad, generic goal have much lower value, and the ones who have a specific problem to solve, like anxiety, have much higher value. How do you actually instrument this? Is this inside the app, where when you map the events, you're not actually sending the events? And do you have to prevent the SDKs from just hoovering up all the events so that you can send the specific event you want to send? How do you actually do this?
Thomas Petit:
Sure, I'll answer, but first a word of caution: filtering out, excluding people, can work, and I've done it a number of times, but it's a bit of a last resort. If I can find another way, I prefer not to fully exclude these people, because they might generate less value, but less is not zero. So excluding them is something I try to avoid in the first place. When I look at, let's say, an onboarding question where the answers are "high anxiety," "I want to be healthier," or "I just want to sleep better," I'm going to look: if one answer has extremely low value, I'm going to try to exclude it. If it's only a little bit less valuable, I'm going to try to find something else. Either I let it pass and deal with it, or I move to revenue optimization, or I try to find whether it correlates with a particular demographic so I can use the bid multipliers, or I try something else.
Exclusion is for when I can't do it any other way; then I'm going to exclude. Let's say I need to filter out some people. Basically, some people in the onboarding are just telling you, "Oh, I'm just browsing around. I don't really have the intent." That's basically what they're telling you, and those I'm probably going to want to exclude. In this case, either you've got the platform SDK, or typically you're sending it through an MMP, more rarely through a custom API. With the custom API, you send whatever you want, so you filter it yourself and send it, but it's the least common case. Through the MMP, what I'm doing is asking my engineers to add a new event. I'm not going to filter the existing event; the free trial is going to remain a free trial.
If there is a free trial and the answer to this question is this, then send it to Adjust, AppsFlyer, or Singular as "subscribed" or whatever, and then Facebook is going to start receiving it and I'm going to have it. It works the same in the Facebook SDK: you can keep the default configuration, but then you can set up your own events. I'm not talking about custom events; it's just that you can force it, but that's done on the engineering side. That's something that requires opening the code and actually doing it. I know marketers are getting smarter with code these days. And this is one reason I called it signal engineering: there's engineering behind it, in the sense that I'm tweaking the system a little bit, but also because very often you do need an engineer. It's an interesting dynamic how few developers are actually interested in all things marketing.
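For illustration, here is a minimal Python sketch of that mapping decision: keep the raw trial event, and only forward a separate "qualified trial" event when the onboarding answer (and, say, the device generation) qualifies. The qualifying answers, the device cutoff, the event names, and the `send_to_mmp` function are all hypothetical; the real delivery would go through your MMP's SDK or server-to-server API, whose actual calls are not shown here.

```python
# Sketch of the "qualified trial" mapping. The decision of which event to
# forward lives in your own code; `send_to_mmp` is a hypothetical placeholder,
# not a real AppsFlyer/Adjust/Singular call.

QUALIFYING_ANSWERS = {"high_anxiety", "sleep_better"}  # hypothetical onboarding answers


def send_to_mmp(event_name: str, properties: dict) -> None:
    print(f"forwarding {event_name} -> {properties}")  # stand-in for the real SDK/S2S call


def on_trial_started(onboarding_answer: str, device_generation: int) -> None:
    # Always keep the raw trial event for your own analytics.
    send_to_mmp("trial_started", {"answer": onboarding_answer})

    # Only forward the event the ad network optimizes on when the trial looks
    # qualified: a high-intent answer on a reasonably recent device.
    if onboarding_answer in QUALIFYING_ANSWERS and device_generation >= 12:
        send_to_mmp("qualified_trial", {"answer": onboarding_answer})


on_trial_started("high_anxiety", device_generation=14)   # forwards qualified_trial too
on_trial_started("just_browsing", device_generation=9)   # raw trial only
```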
There's always a little bit of tension in this ticket, because we marketers are terrible at explaining our requests and properly defining the requirements, so it turns into back and forth. But also, I don't know a lot of engineers who love tweaking the Facebook SDK or the MMP SDK; that's very rare. I've got a few buddies now that I can rely on, but it took many years to get to this point, and it's actually very easy to execute. The fact that the marketer who's requesting it and the developer who's actually making the change are not speaking the same language at all makes it look very complicated. Even now I'm surprised sometimes, in a couple of these conversations, when I'm just answering, "Oh, I have no idea how you do that, but I know it's possible. Here in the documentation it says you can do it.
Here I've described what I want. It's your job to deal with it, not mine," and it always gets done. My technical knowledge hits a limit pretty fast when it gets there, but with an MMP SDK it's not very complicated either, once you figure it out. It can get more complicated later. For example, for value optimization, one thing that happened is that it doesn't go through an SDK but through an API that sends the information back to the SDK. I'm like, okay, so I've got this combination of... we could hard code it. We could say, okay, if this plan is taken on this device by someone who answered this particular question at onboarding, then the value is 56, and if it's the other combination, it's 23, and we could have done it that way.
But then every time I need to update, and that happens constantly because we change plans, currencies, values change and so on, I would need to have the code touched and then another release. So if I know I'm going to do a lot of signal engineering later, what I do instead is make an API call: "Okay, I've got all these parameters. What's the value right now?" "Oh, 56." And next week it's going to be 68, and I don't have anyone touch the code anywhere; I can change it dynamically. One particular case where you do want this is when you operate in countries where the currency tends to fluctuate a lot. Turkey is a very big one, for example: if you hard code the value there, you're dead. But that's sophisticated and it can break, because then you've got the SDK talking to an API. It's not necessarily where you want to start, but eventually it does make a big difference.
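Here's a rough sketch of what that dynamic lookup could look like, in Python, assuming the `requests` library. The endpoint URL, its parameters, and the fallback table are hypothetical; the point is that the reported value comes from a service you can update without shipping a new release.

```python
# Sketch of fetching the value to report from an internal endpoint instead of
# hard-coding it, so plan or currency changes don't require a code change.
import requests

VALUE_SERVICE_URL = "https://example.internal/signal-value"  # hypothetical endpoint
FALLBACK_VALUES = {("yearly", "US"): 55.0, ("yearly", "TR"): 18.0}  # last known good


def lookup_value(plan: str, country: str, onboarding_answer: str) -> float:
    try:
        resp = requests.get(
            VALUE_SERVICE_URL,
            params={"plan": plan, "country": country, "answer": onboarding_answer},
            timeout=2,
        )
        resp.raise_for_status()
        return float(resp.json()["value"])
    except Exception:
        # Endpoint down or slow: fall back to the last shipped defaults rather
        # than sending nothing (or a stale hard-coded number) to the network.
        return FALLBACK_VALUES.get((plan, country), 0.0)
```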
David Barnard:
It's a great insight, too, that as marketers, something to be cognizant of and to work toward is a better understanding, at the code level, of what's actually going on, so that you can work more collaboratively with the engineering team to successfully deploy these things, and so you understand what's happening under the hood well enough to troubleshoot when it's not working. And to your point, the best implementation is one that's flexible rather than hard coded, versus writing a bunch of hard-coded stuff into the app where every time you want to make any little change you have to go talk to the engineering team, and when something breaks you've got to push an app update. So are there any specifics around building that out that you've seen work really well? With certain clients you work with, do you have tooling in place that makes this a lot easier?
Thomas Petit:
When I ask the developers, they tell me, "Dude, this is a very simple API." There's one call, it replies with the value, it looks good. Once they've done it, they're like, "Oh, what you asked for was actually super simple. Why did it take us three months to get there?" I say, "Because the explanation was wrong, so it was also my problem." It was basically a communication problem. I haven't used many tools around this, mostly internal ones. And then you have to adjust depending on whether you're sending to the platform SDK, like Firebase or Meta, or via an MMP, or even via [inaudible 01:06:02] CAD and so on, but the rest was mostly internal. I'm working now with a company called Voyantis, who do this filtering on their own and then send it back themselves, so I discovered a few new tricks.
There are very few people working on this. I haven't heard of dedicated tools; sometimes you might use a tool made for something else in the middle, but technologically it's actually fairly simple. It's just that it doesn't always work; there's always a little bit of a bug somewhere. But in terms of code, it's not very complicated. For me, the most interesting part is not so much the execution: there are issues, it's regular work, but there's no breakthrough there. It's not that coding it better makes the signal better; it's really how you define the signal that makes the difference. And all these questions around onboarding, segmenting users, and creating buckets of value are really interesting, because it's a crossing of a lot of data analysis, and there's always new stuff we discover. There are so many factors, and I like to play around with it across different apps.
I'll tell you one, for example: we noticed, with somebody I was working with, that the completion time of the onboarding was a decisive factor in value. And we were like, okay, those people who complete it in less than a minute, and there were 30 screens... probably they click next, next, next, next, they start the free trial and they cancel. Then I started bringing this to other apps, and they're like, "Dude, I checked, and actually my fastest onboarding completions are my best users, because they already know what they want." So it's not necessarily universal, and this criterion is not a great one, but there are a couple that usually are very big. So age, as we mentioned. The thing around goals is very often a big one.
And a lot of apps have different types of users: they'd have pure consumer users, full B2C, but then they'd have small teams, a lot of solopreneurs, people with a Shopify shop, or influencers who are really a one-person business, and there are a lot of them. If you manage to identify these small-business, one-person-business solopreneurs, they typically have a completely different value. Of course, if you're improving their business, because their posts perform better or their videos are higher quality or whatever, their price sensitivity, because it's their business, is obviously very different from somebody who's just trying to make their photo look better. And this is one of the most common things. So yeah: age, goals, and this typology of users, however you define it, between consumer, prosumer, real businesses, and big businesses. This one is a very big one.
David Barnard:
Yeah. I know you talk to the MMPs a lot and have some limited insight into some of these ad networks. Is this something you think the MMPs and ad networks could do better? Should AppsFlyer and Adjust and Singular be building these kinds of signal engineering tools into their products to make this easier? And you already mentioned the bid multiplier and things like that that Meta is doing. Do you think these kinds of things are going to get easier and easier over time as the value is recognized, and both the MMPs and the ad networks make this kind of stuff easier?
And then, just to complicate the question a little more: how much of this will ultimately get solved as Meta specifically, but all the other major ad networks too, use more and more AI, and machine learning, really; it's machine learning they've been doing for a long time, but we'll just call it AI. How much of this is just going to get better and better by default as the platforms are able to figure these things out with better machine learning and more sophisticated tooling on their end?
Thomas Petit:
So there are more or less three questions. I'm going to answer them in a different order than they came in. As for the platforms offering more options to put a weight on each value, like the bid multiplier on Facebook, which is now called value-based rules (even though those two things are not the same, but it doesn't matter): I think we're not going to see a lot of it. It's not the direction it's taking. The direction it's taking is, "Hands off, marketers. Let us do it. We're doing it better than you ever will," which I tend to disagree with. Not that I want it all manual, like it was 10 years ago. All the machine learning that has been deployed toward conversion optimization by Meta and Google has been insanely successful, and we don't want to go back.
We would be much worse off without it, and you can see that this is where the smaller platforms have a problem, because the effort to deploy this costs the same for a small platform as for a big one, but the reward is not the same. So smaller platforms tend to be a lot more primitive, and it plays against them. It ends up that we marketers concentrate all our money in the same baskets, because those are much more efficient, and they don't want to give us more control. Facebook actually does it a little bit, but for all the others, the direction it's taking is actually less of it. So I'm not expecting to see more of it.
And that's why I need to do more of it on my side, because the platforms give me fewer and fewer levers to pull, so I'm going to play on the lever that's left: they can't prevent me from deciding what I'm sending them, and they do want this signal anyway. So this is where it's still going to get deployed. That was one question. Two, could the MMPs help there? I think they could, by making it a little easier, also because they see patterns. There are only a few MMPs and they have a very large number of clients, so they do see patterns, and they could communicate not as, "Oh, Playrix is doing this and Tinder is doing that," but more like, "Oh, in this vertical it's very common to filter trials," or, "In this vertical, be very careful about how you send your ad revenue," or whatever. So one, on the educational side, and I don't think that's been addressed much, and two, in actually making it more of a default. The problem here is that everybody has their own, and typically an onboarding question is not something you can have as a default in any SDK. It's going to be the actual developer who needs to declare, "Okay, I've got this question." It's never going to be the same across two apps. So it's kind of hard for them to deploy and scale. I think it should be part of the educational package of, "Oh, you can do this, this, this."
I think they haven't done it much because it's so unique to every app, but also because from where they sit it's kind of like, "Yeah, as soon as you send us an event, we're going to send it to the platform. That's my job, and we're already doing it. As soon as you send it to me, I'm going to process it nicely." But maybe they could do a little bit more.
And the last question was: is AI going to replace this on the platform side? In part, yes, and it's already what they're doing, but they've done as much as they could, and they've gone very, very far. The last piece, in my opinion, is on our side. Because it's so unique, and because I don't want the platform to deal with my data the exact same way they deal with the average of my sector, I want it to be as close as possible to my own business, and sometimes even to my own internal goals, which change over time.
I'll tell you one example. I have one particular case where we used to optimize towards a predicted LTV, and then suddenly we realized that cash flow was more important, and that the user who brings the maximum LTV-to-CAC or return after five years is not the same one who brings it after three months. We're like, "Okay, right now it's more important that we maximize short-term payback. All right, let's change it." That is not something the platform is going to decide for me; I have to decide it for myself. But also because every configuration is a little bit unique. That's it.
I believe there will be AI and machine learning, but to decide on my side, not on the platform side. That could come from Amplitude or Mixpanel. That could come from RevenueCat, who says, "Look, we've got access to all your subscribers, and we've detected that 20% of them are very low value and 20% are extremely high value, and we're going to package it into, here are three events: if you want to use them, use them; if you don't, don't." So actually here, I don't think it's going to be Meta doing the job, or AppLovin or whoever. It's a little bit hard for the MMPs, though it could be them, but I think it's more likely to come from product analytics and subscription data tools like RevenueCat. Actually, it's more of a request: it would be very nice to have, and I think you're in a better place to do it than the MMPs.
David Barnard:
Right.
Thomas Petit:
The data you have around revenue is much more solid.
David Barnard:
Yeah, that makes a ton of sense.
Thomas Petit:
I came back to you fast here, huh?
David Barnard:
Yeah. So I did want to get into the more advanced side of things. We've kind of stepped through this: step one, make sure your data is good; step two, pick off some of the low-hanging fruit. And then you said step three is to really dive deep and do some of this more advanced stuff. You've sprinkled a lot of the advanced stuff in along the way, but I wanted to give you an opportunity to really nerd out and go deep on what these more advanced signal engineering processes look like.
Thomas Petit:
In my opinion, as soon as you get a little bit sophisticated, it's going to be done at the revenue level. You're going to send tweaked revenue, filtered revenue; it's never going to be filtered events. The problem with events is that they're binary: either they happen or they don't. And the platforms can only optimize toward one event, not two or three. I mean, you can add them up, but they're all going to have the same value to the platform. That may change in the future, and I hope Meta releases this and then eventually Google, but we're not there yet.
The most sophisticated stuff is done at the revenue level, because I can be so much more granular: this user's worth $1, this user's worth $10. That said, I'll play devil's advocate against my own opinion: some advanced stuff can be done at the event level. I know Voyantis, for example, believes that the timing at which you send the event actually makes a difference, and they proved it against what I had bet, so they won that one. I'm still not very good at engineering it myself, but they showed me it's true: sending it earlier or later can make a difference. Not too late, either.
But sometimes you've got a free trial happening, and even the onboarding question and the other criteria we talked about are not good enough to decide whether this free trial is highly likely or unlikely to convert. You could perfectly well prevent the event from being fired, wait one or two hours, and notice, "Oh, this group of users has done three activities, three meditations, three workouts, three whatever, within two hours, and this group has not, and I'm only going to fire the event for the first group and not the others." So you could delay the events a little, still work at the event level, and do it. Personally, I think all the sophisticated stuff happens at the revenue level, because you can be so much more precise about what you send.
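As a sketch of that delayed-firing idea, here is a small Python example that holds the trial event for a couple of hours and only fires it if enough in-app activity shows up in that window. The two-hour window and the three-activity threshold are illustrative assumptions, not recommendations from the episode.

```python
# Sketch of delaying the optimization event so early in-app activity can
# qualify it, while still staying within day one.
from datetime import datetime, timedelta

QUALIFY_WINDOW = timedelta(hours=2)
MIN_ACTIVITIES = 3  # e.g. meditations / workouts completed in the window


def should_fire_trial_event(trial_started_at: datetime,
                            activity_timestamps: list[datetime],
                            now: datetime) -> bool:
    if now - trial_started_at < QUALIFY_WINDOW:
        return False  # still inside the waiting window; don't fire yet
    in_window = [t for t in activity_timestamps
                 if trial_started_at <= t <= trial_started_at + QUALIFY_WINDOW]
    return len(in_window) >= MIN_ACTIVITIES


start = datetime(2025, 1, 1, 9, 0)
active = [start + timedelta(minutes=m) for m in (10, 40, 75)]
print(should_fire_trial_event(start, active, now=start + timedelta(hours=3)))  # True
print(should_fire_trial_event(start, [], now=start + timedelta(hours=3)))      # False
```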
My other argument against the timing, and here I'm going to fold the sophisticated question back into the low-hanging fruit, is that I'm entirely convinced that whatever comes after 24 hours is useless for the platform to optimize towards. Which is a very hard ask, because a lot of apps, especially in gaming, would need a full week of seeing how users come back to the app, what they do, whether they convert later, whether they start with a small purchase before going up. Yeah, that would be great. And with a free trial it's the same case, because then you've got three days.
I had the question recently; somebody was like, "Oh, if I reduce my free trial from seven days to three days, it's going to be much easier for the platform to see this data after the third day." I was like, "No." Anyway, if it's not on the first day, it's not entirely useless, it's not a hundred percent useless, but in my opinion you need to find something that happens on the first day. Which means you might prevent the event from being fired the moment it happens, but you want to send it within the first hours: you wait a few hours, then you send it, and that's it. That's what you've got.
So one of the low-hanging fruits here is: don't optimize for your seven-day or 14-day event or whatever; optimize for something on day one. The advanced stuff is to move it to revenue so you can tweak it further and further, and then you can iterate on the goal. You add a new plan and you can easily say, "Okay, I've got a new type of plan, or I've got a promo, and I'm going to adjust the value very close to what I think it's worth." That can also include, for example, detecting a particular refund pattern: some people who are more likely to refund have done this and this and that, and to users who've done this and this and that you would assign a slightly different value.
You're never sure; I mean, it's all probabilistic. At the user level, yes, a user converts their trial or refunds or renews, but in campaigns you're going to have a bag of these users, and you're trying to get as close to reality as you can. And one thing that is actually hard, I've got this with a team right now, is monitoring the value we're sending to the platforms against the real value that's happening. We've got these curves that we're following, and as soon as we see a gap there, we know something is going wrong.
As dynamic and sophisticated as the model is, it never follows reality exactly, so we're always working on closing the gap; the developer keeps working on getting as close as possible to the real value. But then you can engineer further, and this is another conversation: you can fake it. Typically, something I like to do is amplify it, to make the network's job even easier. Let's simplify the situation: I've got users worth $5 and users worth $50. What I'm going to tell the platform is that the $5 users are actually $3, and that the $50 users are actually $100. I'm going to force it, make it a little more extreme. The catch is that I can't really read the value in the platform anymore; I can't read my ROAS in Facebook because it's all fake. I have to look at it on my side, but that's okay. I'm trying to force the platform to gear toward the users that are most valuable to me by pushing it a little.
I had a fun conversation with someone from gaming who said, "Oh, I do the opposite." What? So when you have a high-value user, you tell the platform it's a medium-value user. Why? Because we've got whales, and if I really declare a big whale of $500 or $1,000, Meta is going to say, "That's it, my job is done for the day, I brought that one user and I'm done." So they actually cap it. It's a lot rarer in subscription that you want to do that, but let's say you've got a tier of subscription like ChatGPT's: maybe ChatGPT is going to fake it so that the $200 plan is reported as only $100, because otherwise the campaign could derail faster if too many of these users come in simultaneously.
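Here is a minimal Python sketch of both moves: widening the spread between low- and high-value users, and capping outliers the way the gaming example describes. The $5/$50 thresholds, the 0.6 and 2.0 factors, and the cap value are illustrative assumptions; as noted above, whatever you report stops matching your real ROAS.

```python
# Sketch of deliberately reshaping the value you report: widen the spread so
# the network separates good and bad users more easily, and optionally cap
# outliers so one whale doesn't derail delivery.

def reported_value(true_value: float, amplify: bool = True, cap: float | None = None) -> float:
    v = true_value
    if amplify:
        if v <= 5.0:
            v *= 0.6     # push low-value users down ($5 -> $3)
        elif v >= 50.0:
            v *= 2.0     # push high-value users up ($50 -> $100)
    if cap is not None:
        v = min(v, cap)  # the gaming-style whale cap from the example above
    return round(v, 2)


print(reported_value(5.0))               # 3.0
print(reported_value(50.0))              # 100.0
print(reported_value(500.0, cap=100.0))  # 100.0 instead of telling Meta "job done"
```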
Because at the end of the day, the optimization machines are very sophisticated, but they're also a little bit dumb, and sometimes when they see a couple of good signals coming in, they completely change delivery. And yeah, we're playing with fire a little bit here, because one, we experiment along the way, and I made a few mistakes along the way. But two, it can derail pretty fast, in a way you haven't predicted. Although if you look at it from the network side, from AppLovin or from Facebook, you'd say, "Yeah, it's kind of logical: they saw all this revenue happen that day, so the next day they deliver in a way that goes too far, too fast."
So you need to find a balance. Me, I often make it more extreme than it is, but there is a limit to this. You can't go too far; don't push it. I also had a very interesting conversation with somebody else who asked, "Is it better to fake that you're making more revenue, so that Facebook thinks you're super successful and serves you more than other advertisers, or will they actually come eat your margin and say, oh, this guy is making too much money, so I'm going to underdeliver?" I don't have the answer to that question, but it's an interesting one to think about: how much is the platform itself reacting to the changes we're making on our side?
David Barnard:
Yeah, it's also fascinating. So in this advanced scenario, we're actually manipulating the revenue that you're sending back for the revenue optimization goals. How does that actually play out? You're looking in Amplitude, Mixpanel, or internal dashboards and determining, back to our meditation app example, that somebody who answers "anxiety" and does these other three actions is our super-high-value user. So you're kind of backtracking from known data and the signals you're seeing inside the app and then assigning a value to that. And then, to your example earlier, if they have this general need and they're monetizing, but not monetizing great, then you assign the $1 value. How are you doing that, and how do you think about that?
Thomas Petit:
Before you go and take the risk of changing the value itself to something you faked because you think you're smarter (I think I'm smarter, and it can backfire pretty fast), the logic of it is just making a regression: where is the variance? I know I've got users of different values; which criteria create this variance? Is it an onboarding question? Is it the device? Is it the time to complete onboarding? Is it this? It's basically crossing this: okay, I've got these high-value users, these low-value users, and these zero-value users; what does each group have in common that the others don't? Basically a regression. The reality is that many times I've explained to a data analyst that I want this job done and that the goal is for them to find a criterion I didn't spot myself, a new one.
The reality is that the criteria are very often the same. They're the ones we mentioned, the big ones. I guess as you go more sophisticated, and I had an interesting talk with Voyantis about this again, the number of activities people do and when they do them, the time of day, whether they install on the weekend and follow a different pattern; they go much further than this. But I think for most people, basically looking at where the variance comes from, like a regression analysis of which criteria create variance between people who refund a lot and people who renew a lot, gets you 95% of the job done. If you look at where the variance is coming from, it should be good enough for most cases.
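For a concrete picture of that variance hunt, here is a small Python sketch using pandas: group users by each candidate criterion and compare average realized value across groups. The column names, answers, devices, and values are invented for illustration; in practice this would run over your own backend cohort data.

```python
# Sketch of the variance hunt: for each candidate criterion, compare average
# realized value across its groups and see which criterion spreads users apart.
import pandas as pd

users = pd.DataFrame({
    "answer": ["anxiety", "healthier", "anxiety", "sleep", "healthier", "anxiety"],
    "device": ["iphone15", "android7", "pixel8", "iphone12", "android7", "pixel8"],
    "value":  [55.0, 0.0, 48.0, 12.0, 3.0, 60.0],
})

for criterion in ["answer", "device"]:
    by_group = users.groupby(criterion)["value"].mean().sort_values(ascending=False)
    spread = by_group.max() - by_group.min()
    print(f"\n{criterion} (spread ${spread:.0f} between best and worst group)")
    print(by_group.to_string())
```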
David Barnard:
Are any of the tools particularly good at this? I know Amplitude recently has this AI variance detection kind of stuff. Are any of the tools especially good at that?
Thomas Petit:
I wish. The problem is that very often the data is a bit scattered. Amplitude would be very, very good, but then the connection with RevenueCat or similar needs to be extremely good as well for the revenue to be proper. Very often it comes from a variety of sources. I think it's going to come, but in the cases I've seen myself, it was done manually from backend data, because that's the only place where we had the liberty to dig and query the data in very, very specific ways ourselves, and because we could extract it and the data analyst could run their own model on it without feeling limited by what those tools offer. But also, in one particular case, because they had very sophisticated internal tooling: they do have Amplitude and they do have an MMP and so on, but in their own backend they've got the equivalent of RevenueCat and the equivalent of Amplitude for every single screen.
And so obviously, in this case, the data was more complete there. If somebody has a Mixpanel or Amplitude connection with RevenueCat that's really properly done and a very large cohort of users, eventually it's going to be done automatically. Maybe there's joint work to be done here between you and them, but most of the setups I've seen were manual. And I think the reasons are: one, it's still a bit of a pioneering field; two, there's a lot of predictive stuff, so it's very unique; and three, very often you have to join different sources of data together. And even though you have RevenueCat in there, and even though, for example, in Amplitude I can also plug in the MMP data so I can filter by source, it's not something that comes super naturally. It's not something that's completely well done out of the box, because it's made for product analytics people. It's made for product managers, not for user acquisition specialists.
David Barnard:
Yeah, it's not a revenue optimization tool, it's a product analytics tool.
Thomas Petit:
Exactly. It's not what it's made for. I don't think it's impossible, and some people will do it like this, but I know I've struggled with some limitations in some cases.
David Barnard:
That makes a lot of sense. Well, we've gone quite long, but I wanted to give you an opportunity for kind of a grab bag here at the end of other considerations that we haven't discussed. So one of them that we talked about on the webinar was the trade-offs you make in volume of events versus quality of events. So why don't you cover that first and then just a grab bag of anything else that you think people should be considering as they embark on their signal engineering journey?
Thomas Petit:
The volume versus quality. We hadn't covered it today, so whoever missed the previous webinar and hasn't read the article that I haven't posted yet will have missed it, so I'll go back to it, because it's pretty fundamental. You can have a fantastic event, but if the platform is seeing one or two every day, it's not going to work. The filtering I mentioned before: for me, filtering out people is the last resort, but that's not absolute. If I've got a very large amount of data, I'd be more inclined to exclude people. And the number of events you send matters at the campaign level. For every campaign, basically, there's a threshold of optimization events that networks need; there's no exact number, but the number that's floating around is somewhere between 50 and 100 per week. So I simplify it in my head and say below 10 per day per campaign, things are going to go a little bit wrong.
It's a bit "the more, the merrier." So if I can have 25 instead of 10, I'd rather not filter and have 25. And if I can have 40 instead of 20, then I take 40, it's better. But once you get there, there's very little upside to sending 300 every day, and there's a lot of upside to sending only 30 every day that are super filtered. That's where the trade-off between volume and quality sits. If you're too early, you basically can't do qualified trials, because you only have five trials per day, so don't try to filter them. If anything, you'll want to go higher up the funnel, before the free trial, and say, "Okay, these users haven't started a free trial, but they've shown very good intent on the onboarding questions or on the first activity; I'm going to send them too, so that I have more events."
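A back-of-the-envelope way to think about that decision, as a Python sketch: only filter the optimization event if what survives still clears a rough weekly floor per campaign. The 50-per-week floor echoes the range mentioned above, but it's a heuristic assumption, not a platform guarantee.

```python
# Rough check on the volume/quality trade-off before filtering the event
# an ad network optimizes on.

WEEKLY_FLOOR = 50  # commonly cited lower bound of the 50-100/week range


def can_afford_to_filter(weekly_events: int, keep_ratio: float) -> bool:
    """keep_ratio = share of events that would survive the filter (0..1)."""
    return weekly_events * keep_ratio >= WEEKLY_FLOOR


print(can_afford_to_filter(35, 0.5))   # False: ~5 trials/day, don't filter yet
print(can_afford_to_filter(280, 0.3))  # True: even heavy filtering leaves enough
```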
And so the more events you have, the more you can filter. That's pretty clear, and this trade-off is pretty fundamental. On the webinar we didn't talk much about timing, but I think I covered it decently today, along with the other part, the [inaudible 01:29:09] part. But the timing is important, and my rule here is: don't wait too long. You can wait a little bit, but don't wait too long. So timing is one, and the amount is one; the amount is very critical. And it changes over time: maybe the best engineering you can do today is very different from what you can do in six months, just because your budget has moved from $100,000 to $500,000, which enables a very different kind of filtering, and because your cohort has grown. So even with the teams that have the most sophisticated models, we keep revisiting the values we're assigning, because plans change, because we make monetization changes, because the mix of channels is different as well.
And the whole user base changes, maybe because there's a crisis, maybe because of something else. If you're basing it on an aggregation of five years of data, there's also a big trade-off between how recent the data is and how much data you have. I've got this rule, for example, in one case: if we've got enough users in the last two weeks to make a decent prediction, we use that. If we don't, we expand to three months. And if that's still not enough, we loosen up one criterion, could be a country, could be an onboarding question and so on, because otherwise, and I've made this mistake several times, you create useless fluctuation on very small samples. So obviously, the more data you have, the easier it gets. It's not a fixed thing. The advice here, the TL;DR, is: revisit your assumptions every now and then, every six months or whatever, and check that the gap between what you're sending and what is actually happening is not too big.
And the thing about this gap is that the trap is to look only at the total. "We only have a 2% deviation." I'm like, "Yeah, but there's 10% of people who are responsible for 80% of this deviation." So don't look at this gap only in total; monitor that it doesn't go sideways too much. Country is a big one here, and platform. Maybe not every single criterion is needed, but at least monitoring that the value of the events you send stays fairly reliable at the country level over time is a good idea.
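As a final illustration, here is a small pandas sketch of that gap check: compare the value sent to the networks with the value that actually materialized, in total and per country, so a small overall deviation can't hide a large one in a single segment. The numbers are invented for illustration.

```python
# Sketch of gap monitoring: sent value vs. realized value, in total and by country.
import pandas as pd

cohort = pd.DataFrame({
    "country":  ["US", "US", "TR", "TR", "DE"],
    "sent":     [55.0, 42.0, 18.0, 18.0, 40.0],
    "realized": [53.0, 44.0, 9.0, 11.0, 41.0],
})

total_gap = cohort["sent"].sum() / cohort["realized"].sum() - 1
print(f"total deviation: {total_gap:+.1%}")  # small in total...

by_country = cohort.groupby("country")[["sent", "realized"]].sum()
by_country["deviation"] = by_country["sent"] / by_country["realized"] - 1
print(by_country.round(3))                   # ...but large for one country
```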
David Barnard:
All right, well this has been a blast. So much great information. This is why we got to do it annually. You're constantly learning new things and then you're so open to share. And then I get asked all the time, "Hey, can you make an intro to Thomas?" I'm like, "Thomas is busy. He's going to be busy the rest of his life."
Thomas Petit:
I'm too busy researching new topics for David for the next one.
David Barnard:
Exactly. But this is why we'll have to do it annually: you have your stable of great clients that you're learning from, and you're not hesitant to share like a lot of people are. I really appreciate that you're always digging in there and sharing what's working, and this has been a masterclass in figuring this kind of stuff out. So thanks again. And a quick plug: Thomas is actually going to be doing a hands-on workshop at App Growth Annual, which will be in October 2025 in New York City. So if you want to go super hands-on with Thomas on signal engineering, sign up for App Growth Annual. The workshops will not be live-streamed, specifically because we want people to be able to share, really open up, and have real conversations. If you want to dig deeper, you've got to be in person for the signal engineering workshop at App Growth Annual. I'm looking forward to seeing you in a couple of months, and thank you so much again for the conversation today.
Thomas Petit:
Yeah, likewise, looking forward to seeing you in person, and thanks again for inviting me.
David Barnard:
Thanks so much for listening. If you have a minute, please leave a review in your favorite podcast player. You can also stop by Chat.subclub.com to join our private community.

