The Post-Attribution Playbook for Growth — Eric Seufert, Mobile Dev Memo

On the podcast: how measurement dysfunction paralyzes growth, why diversifying channels for the sake of diversification actually hurts performance, and the futility of trying to interpret why ads win.

Top Takeaways:

📊 Broken measurement kills growth – Without one shared framework for “what good looks like,” teams spin in circles.

🌊 Don’t diversify just to diversify – Extra channels add overhead; scale with a waterfall approach when your core channel saturates.

🎲 Stop asking why an ad worked – Treat outcomes as noisy; improve the inputs and process that raise your win rate.

⚡ Ship speed over certainty early – Kill losers fast, let potential winners age in cohorts, and expand spend in steps.

🧩 Engineer better signals – Design events that reveal real intent/LTV so platforms can optimize toward customers who stick, not just click.


About Eric Seufert:

👨‍💻 Quantitative marketer, media strategist, investor, and author.

📈 Eric shares expert advice on the Mobile Dev Memo blog and is an investor at Heracles Capital.

💡 “The way I approach creative testing is trying to identify losers as quickly as possible. The winners take time to prove out, but the losers are pretty quick to prove out.”


Episode Highlights: 

[1:00] Intelligent design: How to effectively incorporate AI into your business strategy.

[4:52] I, Robot: Machine learning ≠ generative AI.

[8:36] AI Pitfalls: AI works best for automating tasks and coming up with ideas — not generating brilliant creative assets.

[17:29] Predictive AI: Brand-specific, full-fidelity video ads generated by AI could be a reality within 18 months.

[33:25] Risky business: How to effectively diversify across advertising channels to optimize ROAS-adjusted spend.

[37:43] Measure of success: Above all, make sure your measurement system is coherent and has cross-team alignment.

[42:04] Tortoise vs. hare: To balance speed and efficiency, identify your ad “losers” as quickly as possible.

[44:43] Missed opportunity: Good marketing comes down to embracing some uncertainty and minimizing the rest.

[49:23] Human touch: Why generative AI creative tools probably aren’t a worthwhile investment right now.

David Barnard:

Welcome to The Sub Club Podcast, a show dedicated to the best practices for building and growing app businesses. We sit down with the entrepreneurs, investors, and builders behind the most successful apps in the world to learn from their successes and failures. Sub Club is brought to you by RevenueCat. Thousands of the world's best apps trust RevenueCat to power in-app purchases, manage customers, and grow revenue across iOS, Android, and the web. You can learn more at revenuecat.com. Let's get into the show.

Hello, I'm your host, David Barnard. My guest today is Eric Seufert, media strategist, quantitative marketer, author, and investor. Eric currently shares his musings in the Mobile Dev Memo newsletter, blog, and podcast, and invests via Heracles Capital, his early-stage venture fund. On the podcast, I talk with Eric about how measurement dysfunction paralyzes growth, why diversifying channels for the sake of diversification actually hurts performance, and the futility of trying to interpret why ads win. Hey Eric, thanks so much for coming back on the podcast.

Eric Seufert:

Great to be here, David. Thanks for inviting me back.

David Barnard:

All right, so we put a Google form out there for folks to ask questions and we got some really good questions in, and so I'm going to get to those in a sec. But I was kind of surprised that nobody asked about how to use AI, what's going on with AI. It's like that's all everybody's talking about on Twitter. So I was kind of expecting more questions around AI.

So I'm going to selfishly lead with my own question, because I do feel like we're at a bit of an inflection point where things are still early, but it feels like you're going to be left further and further behind if you're not at least starting to experiment. So what I wanted to ask you is: where do you think the low-hanging fruit is right now? Teams that you're seeing be successful, what do you see them doing and using that's effective today? Not like, "Oh, six months from now this'll actually get good," but what's good today?

Eric Seufert:

There are a lot of dead ends that I see companies pursuing with AI. If you are going to embrace AI, I think it's important to first take a step back and define what we're talking about when we say AI, right? It's a catch-all term at this point, but it shouldn't be. It shouldn't be. When you're talking about AI, you're fundamentally talking about replacing human decision-making with some other mechanism.

That's ultimately a big concept. And when people talk about AI, they tend to focus on the output: I write a prompt in a chatbot and a bunch of text gets generated, or I write a prompt in a text-to-image tool and some images get created. I think that's probably not the substance of how you transform your business with "AI," right? Here's my advice, if you want to embrace this in a transformative way for your company, not in a superficial way.

First of all, start with first principles: what does AI mean to your business? You know that, among the other things I do, I'm the Chief Strategy Officer at Fabulous; we make health and wellness apps, and they're all subscription, so I've got a genuine reason to be here on The Sub Club today. I'm leading this effort at Fabulous, and what I did was work with the founders and say, "Okay, let's start with our principles around AI, the use of AI within the company. Let's define what we actually want to achieve with AI."

So this company, Zapier, they released this document that they were using internally, which broke each organizational function down into a line item on this matrix. And then going across the columns were the levels of implementation of AI from unacceptable or no implementation to the full embrace of AI, and they just describe what each team should do to sort of cascade across these columns. And so we did the same thing, right?

This isn't a top-down mandate. We presented this to the teams and we said, "Right, fill it in." You tell me what the complete transformation of your organizational function through AI looks like, and you tell me what an unacceptable non-implementation of AI looks like, and then we'll plan out a roadmap to get from no implementation to the complete transformative implementation, and then we'll decide what resources are needed. The teams themselves get to decide how they define and pursue that roadmap, but there is no option to not do it.

That's how I would start. Essentially: how do we absorb this into the culture of the company and make sure that every single functional team feels enabled to do this? They should also feel like they have the agency to define what that implementation looks like. I think that's really important. Just on the marketing side, again, don't focus on outputs; focus on automation, on replacing human effort with a machine-handled mechanism, and think about anything that would move the needle.

So if you want campaigns to be optimized in real time, that's not something you're going to build. You're going to rely on Facebook doing that, or Google, or Pinterest, or Amazon, or TikTok. They have those tools. Those are platform imperatives; that's not for you to build.

David Barnard:

And to your earlier point, those are actually machine learning models, not generative AI models. I think Zuck tried to make that distinction on the latest earnings call: the generative AI stuff is not what's powering the increase in the efficiency of ad spend on Meta. It's the machine learning models getting better and better. And that's a bit of a confusion in the industry right now: generative AI, chatbots, image generation, all these things seem so magical that they get plastered over everything, but there's a lot they're not good at.

Data analysis and other things are where you want machine learning models, which now just get lumped into this umbrella term of AI, like you were saying earlier. So part of it is picking the right tool for the job. And to your point, the platforms are going to be so much better at that optimization, and they're not using generative AI to do it. They're using machine learning models they've built up over a decade, plus all the data that you don't have.

Eric Seufert:

Right. Yeah, and that's really important, the data. That's a great distinction, and I think a lot of people don't make it, right? What Meta calls those buckets is core AI and GenAI. And GenAI, to be fair, they said they have 2 million advertisers using their GenAI products for creative production, but a lot of that is still pretty superficial. They have animations, which is a big deal, but a lot of it is still just ecom swapping out the backgrounds.

But his point there was that GenAI hasn't yielded that much extra efficiency yet, because they're still in the early stages of rolling it out. What has, and what I called out in my earnings analysis and in another piece I wrote called "AI is not the Metaverse," is that AI is really generating truly substantive efficiency for advertisers right this second. It's not this far-flung, far-fetched destination.

It's working on behalf of advertisers right now and generating regular improvements and optimizations to their ad campaigns. Those are things like GEM, Lattice, and Andromeda, which they've talked about, and I had Meta's VP of AI on the show to talk about all those tools. It's really interesting: every quarter they point to those tools and say they drove 5% more efficiency. And I think people misinterpret that, because they say, "5%, who cares?" But no, that compounds. That's 5% a quarter, or 5% every half year.

And also, when you unlock 5% efficiency in ad spend, what do you get? More ad spend the next cycle, right? And then that grows faster. So I think people don't know how to interpret that. I would leave that kind of heavy lifting to the platforms; you can't do it yourself. So what can you do? Again, if you're not focusing on the output, on just creating a bunch of creative, which probably isn't going to move the needle that much, what should you be focusing on? Automating tasks. A big one I see people using AI for is creative prospecting.
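
As an aside, a quick back-of-the-envelope check on that compounding point; the 5% per cycle is the figure cited above, and the cycle counts are illustrative:

```python
# Compounding a recurring 5% efficiency gain (cycle counts are illustrative).
gain_per_cycle = 0.05

for cycles in (4, 8):  # e.g., quarterly gains over one and two years
    cumulative = (1 + gain_per_cycle) ** cycles - 1
    print(f"{cycles} cycles: +{cumulative:.1%} cumulative efficiency")

# 4 cycles: +21.6% cumulative efficiency
# 8 cycles: +47.7% cumulative efficiency
```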

Just looking at what your competitors are doing, pulling that information into an S3 bucket, whatever that information is, maybe it's ads, maybe it's other things that are visible to you, and then using some agent to interpret it. That would've been a full-time job three years ago, and every big, scaled app advertiser was doing it. They were looking at the Facebook Ads Library, putting it into a Google Doc, and sending it around: "Hey, here's what our competitors are doing this week. What lessons can we take from that?"

But now you can do that in an automated way, using tools like LLMs to interpret what you're seeing, interpret the concepts from these ads, and tell you why they might be working. That's a big thing I think a lot of people don't appreciate: it's creative synthesis, but scaled way beyond what people were doing with one person working on it full-time or half-time.

David Barnard:

That's an interesting vector to be thinking about this on. And to your exact point, one of the challenges we still have, especially with generative AI crunching numbers, is hallucination. I tried to get ChatGPT (I used Pro, I used thinking, I still haven't tried deep research) to translate a list on a web page into a formatted list that I wanted and exclude some things with specific criteria.

It did great for the first 40 things on the list, and then it just started hallucinating, making shit up, putting stuff in the wrong place. It really fell down. So that's a great example of understanding the limitations of generative AI: you probably don't want an AI sitting between your ad buying and your decisions on pricing and such today.

There's just so much risk there. But for researching creative, especially competitors', and coming up with hypotheses, hallucinate all you want; we're going to go test it anyway, and the ultimate deciding factor is Meta's algorithm picking the winning creative. You're just generating ideas. Hallucination isn't going to break things or cost us tens of thousands of dollars the way it would if we were buying ads based on it.

Eric Seufert:

You know who Jason Lemkin is? He runs SaaStr.

David Barnard:

Oh, yeah, yeah. He's an investor in RevenueCat.

Eric Seufert:

Is he? Oh wow. So he was chronicling his experience with Replit, vibe coding an app from scratch, and every day he was almost journaling: "Here's what I did today." And then one day he's like, "Project's over. Replit deleted my production database. I'd have to start over, so I'm giving up." That kind of stuff happens. I think these tools do really well at very specific, discrete tasks.

They don't do well when those tasks are chained together. There's this idea of temperature in machine learning: you add a little bit of noise whenever the output is being selected. With an LLM, essentially what you're doing is conditional probability to predict the next word. If you think about attention and the transformer mechanism, the big innovation there is looking at a very long context and figuring out how each word affects what word comes next. But when you do that, you still add in a little bit of noise to determine what the next word could be.

Now imagine you're doing that and stacking it. Predicting the next word is not that difficult, but when you give it a very complex task, you stack that noise, that stochasticity, and it compounds. That's where you get, "Hey, here's a forty-step process," and every step of the way there's a little bit of randomness being thrown in, so by the time you get to the fortieth step, it's like a game of telephone.
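
To make the temperature point concrete, here is a minimal sketch, with illustrative numbers, of temperature-scaled softmax sampling and of how per-step randomness compounds across a chained task:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next_token(logits: np.ndarray, temperature: float = 0.8) -> int:
    """Temperature-scaled softmax sampling: higher temperature, more randomness."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

print(sample_next_token(np.array([2.0, 1.0, 0.5])))  # index of the sampled token

# The "game of telephone" effect: even a high per-step success rate
# decays quickly once steps are chained.
p_step = 0.98  # illustrative probability that any single step goes right
for n_steps in (1, 10, 40):
    print(f"{n_steps:>2} chained steps: {p_step ** n_steps:.0%} chance all go right")
# 1 -> 98%, 10 -> 82%, 40 -> 45%
```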

The way I like to approach these tasks is very discrete, and I intermediate everything. Here's the output; okay, I'm taking that output, checking it, giving it back to you, and saying, "Now do the next thing." I think we're still in a phase where there need to be guardrails: either human oversight, or throw it at something where the downside risk is pretty limited and contained.

David Barnard:

Yeah, no, that's fantastic. Any other specific examples, before we move on to the other questions, of things you see working today within those limitations?

Eric Seufert:

When you talk to companies, especially in the app world, about the use of AI, they immediately jump to creative, and I think that's probably the least valuable place to apply this. Yes, you can go from 10 creatives to 200, but a lot of times people are taking their 10 creatives, which are themselves variants, and getting 200 variants of one concept. That's not actually doing anything for you. There are just diminishing returns on taking 20 variants of one concept to 200; the gains are vanishingly small.

What you really care about is the concept, and actually coming up with concepts that you yourself couldn't come up with, because if you could just come up with them, the AI is not doing that much to influence performance. My aha moment with this was at the last company I worked at. I built this tool called Draper, and that's just what it did: it created variants of ads. This was 2018, so people weren't really talking about AI at that point.

This wasn't even machine learning; it just created a bunch of variants, all permutations of these different ads. We would deploy these on Facebook all the time, in a constant cycle, and then we had a stand-up every week with the whole company and I would say, "Here's the ad that worked best this week." I had no idea what it was going to be, right? No person touched it. It was just a process running in the background.

If you could mass-experiment like that, then you could find winners. And Facebook at that point, value optimization (VO) was a year old, and that was the way they did the audience pairing: VO, based on the value estimate. I couldn't foresee which audiences it was going to be targeted to anyway, so let me just feed the beast with as much creative as it needs, and it finds those high-value audiences and tests all this different stuff. That was the big aha moment for me.

I shouldn't have any preconceived notions about what's going to work. I actually posted about this maybe a month ago and got a lot of pushback, but my point was: given the use of PMax (it's pretty text-based) and Advantage+, there is no point in trying to interpret why an ad won or didn't win, right? There's no point. What you should be interpreting is that when you get a win, or the win rate increases, the process worked. Maybe the process took a new input, and that's the learning, but it's not the output, because the output was utterly random.

Why that ad worked was utterly random, and if you try to deconstruct it and take a learning from it, you're wasting your time. What you should ask is, "How am I changing the inputs such that I'm getting a higher win rate?" and let the machine do its thing. The output is irrelevant; you cannot interpret it. You can't understand why it worked. Don't even try. If it worked, though, what did you change about the inputs? That's what you learned, right? The process works, not the ad.
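
A minimal sketch of what that looks like in practice: score the process by its win rate across batches of creatives, rather than dissecting any single winner. The counts and the crude significance check below are hypothetical and illustrative:

```python
import math

def win_rate(wins: int, tested: int) -> float:
    return wins / tested

def two_proportion_z(w1: int, n1: int, w2: int, n2: int) -> float:
    """Crude normal-approximation check that a win-rate shift isn't just noise."""
    p1, p2 = w1 / n1, w2 / n2
    pooled = (w1 + w2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Hypothetical batches: same volume, one new input added to the process.
old = {"tested": 120, "wins": 6}   # e.g., manual variant generation
new = {"tested": 120, "wins": 14}  # e.g., automated concept prospecting added

print(f"old win rate: {win_rate(old['wins'], old['tested']):.1%}")
print(f"new win rate: {win_rate(new['wins'], new['tested']):.1%}")
print(f"z = {two_proportion_z(old['wins'], old['tested'], new['wins'], new['tested']):.2f}")
# The learning attaches to the input you changed, not to any individual ad.
```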

David Barnard:

So I guess part of what you're saying is: use generative AI precisely because it is noisy, because it will come up with random stuff. Let it go crazy on concepting to generate ideas you would never generate, and then feed the beast. To your point earlier, all the major platforms now do all sorts of testing if you can just feed the beast, and I like that. Build the process to feed the beast and iterate on that process, versus thinking you can figure something out and make 10 more creatives that will win because you found some insight. Create an insightful process to feed the beast, and let the beast do the work.

Eric Seufert:

Because at the end of the day, you have no idea. You essentially have no idea who that ad was even shown to.

David Barnard:

Yeah.

Eric Seufert:

But even if you did, even if you did, and here's where we're heading, this is really the reason why people push back against this. People say, "No, of course you can learn from that, because it creates a feedback loop." I don't want to say it's a difficult or complex point, but it's counterintuitive: no, what you care about is how you changed the process to improve the outcome, not what you can learn from that outcome to then optimize the inputs, right? That's not something you can interpret.

And that scares people, because ultimately it's saying artists aren't important, there's no creativity, it's just an optimization problem. I understand why that's uncomfortable, and I can understand the pushback too. Look, anytime you're talking about removing humans from a process, it's scary. You read the news, especially in gaming: King, the company that makes Candy Crush, just laid off a whole studio, and some of the people who were laid off said, "We spent the last six months training an AI tool to do our jobs."

So this is having an impact on employment right now, and that's scary, right? I had a professor in undergrad, an econ professor, who said, "You can't talk in this dispassionate tone about dislocation from innovation, because people are losing their jobs. If someone loses their job, they don't care about the rational argument that, well, actually the economy's going to be more efficient because we have this innovative thing. They feel like it's a conspiracy against them personally."

You do have to approach this with total empathy and sensitivity, but that's just the reality, and I understand why it's a sensitive topic. But think about where we're heading. Some people view this as kind of dystopian, but what if the ad was just incomprehensible to you, yet it triggered something in your brain? What I think people ultimately don't want to recognize is that that's already really the case.

Now, it may be a picture of a shoe, and it's a dog skateboarding wearing this pair of shoes, and you say, "Oh, people like that because of the cute dog." That might not be the reason at all. We're making a lot of assumptions when we try to deconstruct these ads to understand them, and a lot of times they're just post hoc rationalizations: "Oh, I like dogs, so everyone must've responded to this ad because of the dog." We have no idea what caused that impulse to click, and we shouldn't try. I think we can't. We must acknowledge that we just can't do it.

David Barnard:

The last thing on AI before we move on to the questions: where do you see all this going? You've dropped hints along the way. Zuckerberg, I think in an interview or maybe one of the earnings calls, said that in the not-too-distant future, as you just alluded to, they're just going to generate all the creative, generate the ideas. They're going to know the person so well that each ad is personalized to the individual, and iterating on creative just goes away because they're so much smarter at it. So that's one vector, and I'm happy for you to dig deeper into that one specifically, but what other ways do you see this going over the next 12 to 18 months?

Eric Seufert:

My theory is that Facebook could deploy a change tomorrow that achieves what you just described. They are slow-rolling this because they know they have to get people onboarded and comfortable with it. If they did that push tomorrow, "Hey, by the way, just give us the money and we're taking care of everything else," people would be reluctant, because they wouldn't trust it yet. So they have to be very deliberate and measured with the pace at which they roll these tools out. On that timeline, given that comfort restriction, my sense is that in 18 months we're at brand-specific, full-fidelity video ads being generated without a prompt.

Just based on past performance. They're not auto-deployed, but you can go through them and say yes, no, yes, no. I think that's probably on the 18-month timeline; they could do it right now, but people would be very reluctant to use it. Where you go from there is auto-deploy: just do it, I don't need to be a bottleneck. But I think within 18 months you get to that point. Another hypothesis I have, and it would also be regulated by comfort: could you imagine if you just said, "Hey, Meta, optimize my landing page. I'll put some code in here, you've got the pixel, I'll import another JavaScript library, and you just render it in real time"?

You decide for this user; I'll give you some guidelines, some guardrails, here's usually what the onboarding looks like, but you optimize it for that person. I was on a podcast called Marketing Operators, and I was talking about this idea of signal engineering. I could see it really click for one of the hosts, who later wrote on Twitter that the idea of signal engineering is not limited to the existing events you have; you can do whatever you want. You could create hurdles for the user to clear that are potentially good proxies for LTV.

And if it is a good proxy, then that's what you do. You create some problem for the user to solve, some hurdle, some obstacle to accessing the product, and the most high-intent users will do it. That can be a very strong signal of ultimate value. One of the hosts posted on Twitter afterwards that they created a CAPTCHA for no reason other than to test the user's intent, and it drove a 40% increase in ROAS or something. That's the essence of signal engineering: create a test for intent, and if the user clears it, they're a high-value user; send that back to the ad platform and let it optimize in real time against that.
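
On the mechanical side, reporting an engineered intent signal back to a platform can be as simple as firing a custom conversion event. Here is a minimal sketch, assuming a Meta Conversions API setup; the pixel ID, token, and event name are hypothetical placeholders:

```python
import time
import requests

PIXEL_ID = "YOUR_PIXEL_ID"          # hypothetical placeholder
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # hypothetical placeholder

def report_high_intent(hashed_email: str) -> None:
    """Send an engineered high-intent event so the platform can optimize
    delivery toward users who clear the intent hurdle, not just clickers."""
    payload = {
        "data": [{
            "event_name": "ClearedIntentHurdle",  # hypothetical custom event
            "event_time": int(time.time()),
            "action_source": "app",
            "user_data": {"em": [hashed_email]},  # SHA-256-hashed email
        }]
    }
    resp = requests.post(
        f"https://graph.facebook.com/v19.0/{PIXEL_ID}/events",
        params={"access_token": ACCESS_TOKEN},
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()
```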

So I think that kind of stuff is really fascinating. Now, what if Facebook could do that for you? What if Google could do that for you? Would they do better at it than you could? Yes, I think they probably would. Where it gets a little political is: are you going to get the product manager to hand that over? The marketing people have been comfortable with this for a long time, especially at SMB-type direct response advertisers.

Everyone we're talking to right now is like, "Give me, give me, give me, as fast as you possibly can," and someone at P&G or Clorox is saying, "Over my dead body." There's a mentality divide here. But now imagine you start bringing product managers into the mix, who tend to be a little more mercurial. They look at themselves as artisans, I think to a greater degree than a user acquisition manager would, and they're saying, "No way. I control this product experience. I control the first touchpoint the user has. I'm going to give that to Facebook? No way." I could see there being a little bit of tension there.

David Barnard:

Yeah, Thomas and I talked quite a bit about signal engineering on the last episode of the podcast. One of his points, though, and this gets back to PMax and the things Google has done, and what Facebook is doing with their value optimization stuff, is: do you really want to hand over control of the outcome to Facebook? If they know your exact ROAS, they'll optimize to get you one penny of profit on it. What signal engineering provides is the opportunity to fake that signal in a way that gets you what you need out of the algorithm while still maximizing profit on your end, versus handing it all over to Facebook. That's an accusation a lot of people have made against Google's ad products: they'll hit your target ROAS and then keep spending, but send it to all their crappy ad placements that they know aren't going to perform, because hey, they already got you what you asked for.

So they're optimizing for their own revenue, for placements, and for filling inventory more than they're optimizing for your ultimate outcome of being as profitable as you can be. That's another level of comfort UA teams will have to get over, along with the cynical view of these bigger companies and how the algorithms will behave, in order to hand off things like signal engineering to bigger algorithms.

Eric Seufert:

I've clashed with Thomas on this point since 2016, when UAC was rolled out. I wrote about UAC when it was introduced and said, "Look, the cynical interpretation of this is that their goal is to maximize spend while hitting exactly your ROAS target." And that's become a lot more relevant since. UAC was the precursor to PMax, and AAA, Automated App Ads, was the precursor to Advantage+. So we've been dealing with this for years and years. These tools are not new.

I did this podcast a while back called Commerce at the Limit, because people are scared of this stuff for all the reasons I just spoke about: well, it disintermediates humans, so what am I here for then? But my point was, look, this has been happening. This has been the modus operandi since 2016 with UAC. That was a precursor to PMax. The reason they started with the app environment is that it's more controlled. The apps go through review.

They're downloaded in the App Store, and prior to ATT you had a totally transparent line of sight from the ad to the usage, so in this sandbox environment they could much more easily understand all those signals and automate it. It got hairier with ecom. But look, there are still people working on apps; it didn't displace everybody. Things just got a lot more efficient on the marketing side. I wrote this piece called Satisficer's Remorse, right? It's this idea of: "Do I wish they were optimizing for my spend-adjusted ROAS?"

Yes I do, but I also understand that if I were running this myself, I might not have even been able to spend as much as I did at that level of ROAS. They're probably more efficient. So the satisficing part is that I'm accepting the outcome that meets my ROAS target. And the remorse part is that I know they could have done better, but I couldn't have. If they were a nonprofit, I'd be making more money than I am now.

If they were a nonprofit and just looking out for me exclusively, I would be making more money, but they're not a nonprofit. And if they didn't give me the benefit of these tools and I was in charge of doing it myself, I probably would've performed worse. When people say, "No, I could do it better than they can," for the most part I think they're kidding themselves.

David Barnard:

Yeah, yeah, yeah. It didn't occur to me until just listening to you talk through this, but the ultimate dystopian endgame of all this, especially for the app industry, is that at some point Meta and Google know better than even we do the goal and the outcome the individual wants out of a piece of software, and then they just generate that in real time for the user. That's very dystopian, and very far off, because you do have a lot of problems, like you said with Jason Lemkin, where the AI just deletes the production database.

I think we're a long way from fully AI-generated experiences with persistent storage that track your food and macros and calories over time. As you were saying that, I was thinking of your earlier point about the team at King training the AI that eventually replaced them. In some ways, we're going to be teaching Meta's algorithms, better and better, exactly what the user outcome is, what they care about and what they value. That's very valuable data, very valuable information, but I think it's a long way off, if not impossible. So we'll see.

Eric Seufert:

You could take this to any extreme you want. I prefer to focus on the upside, because I'm an AI maxi, right? I believe this is transformative. I think this is an inflection point that's going to benefit society for the better. If you take it to that extreme, where Facebook is just producing an app for me to download in real time and all the benefits accrue to Facebook, then okay, no one's clicking on anything, no one has any money.

Then the economy collapses. What I think is really exciting, though, is this idea that anybody can be an advertiser. If anybody can be an entrepreneur because of these tools, it actually becomes a lot more difficult to get attention, because you're competing with a lot more products. That's an issue on the app side. But there's a whole class of entrepreneurs that already exists, people doing lawn care, running a barbershop, running a bike repair service, who aren't advertising because it's just out of scope.

It's totally unrealistic. But what if it actually was just typing in a prompt for what my business is? And I'm actually not competing with that many people, because it's a locally oriented business. This is really just unlocking a new group of existing entrepreneurs who can be onboarded into the advertising economy, benefit from it, and drive more business as a result. That to me is really exciting. You empower a whole tranche of people who run truly small businesses, one-person companies or locally oriented companies, to reach as wide an audience as is relevant, and that's really exciting.

David Barnard:

Yeah, and we've already seen that. D2C is a fantastic example: the proliferation of very niche products. I don't use Instagram a ton, but when I do, they're so damn good at finding the exact product that is missing from my life. It's ultimate consumerism, and do I actually need it? There's that aspect of dystopia. But there are some really cool products that only exist because Facebook allows those creators to reach an audience like me, an audience niche enough that they never would've been able to sell the product in a grocery store or at Walmart, but people like me actually want it.

And there are infinite niches like that in the world. I know people go back and forth, and you're an ad maximalist, and a lot of people just wish advertising didn't exist, but this is about getting attention for innovation. Should that just be free? Should you be able to build something, and then how do you get attention for it? Advertising is a very effective, market-driven way to build something innovative and then get attention for the thing you're building.

So I go back and forth a little. I'm a little creeped out sometimes by all the data collection, but man, I bought my Meta glasses and freaking love those things. At some point there is a benefit to society more broadly, and a benefit to individuals. There are negative externalities, there are problems, it's not all sunshine and unicorns, but I think understanding fundamentally how advertising creates new markets and empowers entrepreneurs is really powerful and important. Too many people just crap on the ad industry without really understanding it and what it generates in consumer surplus. And Facebook's not taking all the profit.

People wouldn't come advertise with them if they were taking all the profit. Now, are they trying to maximize their own profit? Sure, but can you also maximize your profit on top of that? The answer is yes. That's why there's been a proliferation of entrepreneurship, these D2C products, and everything else. So yeah, I'm not quite the maximalist you are, but I appreciate advertising's role in the broader market and in empowering entrepreneurs, and exactly to your point, we're going to see that accelerate, not decelerate, in the coming years.

Eric Seufert:

One argument that I find disingenuous in the extreme, when people discuss this market, goes: advertising has always existed, it didn't need data. You could always just advertise on TV, on radio, in magazines. All this data collection is just a privacy violation; it's not enabling advertising. And it's like, yeah, that form of advertising always existed and continues to exist. But you know what didn't exist? D2C.

You couldn't have D2C without personalized advertising. It would not be possible, for the reasons you pointed out. It's too niche. You can't run a national TV ad campaign for a niche product; the economics won't work. But I can reach the individuals who would find it relevant. And by the way, when you reach those people, the click-through rates are still sub-five percent, and people look at that as an indictment.

No, that is not an indictment; that's showing you the natural reluctance, the friction, there is to advertising in the first place. If there were actual manipulation happening, those click-through rates would be 100%, right? If this person is deemed relevant and I were manipulating them into doing something they wouldn't otherwise do, the click-through rate would be 100%. The conversion rate would be 100%. It's sub-5%.

On a good day, it's four. For most products it could be sub-two, sub-one, and those can still be profitable campaigns. The thing is, this enabled new sectors of the economy, and that is growth. Are there bad aspects? Of course, but you don't throw the baby out with the bathwater. You identify the things you want to remedy and remove them from the workflow in a surgical way. You don't just trash everything. And people push back.

When I make these sweeping statements, they say, "Ah, that's a straw man. No one's actually looking to kill personalized advertising." Yes, they are. There is a bill that was just resuscitated called the Banning Personalized Advertising Act. And they did effectively ban it in the EU for the biggest platforms, the gatekeepers. So yes, there are people who want to ban personalized advertising.

David Barnard:

I'm the one who steered the conversation to D2C. But to be honest, for the proliferation of the app industry, and subscription apps specifically, Meta and Google deserve almost as much credit, if not more, for the variety of apps we see today and for so many profitable app businesses, for the same reason that they empowered D2C.

They allowed app developers to reach those audiences more effectively and more efficiently. What we see today as the app industry owes a debt of gratitude to Meta and Google, because they empowered it just like they empowered D2C. Well, you and I could talk about these things for hours, but I did want to get to the questions, since we did allow people to submit them for this conversation.

Eric Seufert:

Question one. Question one.

David Barnard:

Question one, an hour in. Given the dominance of a few paid UA channels, how risky is it for a FinTech subscription app, and I would say any subscription app, to have over 80% of its spend on Meta and Google? What practical diversification levers actually work in this vertical? So first, is it risky? And if so, how would you diversify?

Eric Seufert:

So I wrote a piece about this a couple years ago, and the point I made was that people feel compelled to diversify because they have this abstract notion that being totally concentrated in one or two channels is a bad thing. And yeah, there's risk there, but the reality is that diversifying adds a lot of overhead: doing data integrations, creating new creative formats, and accommodating the fact that each platform exposes ads in a different way.

You have to accommodate your measurement to that. So diversifying for the sake of diversifying is oftentimes a bad idea. And the question is: how much could I spend? A lot of times you hear this common refrain: "Well, the performance is great, but at really low spend." Okay, but that's actually not great. What I really care about is my spend-adjusted ROAS, not that the ROAS is high on this particular channel at low levels of spend. Could I have eliminated all the overhead of supporting that new channel, allocated that budget to Meta or Google instead, and seen the same level of ROAS? Because if I could have, I'm actually worse off.

So with a new channel, the ROAS doesn't just have to meet the ROAS of the other channels that could have absorbed that budget; it has to exceed it, because you're supporting a new channel. Diversifying for the sake of diversifying is often a mistake. You diversify when you've reached saturation on the existing channels, I think, or when you feel there's some interaction effect where that channel boosts the performance of other channels, and that oftentimes is the case, particularly with a more brand-oriented channel.

Oftentimes, that's the benefit of running on some other channel, particularly if it's more like brand oriented channel. So I think that's the way I like to think about it is I wrote a piece a couple weeks ago called optimization models and digital advertising. I talk about optimizing towards ROAS and optimizing towards ROAS adjusted spend, and they're very different things. And so if I'm optimizing towards ROAS, then I really want to keep spend as low as possible, spread across many different channels because I'll get the max ROAS per channel because the ROAS and the spend tend to move in obstructions.

But oftentimes that's not really what I'm doing. I'm optimizing toward maximizing spend with a ROAS constraint, and in that case what you want is what I call the waterfall method: max out the biggest channel until it hits my ROAS threshold, then move on to channel two, which has smaller potential spend. Max out channel two until it hits my ROAS threshold, then move on to channel three. That's the approach I always recommend companies take, because it minimizes overhead and complexity.
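
A minimal sketch of that waterfall logic; the channel names, the spend each can absorb at the target ROAS, and the budget are hypothetical:

```python
# Waterfall allocation: fill the biggest channel up to the spend it can absorb
# while staying at or above the ROAS threshold, then cascade to the next.
channels = [
    ("Meta",   500_000),  # (channel, max spend at target ROAS), hypothetical
    ("Google", 200_000),
    ("TikTok",  75_000),
]

def allocate(total_budget: float) -> dict[str, float]:
    allocation: dict[str, float] = {}
    remaining = total_budget
    for name, capacity_at_target_roas in channels:
        spend = min(remaining, capacity_at_target_roas)
        if spend <= 0:
            break  # budget exhausted; later channels never get opened
        allocation[name] = spend
        remaining -= spend
    return allocation

print(allocate(600_000))
# {'Meta': 500000, 'Google': 100000}; TikTok is never opened, so no overhead
```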

David Barnard:

So what do you see as the risk? Or do you think it's actually not risky, and the risk is just missed opportunity? How would you classify the risk of being so dependent on one or two channels, and how should people think about it?

Eric Seufert:

So yeah, it's missed opportunity. You could call it opportunity-cost risk, but what most people mean when they talk about the risk there is just that performance degrades. If I've got five channels and one of them bombs, okay, there's 20% of my spend at risk. But if I've got one channel and it absolutely tanks, which can happen, that's all my spend. Now, the issue here is that these things tend to be pretty correlated.

Why would performance bomb on a channel? Probably because a competitor came in and is outbidding you everywhere, and if they're outbidding you on Facebook, they're probably outbidding you on Google and Snap and TikTok and wherever else. So the risk is not necessarily per channel. I think a lot of people think about it that way because everyone's had the experience of, "Hey man, what happened to Facebook yesterday?"

Click-through rates got cut in half, or by two-thirds, for whatever reason, and it was just a blip. But a structural change to performance would probably be correlated across every channel, because it's either a change in consumer sentiment or, more commonly, a competitor came in and crowded you out.

David Barnard:

All right. What's the single biggest pitfall you see across mobile growth teams right now? Something that even sophisticated companies consistently overlook?

Eric Seufert:

The thing I see most commonly is a very chaotic approach to measurement, one lacking any coherency. That can materialize in a couple of different ways. I see people using competing tools, not really knowing which one to trust or how to interpret the output of one relative to the other. I see misalignment across the various stakeholders.

Oftentimes finance and UA, or UA and product, are not totally aligned on what good looks like, on what these metrics should look like for success. I see companies with a bunch of tools that aren't working in concert: they're just different data points, and no one knows how to interpret them as an ensemble; they look at the individual ones without knowing what they mean as a whole. I call it measurement disorganization, and it's the most common thing I see. The way to overcome it is not a satisfying answer, and it sucks as a process.

I don't do that much consulting anymore, but this was probably the most common project I was asked to come in on, and it's really, really challenging. It's stressful getting all the stakeholders together and coming up with a plan that satisfies everybody, some model for measurement. And when I say model, I don't mean a machine learning model or a regression model; I mean an operational model for how we're all aligned around what success looks like, where everyone's needs are met by the measurement apparatus. One thing companies tend to underappreciate is that your measurement system is essentially the heartbeat of the company.

Everything flows from it, and you need to be doing it correctly, in a way that's credible, but also in a way that everyone understands, appreciates, and can use for their different purposes. Getting a group of people together in a room and finding alignment on that is the solution, and that sucks. It's difficult, people-oriented work. It's not, "Oh, we'll come up with a new machine learning model," or, "We'll build this new dashboard."

It's, "Let's get a bunch of people together and understand their needs." The CFO has totally different needs than the UA team, and the UA team has totally different needs than the product team. Get everyone in a room, understand what they need to receive as output, and get everyone to agree on what good looks like.

David Barnard:

That's a great answer. I wouldn't have guessed you'd go to the people side of things, but we're just bags of meat making decisions, and that's a huge point of friction, challenge, and disagreement. One of the things we actually see at RevenueCat is that a marketing team will sometimes come to us specifically because the engineering team won't align with them on their priorities.

The engineering and product teams have their own priorities, and UA and marketing are told, "Hey, go figure it out." They're left on an island: they can't get engineering resources, and they can't get time with the product teams to push for those engineering resources. So yeah, we see prospects coming to RevenueCat trying to solve engineering problems because their engineering teams just won't give them the time of day.

Eric Seufert:

The canonical example in my mind, and I've experienced this and have Vietnam-style flashbacks of this very difficult, human-oriented problem, is a product team onboarding a different LTV model or LTV product because they don't trust the one the UA team is using. That's the kiss of death for productivity. You're going to spend the next two months arguing, and that relationship is over.

David Barnard:

Yeah. Man, it's a great point: go solve your people problems. Technology is not going to fix everything; you need to solve the people problems in concert with the tech problems and data problems and all the other problems. All right, next question. In subscription apps, and this was maybe the same person, especially in FinTech, but this probably applies to all apps:

You often have to make budget allocation or bid changes before key metrics like LTV, retention, or incrementality are fully baked, sometimes within days of launch. How do you balance speed versus accuracy in those early optimization decisions? And what frameworks do you recommend for making confident calls with incomplete data? That's a really good question.

Eric Seufert:

Yeah, that is a good question. So, a couple things. One: I wrote this piece a while back called It's Time to Retire the LTV Metric, and that's a bit of a clickbait headline; I don't really mean it literally. What I mean is that you often see teams trying to calculate a terminal LTV on the basis of 10 days of data or something. You only launch a product one time.

So you can just wait and extend the soft launch. But when you're talking about a campaign-level LTV, it's like, "Well, okay, I'm not going to get that quickly." And so, because ad spend and ROAS tend to be inversely correlated (as ad spend increases, ROAS goes down), what I like to do is prove out these frontiers. I'm going to spend 5K a day or whatever, and if I'm hitting 150 ROAS, great. Then I let those cohorts age, understand how they progress, what their day-20 ROAS is, what their day-30 ROAS is, and build more and more cohorts I can track over time. Then I push that ROAS frontier out.

So now I'm not trying to hit 150 on day three, or 200 on day three. Let me just see: these cohorts actually seem to be progressing to 150 by day 30 or so. Okay, that's great. Now I push: I change my bid accordingly and grow the budget. Then I track the cohorts further and see where they land at day 60. "Okay, that's 120, so I'm going to increase the budget more." And that'll decrease the ROAS.

And then what I really care about is where that lands, at 110 or something. Really what I'm trying to do is iteratively progress that frontier of my ROAS target, starting from a place of, okay, if I can't hit 150 ROAS at day seven with very low spend, I'm probably not going to be able to grow this, and it's back to the drawing board with whatever I'm adjusting. That's the right way to approach this, I think, especially in a launch phase.
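
A minimal sketch of that iterative frontier, with hypothetical targets and step sizes: the budget only steps up when an aged cohort clears the target, and otherwise holds:

```python
TARGET_ROAS = 1.5   # e.g., "150" at the chosen maturity day, hypothetical
BUDGET_STEP = 1.25  # grow daily spend 25% per proven-out cycle, hypothetical

def next_daily_budget(current_budget: float, cohort_roas_at_maturity: float) -> float:
    """Increase spend only when an aged cohort clears the target; otherwise
    hold (or go back to the drawing board on creative and bids)."""
    if cohort_roas_at_maturity >= TARGET_ROAS:
        return current_budget * BUDGET_STEP
    return current_budget

budget = 5_000.0
for day30_roas in (1.6, 1.55, 1.4):  # successive cohorts' day-30 ROAS
    budget = next_daily_budget(budget, day30_roas)
    print(f"day-30 ROAS {day30_roas:.2f} -> daily budget ${budget:,.0f}")
# 1.60 -> $6,250; 1.55 -> $7,812; 1.40 -> hold at $7,812
```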

If you're just talking about launch, about flighting new campaigns, really what you're talking about is the performance of the creative, and it's a question of when to kill a creative once it's obviously not a winner. That's oftentimes very quick: same day, maybe next day. This is getting no delivery; not a winner. Maybe it could perform at the average, but it's not going to grow my ad spend, so it's a loser and I kill it. The way I approach creative testing is trying to identify losers as quickly as possible. The winners take time to prove out, but the losers are pretty quick to prove out.

David Barnard:

Yeah, great answer. Next up: what's one opportunity for growth you think most companies in the app industry are missing right now, either because it's too early, too messy, or doesn't fit into standard user acquisition playbooks?

Eric Seufert:

The real answer is that the missed opportunity for growth is that your measurement doesn't support true growth. It's broken, or it's flimsy, so you're really just replicating what you've done in the past. If you don't believe your measurement can adapt to new channels, new sources, new ad types, you're just going to do what you've been doing, and by definition you won't grow. To break out of that, you need to build truly robust, incrementality-focused measurement. And that's the challenge, right? It's a challenge for the human reasons we talked about, and it's a technical challenge too.

So that's the real answer. Assuming that's in place, you have the world as your billboard: I can advertise in influencer channels, I can do out-of-home, digital out-of-home, CTV, podcasts, all these things, because they're supported by the measurement. But oftentimes that lack of growth, that stasis, is a function of, "I don't trust the measurement to capably run attribution on these new channels or interpret the changes across the portfolio in a reliable way, and for that reason I'm going to keep everything the same." That's often what I see: people paralyzed as a function of their measurement not being robust.

David Barnard:

The fundamental problem in marketing: half my marketing is working, I just don't know which half. That will be the perpetual problem to be solved in marketing.

Eric Seufert:

You can say that, this is your podcast, but I've actually banned that phrase from my own podcast. I feel like it's too reductive and people misinterpret it. I love that phrase; I hate the way it gets used. The way people use it is to say: this measurement is all smoke and mirrors, no one really knows, we need to be telling a story, we need to be connecting emotionally, the only ROAS curve I care about is the curvature of a satisfied customer's smile, that's marketing.

And this measurement stuff is just a bunch of hocus pocus. That's not what he was saying. John Wanamaker was a pioneer of advertising; he was one of the first people to take out a full-page newspaper ad. He understood measurement. What that statement is saying, and it's not a bad thing, is: I understand the outcome given the inputs. I don't know the mechanics of it, and I don't necessarily need to.

David Barnard:

No, no, you're 100% right, and I love that you brought that up, because it makes such a good point: there will always be some amount of uncertainty in marketing. The good marketers embrace that uncertainty, not in the way you described, of throwing up their hands and saying, "Oh, we're just going to do brand stories or whatever."

They embrace that uncertainty by asking: how much can I be certain of, and which certainties can I build a process around? That's what good marketing is. It's not throwing your hands up; it's figuring out how to be more and more certain within constraints you understand, knowing you can't get to 100% certainty. The people who think they can be 100% certain are just as deluded as the people who think they shouldn't measure at all.

Eric Seufert:

So there's this concept of Wittgenstein's ruler, right? I've talked about this a bunch, and I wrote a piece about it years ago called Wittgenstein's Ruler and Ads Measurement. The idea of Wittgenstein's ruler is: I take a ruler to measure a table, but if I don't trust the accuracy of the ruler, the table is also measuring the ruler. That measurement apparatus isn't telling me anything about the table.

In fact, the thing I'm measuring might tell me more about the measurement tool. And I say that because, with that Wanamaker quote, one reaction is to throw your hands up: it's not possible, so let's go do brand campaigns and go to Cannes and sip rosé. The other reaction is, "Oh, good point. I should only do marketing on channels that are deterministically attributable." And then you fool yourself into thinking that anything is deterministically attributable; it's not.

A lot of times I'll meet teams that say, "Hey, we cracked the code. We found a loophole, we're running all these campaigns, and we're getting deterministic attribution using all these hacks and workarounds." I'm like, "You're kidding yourself. You're telling me more about your inability to understand what's actually happening than about how precise your measurement is." And that's a very bad signal.

When you get teams that are building a consumer product and looking for investment, and it's, "Well, we've got a secret sauce for advertising because we figured out how to hack these signals together to get deterministic attribution," I'm like, "No, you've just convinced yourself of that. You're actually getting noise." And you're going to waste a lot of money. That's probably no better than a more holistic, probabilistic model.

David Barnard:

That question went places I didn't think it would go, but I'm glad it did because that was really fun. Last question and we'll wrap up. What's one area you see growth teams pouring too much time and budget into that you believe will matter less in the next two years?

Eric Seufert:

GenAI creative tools. I think you could use off-the-shelf stuff and get almost all the value you're going to get. And again, I don't think there's that much value there, period, unless you're doing the more fundamental stuff we talked about at the outset, the concepting, and there aren't really any off-the-shelf tools for that.

You have to build a system yourself. Maybe there will be off-the-shelf tools at some point; that's probably a good startup idea. But just cranking out volume, going from 50 variants a week to 200 or 2,000 when they're all the same concept? There's no value add in the last 1,800. You know what I mean? That's one thing.

David Barnard:

It's testing 50 shades of blue for the incremental lift. There are so many other problems you could be solving instead of testing that 50th shade of blue.

Eric Seufert:

Yeah, exactly. And then the other thing is: why are you investing any more time in AdAttributionKit? Do you think that's a genuine source of value or a competitive advantage? Don't burn resources on that.

David Barnard:

It is so baffling to me, and I don't want to dig up this can of worms because we could talk about it for another two hours, but it is so baffling that Apple went through everything they went through, the negative press, the tumult in the industry, the loss of App Store revenue that ATT caused, and then didn't actually build something useful for attribution, and then just let everybody fingerprint anyway, which is almost more insidious. I get it.

I think the one good thing that came out of ATT is that it did at least break a lot of the data broker workflows, where you could deterministically find one person and track them everywhere, even when they're on their cellphone and the IP address is different. Anyway, I don't want to dig up this whole can of worms, but it's such a mess, and it baffles me to no end that Apple didn't take the opportunity to build an attribution tool that actually mattered, and then just let everybody fingerprint. It's so baffling.

Eric Seufert:

I have a couple theories here. One of them is that their hands are tied from a privacy perspective, if you want to honor the religious zeal they have toward privacy, and I do think they're genuine about that. My interpretation is that ATT was a competitive maneuver; I don't think it truly had anything to do with privacy. But once you invoke privacy as the stalking horse for that competitive maneuver, you've got to adhere to the privacy principles of the company, which are genuine. And when you try to build attribution that way, it's just not functional.

Take their commitment to differential privacy. That just breaks the data set. Even if you don't implement it in a way that truly adheres to the principle, the principle is there, and even a superficial implementation still breaks everything. Then you introduce the crowd anonymity that they did, which is kind of a reverse form of differential privacy. And then you had the scheduling stuff.
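
For intuition on why differential privacy is hard to square with granular attribution, here is a minimal sketch (not Apple's actual mechanism) of Laplace-noised conversion counts; small campaign cells drown in the noise while large aggregates survive:

```python
import numpy as np

rng = np.random.default_rng(42)

def dp_count(true_count: int, epsilon: float = 0.5, sensitivity: float = 1.0) -> float:
    """Classic Laplace mechanism: noise scale = sensitivity / epsilon."""
    return true_count + rng.laplace(scale=sensitivity / epsilon)

# Large aggregates survive; small campaign cells get swamped by the noise.
for true in (10_000, 100, 3):
    noisy = dp_count(true)
    print(f"true={true:>6}  noisy={noisy:>10.1f}  relative error={abs(noisy - true) / true:.1%}")
```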

They couldn't implement this to achieve what they were trying to achieve, which was competitive disruption, without applying their culture of privacy zeal. They just couldn't, because they portrayed it as a privacy maneuver, and therefore they couldn't build SKAdNetwork, now AdAttributionKit, in a way that's actually functional. When you apply all these privacy protections, it breaks.

David Barnard:

Yeah, I'm resisting the urge to dive back in and talk for another 30 minutes on this topic.

Eric Seufert:

Part two.

David Barnard:

Yeah, but man, Eric, this was so much fun. I really enjoyed the conversation. There are so many things for people to take away, and I feel like we got to a lot of practical stuff. I also think it's really important to think at a higher, almost philosophical level about a lot of these things, because that plays out in better decision-making: you have a better intuitive sense of how all of this works together as a market, as an economy. This was such a fun chat, so thank you for joining me today.

Eric Seufert:

Cheers, man. Always a pleasure. Hope to see you in Austin soon.

David Barnard:

Thanks so much for listening. If you have a minute, please leave a review in your favorite podcast player. You can also stop by at chat.subclub.com to join our private community.