On the podcast: the experiments behind Mojo's 60% lift in ARPU, why a winning paywall in Japan completely failed in the US, and why not relying on day one for most of your revenue is actually a strength.
This conversation is shorter than usual and will be featured in RevenueCat’s State of Subscription Apps report. Each episode in this series will explore one crucial topic and share actionable insights from top subscription app operators.
Top Takeaways:
🌍Show free users a paywall every week after onboarding
Triggering a paywall on app open once per week for free users drove 15% of new revenue with no backlash. The more generous your free tier, the more users tolerate the ask.
💪A winning paywall in one region can completely fail in another
A long, detail-rich paywall lifted revenue 20% in Japan but flopped in the US, where cleaner designs with punchy copy outperformed. Always retest winners in each market before rolling out globally.
⚡Experiment velocity is a huge unlock for revenue optimization
Running parallel paywall tests across geo segments on a weekly cadence compounds gains fast. More iterations mean shorter feedback loops, faster learning, and fewer months leaving revenue on the table.
About Michal Parizek
🚀 Senior Growth Product Manager at Mojo, a mobile-first content creation platform that empowers businesses and creators to produce professional, animated social media content in minutes.
👋 LinkedIn
Episode Highlights:
[0:00] Introduction to Michal Parizek, Senior Growth Product Manager at Mojo
[1:02] How Mojo achieved a 60% increase in average revenue per user
[2:16] The impact of paywall design experiments on Mojo's revenue
[3:31] Why the same paywall design worked in Japan but failed in the US
[4:45] Mojo’s global pricing strategies and the role of regional differences
[5:45] How Mojo optimized early revenue with the 7-day ARPU metric
[7:02] The role of customer feedback in shaping Mojo’s growth strategies
[8:15] Testing different pricing models: How Mojo decided on the $79 price point
[9:30] Why focusing on new revenue, rather than renewals, was crucial for Mojo’s growth
[10:45] The benefits of running paywall campaigns for existing users
[12:02] How Mojo balances customer experience with aggressive monetization strategies
[13:15] The importance of experiment velocity and fast iteration in scaling Mojo
[14:34] Surprising results: Mojo’s success with paywall strategies for existing users
[15:41] Closing thoughts on scaling an app with data-driven experimentation and customer focus
David Barnard:
Welcome to the Sub Club Podcast, a show dedicated to the best practices for building and growing app businesses. We sit down with the entrepreneurs, investors, and builders behind the most successful apps in the world to learn from their successes and failures. Sub Club is brought to you by RevenueCat, thousands of the world's best apps trust RevenueCat to power in-app purchases, manage customers, and grow revenue across iOS, Android, and the web. You can learn more at revenuecat.com. Let's get into the show.
Hello. I'm your host, David Barnard. Today's conversation is shorter than usual and will be featured in RevenueCat's State of Subscription Apps report. Each episode in this series will explore one crucial topic and share actionable insights from top subscription app operators. With me today, Michal Parizek, senior growth product manager at Mojo. On the podcast, I talk with Michal about the experiments behind Mojo's 60% lift in average revenue per user, why a winning paywall in Japan completely failed in the US, and why not relying on day one for most of your revenue is actually a strength. Hey Michal, thanks so much for joining me on the podcast today.
Michal Parizek:
Hey, thanks for having me.
David Barnard:
You wrote a blog post on the RevenueCat blog a while back about how Mojo increased average revenue per user 60% in five months, and I've been wanting to have you on since reading it, because there are so many little things in there I think people could take away, and not everybody's going to read the blog post. And it's fun to dig deeper into the things behind what ended up in the post, so let's start there. What are the things that led to that 60% increase in average revenue per user?
Michal Parizek:
That was actually a bunch of experiments we did, mainly on paywall and pricing, and there were three experiments in particular which stood out and brought that pretty good lift in ARPU. One of them was making the yearly plan the default. So initially on the paywall we showed the yearly and monthly plans next to each other on the very first paywall screen. Then we tried putting the monthly plan under a "view all plans" button so it wasn't really visible at first glance, and that really helped drive our yearly plan adoption a lot, I think by 15 or 20 percentage points. So that helped a lot in increasing the new revenue, obviously.
David Barnard:
Anything else that was part of that, that you think really drove that success? A lot of people experiment with this, but for some people it does seem to work where the yearly plan is listed with a monthly amount, and then the monthly plan is listed at a much higher amount. Did y'all test that at all, having the monthly plan maybe way higher to where it makes the yearly plan look even better? What won in that?
Michal Parizek:
Yeah, yeah, we experimented with that as well. What we tried, and what succeeded, was actually monthly-price anchoring. We essentially added a small line next to our yearly plan which says the price is equivalent to, I don't know, $10 a month. And that worked very well in comparison to the monthly plan, whose price was obviously higher, let's say $25. So it showed that the yearly plan is such a good deal when you compare the costs. That worked very well, and interestingly, it worked particularly well in Latin America, in Brazil and Mexico.
David Barnard:
That's super [inaudible 00:03:46]. Why do you think it performed better in Latin American countries than the US?
Michal Parizek:
Yeah, in the US it worked as well, but the increase was, let's say, a little bit less. It was a 10% lift in new revenue, and in Latin America it was way higher, definitely double-digit, something like 30 to 40 percent. That's just my hypothesis, but I think it worked better there because the purchasing power of those markets is typically lower than in the US. So people tend to care more about the price, and about the costs of the apps and subscriptions they have. And if they see that this doesn't cost much when they look at it from, let's say, a monthly perspective, what they pay per month, it just persuaded them more and triggered the conversion behavior there. So, yeah, we were super happy to have that anchoring on the paywall.
David Barnard:
Had you already experimented with lower prices in those countries, or was it a similar equivalent price to US prices?
Michal Parizek:
We did, actually. That was another key experiment which drove the 60% increase in ARPU: we did a lot of price testing as well. Some of those tests included Latin American countries, typically Brazil and Mexico, which were the top two geos in that region. So we actually lowered the price; we didn't use the equivalent of the US prices. That equivalent pricing is something the App Store usually suggests: they basically just apply the exchange rate and calculate it. We originally had that, I think, in the past, but then we also tested a bit lower prices and it turned out that was actually better.
David Barnard:
What was a key metric you were tracking during this? In setting up all these experiments, you're testing price, you're testing paywall layout, you're testing placements, do you have the monthly and annual? You're testing all these things. What was the unifying metric you were looking at to determine the success of those experiments?
Michal Parizek:
Yeah, so our umbrella metric for all those monetization experiments was the average revenue per user in the first seven days, this ARPU 7D. We intentionally used it because, first of all, we saw more potential in optimizing the new revenue rather than renewals, and because we wanted to shorten the payback period. We wanted to optimize the new revenue also to support the user acquisition loop, to allow higher spend, et cetera.
So we wanted to drive new revenue, because we saw that we could actually scale that more and compound it with user acquisition to get more revenue in total. That's why we chose the early average revenue per user, and specifically seven days, because at that time we had a three-day trial, so we wanted the window to be long enough to cover those three or four days. And I think we picked seven days partly because RevenueCat shows seven-day ARPU as one of the default charts, so we chose ARPU 7D partly because of that. And it worked pretty well. So basically we optimized for the new revenue, and as you probably know, and everyone else in the app business, lots of new revenue comes from the very early days, the very first few days. So it was a good metric for tracking new revenue.
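For readers who want to reproduce this, here is a minimal sketch of the seven-day ARPU metric described above. The `arpu_7d` helper and all the data are hypothetical; the episode only names the metric, so this exact implementation is an assumption.

```python
from datetime import datetime, timedelta

def arpu_7d(installs, purchases):
    """Average revenue per user in the first 7 days after install.

    installs: dict of user_id -> install datetime (the cohort)
    purchases: list of (user_id, purchase datetime, amount) tuples
    """
    if not installs:
        return 0.0
    window = timedelta(days=7)
    # Only count revenue that lands inside each user's 7-day window.
    revenue = sum(
        amount
        for user_id, ts, amount in purchases
        if user_id in installs and installs[user_id] <= ts < installs[user_id] + window
    )
    # Divide by the whole cohort, not just payers.
    return revenue / len(installs)

# Hypothetical cohort: two installs, one converts within the window.
cohort = {
    "u1": datetime(2024, 1, 1),
    "u2": datetime(2024, 1, 1),
}
sales = [
    ("u1", datetime(2024, 1, 3), 79.0),  # inside the 7-day window
    ("u1", datetime(2024, 3, 1), 79.0),  # renewal, excluded from ARPU 7D
]
print(arpu_7d(cohort, sales))  # 39.5
```

Note that the denominator is all new users, not just subscribers, which is what makes the metric comparable across experiment variants with different conversion rates.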
David Barnard:
Did you look back on some of those experiments and see the impact on retention? So did you knowingly sacrifice some long-term revenue for that quick return on ad spend?
Michal Parizek:
We typically look at retention, or at least something like a proxy for retention: we use the seven-day cancellation rate as a proxy for what the retention rate or renewal rate could look like. And we typically looked at this proxy when we did price testing, because I've seen data showing that when you test different prices, particularly higher prices, you usually see higher cancellation rates and lower renewal rates. So it's quite reasonable. And I remember a couple of tests where we tested a different price, mostly a higher price, which actually turned out to be the winner on new revenue. But when we modeled having the new price for a year, calculating more long-term revenue, we saw that we would actually sacrifice revenue in the long term, mainly because the renewal rates just dropped: the proxy, the seven-day cancellation rate, was way higher than for the baseline price. So we decided not to do that and kept the original price, sacrificing a bit of new revenue up front, but, I think, gaining more revenue in the long term.
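The trade-off Michal describes can be sketched with a toy model that uses the seven-day cancellation rate as the renewal proxy. The `modeled_first_year_revenue` helper and every number below are hypothetical, not Mojo's actual figures.

```python
def modeled_first_year_revenue(price, payers, cancel_rate_7d):
    """Toy revenue model for a yearly plan over a year-plus horizon.

    Treats (1 - seven-day cancellation rate) as a proxy for the share
    of payers who renew once, as described in the episode.
    """
    new_revenue = payers * price                               # day-one revenue
    renewal_revenue = payers * (1 - cancel_rate_7d) * price    # proxied renewals
    return new_revenue + renewal_revenue

# Hypothetical test: the higher price wins on new revenue alone
# (99 * 90 = 8910 vs. 79 * 100 = 7900)...
baseline = modeled_first_year_revenue(price=79, payers=100, cancel_rate_7d=0.25)
higher = modeled_first_year_revenue(price=99, payers=90, cancel_rate_7d=0.5)

# ...but loses once the much higher cancellation proxy is modeled in.
print(baseline, higher)  # 13825.0 13365.0
```

The point of the sketch is the sign flip: a price that wins the new-revenue comparison can still lose once the cancellation proxy is folded into a longer horizon, which is exactly why Mojo kept the baseline price.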
David Barnard:
One of the things I talked about in another one of these State of Subscription Apps podcasts was how a lot of times experiment results don't stack. So you get a 10% win here, and a 20% win there, and a 15% win here, and then you look at it at the end and you actually haven't raised average revenue per user by the sum total of all those experiments. You're getting 10% here but losing 5% there, and getting 20% here but losing 10% there. What do you feel like was the key to actually stacking those successes? Were you looking back as you tested each of those to see if it had a negative impact on some of your previous tests, to be able to generate such a large increase in average revenue per user?
Michal Parizek:
We essentially executed a bunch of paywall and price tests, and whenever we saw a positive result, a statistically significant winner, we usually retested it in other markets as well. So let's say we tested in the US or in Europe; if it worked there, then we retested in our other key markets to make sure it would actually work there too. And it happened a couple of times that we only rolled something out in some of the markets and not, let's say, all over the world, just because we hadn't really seen a positive impact there, and sometimes even saw a negative one.
So we made sure that a particular change was actually bringing additional incremental revenue in all those key segments, typically split by geo, and if we hadn't seen that, then we didn't roll it out. And at a high level we monitored the seven-day ARPU, typically in the RevenueCat charts, to make sure that for the new cohorts of users we actually saw that lift in ARPU. Of course, it's sometimes hard to isolate it from UI changes and other externalities, but we did our best to see whether, after rolling out, we were actually seeing it.
And luckily we did: we implemented changes and saw the ARPU actually going up, and we were happy to see that and kept the process going.
David Barnard:
Were there any surprising results, like something that worked incredibly well in Brazil but was terrible in the US, or something amazing in Europe that failed in Latin America?
Michal Parizek:
Maybe one surprising thing was something we tested particularly in Asia, actually in Japan. It was a long scrolling paywall with a lot of information. There was lots of social proof and reviews, and a clear comparison between the free tier and the pro tier, and that design worked incredibly well in Japan, driving, I think, a 20% lift in new revenue. But the same design failed in the US. In the US, an easier-to-read, cleaner design with a slider, videos in the background, and just very punchy messages worked much better than the very in-depth, very descriptive design for Asia.
David Barnard:
Yeah, that's fascinating. I did want to talk now about placements. So of course, as with most subscription apps these days, Mojo does show a paywall on onboarding, but I was surprised to see the stat that only 50% of Mojo's payers convert on that first day, and I say only 50% because in the RevenueCat data we see more like 80% of payers across the entire subscription industry happen on day one. So what else have you done to drive those conversions after that initial onboarding?
Michal Parizek:
Yeah, that was actually the number one thing which surprised me the most when I joined Mojo. It was my first job at a mobile-first business, and I was surprised how much revenue actually comes from the very first day. But then I learned it's very normal; most apps do 80%. And I think that number, 50%, is actually a good signal: if an app does that, it's a very good signal that the app can drive good revenue from existing users too. It likely has good retention and, essentially, a lot of existing users, and that sounds to me like a really healthy behavior and signal. There are, I think, a couple of reasons why Mojo has 50%. One is that the free tier is actually pretty good, also compared to some competitors.
You can actually do quite a lot with it. So it's not that restricted, like maybe some other apps. There are lots of features and lots of content available in the free tier. So that's one thing, the, let's say, generosity of the free tier, and it's intentional. The other aspect is that we also ran paywall campaigns for existing users, which I think not a lot of apps, maybe not most apps, actually do. And I think it's one of the underrated things which most apps should do.
Essentially it's running a paywall campaign: you're triggering the paywall on certain behavior for existing users, either on app open, like we did in Mojo, or after some key behavior event, when the user does something key in your app, like, I don't know, sharing something, or whatever the key engagement event is. That paywall campaign actually drove, I think, about 15% of new revenue from the existing user base, which is quite a lot, and it's super simple to set up. And we didn't really get any negative reviews on it from users. So it's a no-brainer to have that.
David Barnard:
Yeah, that's surprising. So free users, they open the app and immediately see a paywall.
Michal Parizek:
Yeah. Yeah, they hit a paywall. And it actually worked for quite a lot of users; it drove a lot of the revenue.
David Barnard:
Did you check churn and other things? I guess that's the thing, there's always a trade-off, so maybe there was a little bit of churn, but then the additional revenue made up for that. But were there any other things you tracked? You said there were no negative reviews, which is shocking to me. People seem to go out of their way to complain about things like that in App Store reviews. But support, retention, any downsides to this paywall on app open?
Michal Parizek:
It was on app open, but I think the frequency was set to one paywall per week. So even if you open the app every day, and multiple times a day, you essentially have that paywall triggered [inaudible 00:15:36] only once a week. That frequency was pretty low, which I also think played some role in why users weren't complaining. But I specifically asked a colleague from support, and he didn't really mention users complaining. I once did a quite thorough analysis of all the reviews and tried to find what users complain about, not really for that particular reason, I was more interested in what users liked and didn't like, and I haven't really seen any specific negatives about the paywall being displayed too often, et cetera. So, yeah, it was actually a good thing for us.
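The once-a-week cap Michal describes is simple to implement client-side. This is a generic sketch, not Mojo's actual code; the `should_show_paywall` helper and the timestamps are hypothetical.

```python
import time

WEEK_SECONDS = 7 * 24 * 60 * 60

def should_show_paywall(last_shown_at, now=None, min_interval=WEEK_SECONDS):
    """Return True if the app-open paywall may be shown again.

    last_shown_at: epoch seconds of the last display, or None if never shown.
    """
    now = time.time() if now is None else now
    if last_shown_at is None:
        return True  # never shown before: eligible immediately
    # Eligible only once the frequency-cap interval has fully elapsed.
    return now - last_shown_at >= min_interval

# Hypothetical timestamps: last shown 2 days ago vs. 8 days ago.
now = 1_700_000_000
print(should_show_paywall(now - 2 * 24 * 3600, now))  # False
print(should_show_paywall(now - 8 * 24 * 3600, now))  # True
```

In a real app, `last_shown_at` would be persisted per user (for example in local storage or the user's profile) and updated each time the paywall is displayed.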
David Barnard:
It's surprising: when people are getting something for free, they sometimes tolerate more than we assume. I know a lot of games and even some regular apps have full-screen interstitial ads. Duolingo's actually a great example of this. They do it very tastefully, but they have a full-screen, unskippable takeover ad after a lesson. So I think people do intuitively get the exchange of value there: oh, I'm using this completely for free. And I think you can get away with it more if your freemium tier is very generous, like you were saying. Maybe that's the key: if your freemium tier sucks, one, you're probably not going to get much retention anyway, so not many people are going to see the app-open paywalls; and two, when it's generous, users feel like they're pulling one over on you, getting something for nothing. They feel like they're getting a lot of value, so when they see that paywall, it's probably less offensive in those situations. But fascinating how well that worked.
Michal Parizek:
Hundred percent, yeah. I really encourage everyone to just try that. Test it, A/B test it, measure it, and I think most of the time you will be surprised that users will not react the way you fear they will. So I would definitely recommend just trying things out.
David Barnard:
The last thing I wanted to touch on was experiment velocity. Y'all did a lot of tests. How do you keep up with that? How do you isolate variables? How do you think about testing velocity?
Michal Parizek:
Oh, yeah, it was something super important. I actually think it's the number one growth asset for everyone who wants to optimize paywalls and pricing. Even if you have good research and good prioritization, velocity is, I think, maybe even more important, because the more shots you take, the higher the chance you'll win, and there's a shorter feedback loop, a better learning cycle. So it was key. And yeah, I did a lot of experiments, and maybe the number one thing which allowed me to do that was having a third-party paywall platform. It allowed me to be autonomous and iterate fast, and really shortened the cycle of coming up with an experiment, developing it, and launching it from months or weeks to essentially a day. It was so important that without it I definitely wouldn't have been able to get that amount of experiments live in that short a time.
David Barnard:
That makes a ton of sense. Mojo is a pretty big app, and y'all were doing a ton of user acquisition, so you did have a lot of users coming in to run the tests on. But did you have a particular testing velocity? Would you be launching on a weekly cadence, or even every three days, and then waiting for the data to make a final decision while you'd already kicked off another test?
Michal Parizek:
It was more a weekly or bi-weekly cadence. But what's also important to say is how we ran experiments. We typically broke down the audience, the new users, into three buckets according to geo. There was a US, Australia, Canada, English-speaking bucket, then there were the Europeans, and then there was Latin America. And we typically had a stream of tests for each specific geo bucket, and we ran tests in all three of them in parallel. Each test usually lasted one or two weeks, because in those three key geo segments we were able to get statistically significant results within one or two weeks for each bucket. So we were able to conduct experiments there.
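The "statistically significant winner" check for a paywall conversion test can be sketched as a standard two-proportion z-test. The episode doesn't specify which test Mojo used, and the counts below are hypothetical.

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference of two conversion rates.

    conv_a/conv_b: number of converters; n_a/n_b: users per variant.
    Returns (z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF: Phi(x) = 0.5*(1+erf(x/sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical weekly geo bucket: control converts 4%, variant 5%.
z, p = two_proportion_z_test(conv_a=400, n_a=10_000, conv_b=500, n_b=10_000)
print(round(z, 2), p < 0.05)  # 3.41 True
```

With roughly 10,000 new users per bucket per week, a one-percentage-point lift clears the usual 0.05 threshold comfortably, which is consistent with getting significant reads in one to two weeks per geo segment.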
David Barnard:
I think a great way to summarize this whole episode is: you should be testing a lot, and if you're not, you're leaving revenue on the table. It's just such a clear example that you were able to increase average revenue per user by 60% in five months. That is such a huge win for the company's ability to acquire new users and then, over the long haul, build that subscriber base over time. Such a fascinating lesson, and thanks for sharing your insights today.
Michal Parizek:
Thanks. Thanks again for having me, David. It was really a pleasure to talk with you.
David Barnard:
Anything else you wanted to share as we wrap up?
Michal Parizek:
I encourage folks, if you want to learn more about paywall testing and monetization, to follow me on LinkedIn. I tend to share some advice and experience there, so hopefully you can learn something more.
David Barnard:
Awesome. Well, we'll put a link to that in the show notes. Thank you so much, Michal, for joining me today. It was really fun.
Michal Parizek:
Thanks a lot, David.
David Barnard:
Thanks so much for listening. If you have a minute, please leave a review in your favorite podcast player. You can also stop by chat.subclub.com to join our private community.

