Welcome back to my weekly newsletter. This is going to be a series of blogs covering how we came up with the idea for Calm Sleep and scaled it to over a million users in less than a year, with less than $1,000 in total investment. 🌱
I’m Akshay Pruthi, an entrepreneur who loves to build products from the ground up. Over the past 6 years, I have built multiple products from scratch and scaled them to millions of users. This is my attempt to share our most important lessons from building one of those apps: Calm Sleep. 🤗
Note - Calm Sleep is the side hustle that I started with Ankur Warikoo, Anurag Dalia, and a team of 3.
Every week I will publish an article about a different problem we faced while building Calm Sleep. Last week, we discussed how we approached monetization.
This week, I am going to discuss the experiments we did to increase the paid subscribers.
In the last article, I listed multiple experiments that we planned. Before jumping right into it, I would like to take the opportunity to talk about how to build a robust culture of experimentation across all functions.
Often, we find ourselves lost deep within the jungle of features that we could introduce into the product, and a lot of our time is wasted proving that a feature is working. To avoid this, I’ve formulated a simple framework for building a culture of experimentation and improving your product by running multiple experiments.
🔧 Build a Hypothesis
I’ve often found myself getting excited about a new idea for a feature, thinking: why wouldn’t a user want this? But over time, I’ve learned that this may not always be best for your product.
Building a strong hypothesis is a good starting point to convert an idea to a meaningful feature.
A hypothesis is a statement made with limited knowledge about a given situation and requires some validation to know if it is true or false. These would ideally help your team explore and find the best solution to a given problem. An example of this would be - changing the copy on a payment pop-up will improve conversion rates.
Hypotheses are often hunch-based, so it’s very important to support these hunches with data.
What should your hypothesis be?
What shouldn’t your hypothesis be?
A good example of a hypothesis is:
We believe that a 30% increase in completed sign-ups will be achieved, because a user who is about to drop off during sign-up will change their mind and continue into the app, if they are offered multiple sign-up options on the sign-up screen.
🧪 Run Lightweight Experiments
At this stage, you want your hypothesis to convert into meaningful actions, or experiments, as we call them. Choose experiments thoughtfully so that they don’t call for entirely new features, but maybe involve a few lines of code that could help you validate that the feature could potentially solve the problem at hand.
At Calm Sleep, we collaborated on multiple experiments that we wanted to test. The guidelines were that an experiment
Should be built on a hypothesis
Should have a clear answer for - ‘What will you achieve after running this experiment?’
Shouldn’t take more than a day to code
For example, we wanted to test if our users on Calm Sleep would like to talk about how to sleep better with other users. We hypothesized that people always want to feel that they are not alone in this and would come forward to share tips with a community.
So instead of launching an entirely new feature, we decided to launch an invite-only banner to get a sense of how this concept is perceived by users.
It took us a few hours to code and release this. When we launched the banner, our invite list filled up fully within 4 hours! This was quick validation for whether we should even consider launching this feature.
🔑 Define Success
Experiments can inform you, but also create false confidence based on misinterpreted results. There is convenience in being biased towards results when you see an uptick in numbers. But are these the numbers you should be tracking?
For instance, when the number of installs goes up, your daily active count automatically increases. But is that the right way to see your company's growth? Maybe what you need to observe is the DAU per new install ratio and see whether that increases with time.
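As a minimal sketch of tracking that ratio (the numbers below are hypothetical, not Calm Sleep’s actual data):

```python
# Sketch: track DAU per new install instead of raw DAU.
# All numbers here are hypothetical, for illustration only.
daily_stats = [
    {"day": "Mon", "dau": 5000, "new_installs": 1000},
    {"day": "Tue", "dau": 6200, "new_installs": 1500},
    {"day": "Wed", "dau": 6900, "new_installs": 1200},
]

# A rising ratio suggests growth beyond what new installs alone explain.
ratios = {s["day"]: s["dau"] / s["new_installs"] for s in daily_stats}
for day, ratio in ratios.items():
    print(f"{day}: DAU per new install = {ratio:.2f}")
```

If the ratio climbs over time, activity is growing faster than installs; if it falls while installs spike, growth may be shallower than the raw DAU chart suggests.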
So it’s important to define how you measure success with each experiment you do.
Like in the community sign-up example, our success metric was: how many people signed up for early access, and how quickly? That gave us a hint about how eager users were to join the program.
Now, let’s jump to the experiments we tried out.
We believe that an increase in click-through rate (CTR) of up to 10% will be achieved if a user who is not motivated enough to tap the current banner sees different copy that evokes emotion, prompting them to tap and eventually purchase.
Change the copy. Evoke emotions. Give confidence.
Old banner: “Loving Calm Sleep? Support us by sponsoring us a meal” (2,577+ supporters). Call to action: “Support”
New banner: “If you slept better, then say thank you to our sleep experts. 2,577+ people did so yesterday.” Call to action: “Say thanks”
We will consider this a successful experiment if we can achieve a click-through rate of 10% and lift successful payments by another 2%.
We observed that the CTR for the banner with the ‘Say Thanks’ button nearly doubled with the new copy, from 5.2% to 10.4%, but it did not improve final payment conversion. After tapping the ‘Say Thanks’ button, most users dropped off without selecting a plan. So, while changing the copy got us to the target CTR, it could not improve conversion.
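A quick way to sanity-check a CTR lift like this is a two-proportion z-test. A minimal sketch, assuming hypothetical impression counts (the article doesn’t report them):

```python
import math

# Sketch: two-proportion z-test for a CTR lift (5.2% -> 10.4%).
# The impression counts below are assumptions, not real data.
def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = two_proportion_z(clicks_a=520, n_a=10_000, clicks_b=1_040, n_b=10_000)
# |z| > 1.96 corresponds to p < 0.05 for a two-sided test
print(f"z = {z:.2f}, significant: {abs(z) > 1.96}")
```

With samples of this size, a doubling of CTR is far outside noise; with only a few hundred impressions per variant, it might not be.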
We realized we weren’t aware of users’ intent to ever pay, so we needed better insight into what was stopping them from paying. So we launched a feedback pop-up asking users what they would be comfortable with:
Right now, the Calm Sleep app is 100% free. If we had to sustain ourselves financially and needed your help, how could you help us?
Will pay later
Don’t intend to ever pay
Need some extra features for me to pay
~32% of users who responded to the survey indicated that they would not mind paying at some point in their life.
We considered this our ray of hope and decided to work on more experiments to drive value for our users. So, we decided to dig a bit deeper into this 🤪
Below is a matrix of canceled subscription behavior patterns across new and old users for different pricing models.
Surprisingly, we saw that the cancel rate was higher among old users as compared to new users. 😯
And the lowest payment option had the lowest cancel rate, which was quite obvious.
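A sketch of how such a cancel-rate matrix could be computed; the subscription records below are made up for illustration:

```python
from collections import defaultdict

# Sketch: cancel-rate matrix across user cohorts (new/old) and price
# points. These subscription records are hypothetical examples.
subs = [
    # (cohort, price, cancelled)
    ("new", 0.99, False), ("new", 0.99, True), ("old", 0.99, True),
    ("new", 1.99, True),  ("old", 1.99, True), ("old", 1.99, False),
    ("new", 2.99, False), ("old", 2.99, True), ("old", 2.99, True),
]

totals = defaultdict(int)
cancels = defaultdict(int)
for cohort, price, cancelled in subs:
    totals[(cohort, price)] += 1
    cancels[(cohort, price)] += cancelled

cancel_rate = {k: cancels[k] / totals[k] for k in totals}
for (cohort, price), rate in sorted(cancel_rate.items()):
    print(f"{cohort:>3} @ ${price}: {rate:.0%} cancelled")
```

Slicing cancellations this way is what surfaced the new-vs-old difference for us, so it’s worth building the full matrix rather than looking at a single aggregate cancel rate.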
We looked further into the events around which these users opt-in for payments.
We found that our users could broadly be classified as
Users who paid before they were activated (a user is activated when they complete one sleep sound)
Users who paid and were activated on the same day
Users who first got activated and eventually paid
Users who paid and got activated on day zero (the day they installed)
😮And the results
45% of payments were made on the day of activation, out of which most fell on Day 0.
28% of paid users paid first and then got activated
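The breakdown above can be sketched with a simple classifier (the dates, helper name, and records are illustrative, not from our actual pipeline):

```python
from datetime import date
from typing import Optional

# Sketch: classify paid users by payment vs. activation timing.
# "Activated" = the user completed one sleep sound.
def classify(paid_on: date, activated_on: Optional[date]) -> str:
    if activated_on is None or paid_on < activated_on:
        return "paid before activation"
    if paid_on == activated_on:
        return "paid and activated same day"
    return "activated first, paid later"

print(classify(date(2021, 3, 1), date(2021, 3, 1)))
print(classify(date(2021, 3, 1), date(2021, 3, 5)))
print(classify(date(2021, 3, 6), date(2021, 3, 5)))
```

Running this over every paid user and counting the buckets gives the percentage split reported above.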
We started asking ourselves whether it makes sense to ask for payments later rather than earlier. When we looked at how our competitors were doing it, most of them ask for payments upfront at the time of onboarding. We wondered if users are okay with making payments on Day 0 because they’re used to it.
Should we experiment with other price points? $4, $5, $6?
From our experiment list, let’s aim to launch a payment experiment centered around the artists: “Create sleep expert personas for sounds/meditations/stories and, when they are playing, show a big ‘thank XXX’ CTA on the player.”
I’ll discuss these in detail in my next article. By this time, we had about 500 active paid subscriptions distributed between price points of $.99, $1.99, and $2.99 :D
If you like it, Buy me a coffee here