
We are a $10,000 ARR company!😝

Updated: Jan 30



Welcome back to my weekly newsletter. This is going to be a series of blogs covering how we came up with the idea for Calm Sleep and scaled it to over a million users in less than a year, with less than $1,000 in total investment. 🌱


I’m Akshay Pruthi, an entrepreneur who loves to build products from the ground up. Over the past 6 years, I have built multiple products from scratch and scaled them to millions of users. This is my attempt to share our most important lessons from building one of those apps: Alora (Android) and Alora (iOS). 🤗


Note - Calm Sleep is the side hustle that I started with Ankur Warikoo, Anurag Dalia, and a team of 3.


Every week I will publish an article about a different problem we faced while building Calm Sleep. Last week, we discussed how we approached monetization.


This week, I am going to discuss the experiments we ran to increase paid subscriptions.


In the last article, I listed multiple experiments that we planned. Before jumping right into it, I would like to take the opportunity to talk about how to build a robust culture of experimentation across all functions.


Often, we find ourselves lost deep in the jungle of features we could introduce to the product, and a lot of our time is wasted proving that a given feature works. To avoid this, I’ve formulated a simple framework for building a culture of experimentation and improving your product by running multiple experiments.


🔧 Build a Hypothesis

I’ve often found myself getting excited about a new idea for a feature thinking - why wouldn’t a user want this? But over time, I’ve learned that this may not always be best for your product.

Building a strong hypothesis is a good starting point to convert an idea to a meaningful feature.


A hypothesis is a statement made with limited knowledge about a given situation, and it requires validation to know whether it is true or false. Ideally, hypotheses help your team explore and find the best solution to a given problem. An example would be: changing the copy on a payment pop-up will improve conversion rates.


Hypotheses are often hunch-based, so it’s very important to support these hunches with data.


What should your hypothesis be?

  • Actionable

  • Measurable

  • Experimental


What shouldn’t your hypothesis be?

  • Vague

  • Ambiguous

  • Optimistic

  • Biased


A good example of a hypothesis is:


We believe that a 30% increase in completed sign-ups will be achieved, because a user who is about to drop off during sign-up will change their mind and continue into the app if offered multiple sign-up options on the sign-up screen.


🧪 Run Lightweight Experiments

At this stage, you want to convert your hypothesis into meaningful actions, or experiments, as we call them. Choose experiments thoughtfully so that they don’t call for entirely new features, but instead involve a few lines of code that help you validate whether the feature could solve the problem at hand.


At Calm Sleep, we collaborated on multiple experiments that we wanted to test. The guidelines were that an experiment

  • Should be built on a hypothesis

  • Should have a clear answer for - ‘What will you achieve after running this experiment?’

  • Shouldn’t take more than a day to code


For example, we wanted to test if our users on Calm Sleep would like to talk about how to sleep better with other users. We hypothesized that people always want to feel that they are not alone in this and would come forward to share tips with a community.


So instead of launching an entirely new feature, we decided to launch an invite-only banner to get a sense of how this concept is perceived by users.

It took us a few hours to code and release this. When we launched the banner, our invite list filled up completely within 4 hours! That was quick validation that the feature was even worth considering.
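
To make “a few lines of code” concrete, here is a minimal sketch of the kind of gating we mean: bucket users deterministically by hashing their ID, so each user consistently sees (or doesn’t see) the banner. The experiment key, rollout percentage, and user ID below are hypothetical, not our actual setup.

```python
import hashlib

EXPERIMENT = "community_invite_banner"  # hypothetical experiment key
ROLLOUT_PERCENT = 20                    # show the banner to ~20% of users

def in_experiment(user_id: str, experiment: str = EXPERIMENT,
                  percent: int = ROLLOUT_PERCENT) -> bool:
    """Deterministic bucketing: the same user always gets the same answer."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent  # stable bucket in [0, 100)

# Usage: gate the banner on the client or in the backend response
if in_experiment("user-42"):
    print("show invite-only banner")
```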


🔑 Define Success

Experiments can inform you, but they can also create false confidence through misinterpreted results. It is tempting to read any uptick in the numbers as success. But are those the numbers you should be tracking?

For instance, when the number of installs goes up, your daily active user (DAU) count automatically increases. But is that the right way to see your company's growth? Maybe what you need to observe is the DAU per new install ratio, and whether that increases over time.
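
As a rough illustration of the metric (all numbers below are made up, not Calm Sleep data), the ratio is simply each day’s DAU divided by that day’s new installs:

```python
# Hypothetical daily aggregates exported from your analytics tool
daily_active_users = {"2021-03-01": 42_000, "2021-03-02": 45_500}
new_installs       = {"2021-03-01": 9_000,  "2021-03-02": 11_000}

for day, dau in daily_active_users.items():
    # DAU per new install: growth in engagement, not just growth in installs
    print(day, round(dau / new_installs[day], 2))
```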


So it’s important to define how you measure success with each experiment you do.


Like in the community sign-up example, our success metric was: how many people signed up for early access, and how quickly? That gave us a hint about users' eagerness to join the program.


Jumping over to the experiments we tried out.


Experiment 1:


Hypothesis

We believe that a click-through rate (CTR) of up to 10% will be achieved if a user who is not motivated enough to tap the current banner sees different copy that evokes an emotional response, prompting them to tap and eventually purchase.


Experiment definition

Change the copy. Evoke emotions. Give confidence.


Existing copy

Loving Calm Sleep?

Support us by sponsoring us a meal
2,577+ supporters


Call to action: “Support”


Proposed copy

If you slept better, then say thank you to our sleep experts.

2,577+ people did so yesterday.


Call to action: Say thanks


Success definition

We will consider this a successful experiment if we achieve a click-through rate of 10% and lift successful payments by another 2%.


📉 Result

We observed that the CTR for the banner nearly doubled with the new copy, from 5.2% to 10.4%, but it did not move successful payments. After tapping the ‘Say Thanks’ button, most users dropped off without selecting a plan. So, while changing the copy got us to the target CTR, it could not improve conversion.
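
If you want to sanity-check that an uplift like this isn’t just noise, a standard two-proportion z-test is one way to do it. Here is a minimal sketch; the impression counts are hypothetical, only the 5.2% and 10.4% CTRs come from our experiment.

```python
from math import sqrt

def two_proportion_z(clicks_a: int, views_a: int,
                     clicks_b: int, views_b: int) -> float:
    """Two-proportion z-test: is the difference in CTRs likely real?"""
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    return (clicks_b / views_b - clicks_a / views_a) / se

# Hypothetical impression counts; only the two CTRs are from our data
z = two_proportion_z(clicks_a=520, views_a=10_000,
                     clicks_b=1_040, views_b=10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 is roughly significant at the 5% level
```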


Experiment 2:


Hypothesis

We believe we don’t know whether our users ever intend to pay, so we need better insight into what’s stopping them from paying.


Experiment definition

Launch a feedback pop-up asking users what they would be comfortable with.


Copy

Right now, the Calm Sleep app is 100% free. If we had to sustain ourselves financially and needed your help, how could you help us?

  1. Will pay later

  2. Don’t intend to ever pay

  3. Need some extra features for me to pay


Result

~32% of users who responded to the survey indicated that they would not mind paying at some point in their life.
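
For what it’s worth, the tally itself is trivial once you log one answer per responding user. This is a toy sketch with made-up responses, not our survey data:

```python
from collections import Counter

# Toy answers, one per responding user (not our real survey data)
responses = [
    "Will pay later", "Don't intend to ever pay", "Need some extra features",
    "Will pay later", "Need some extra features", "Will pay later",
]

for answer, n in Counter(responses).most_common():
    print(f"{answer}: {n / len(responses):.0%}")  # share of respondents per option
```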


We considered this our ray of hope and decided to run more experiments to drive value to our users, going a bit deeper into this 🤪


Below is a matrix of canceled subscription behavior patterns across new and old users for different pricing models.


Surprisingly, we saw that the cancel rate was higher among old users as compared to new users. 😯

And the lowest payment option had the lowest cancel rate, which was quite obvious.


We looked further into the events around which these users opted in to pay.

We found that our users could broadly be classified as follows (see the sketch after this list):

  1. Users who paid before they were activated (a user is activated when they complete one sleep sound)

  2. Users who paid and were activated on the same day

  3. Users who first got activated and eventually paid

  4. Users who paid and got activated on day zero
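
Here is a rough sketch of how that bucketing can be computed from install, activation, and payment dates. The function and the simplification to calendar dates are mine for illustration, not how our pipeline actually worked:

```python
from datetime import date
from typing import Optional

def classify(installed: date, activated: Optional[date],
             paid: Optional[date]) -> str:
    """Bucket a user by when payment happened relative to activation."""
    if paid is None:
        return "never paid"
    if activated is None or paid < activated:
        return "paid before activation"
    if paid == activated == installed:
        return "paid and activated on day zero"
    if paid == activated:
        return "paid and activated on the same day"
    return "activated first, paid later"

# Example: a user who installs, activates, and pays on the same day
print(classify(date(2021, 3, 1), date(2021, 3, 1), date(2021, 3, 1)))
# -> paid and activated on day zero
```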


😮 And the results:


  1. 45% of payments were made on the day of activation, out of which most fell on Day 0.

  2. 28% of paid users paid first and then got activated


Experiment takeaways

  1. We started asking ourselves whether it makes sense to ask for payments later rather than earlier. When we looked at how our competitors were doing it, most of them asked for payment upfront, at the time of onboarding. We wondered if users are okay with making payments on Day 0 simply because they’re used to it.

  2. Should we experiment with other price points? $4, $5, $6?

  3. From our experiment list, let’s aim at launching a payment experiment centered around the artists. “Create sleep expert personas for sounds/meditations/stories and when they are playing, have a big “thank XXX” CTA on the player”

I’ll discuss these in detail in my next article. By this time, we had about 500 active paid subscriptions distributed between price points of $0.99, $1.99, and $2.99 :D



If you liked it, buy me a coffee here

I publish weekly blogs where I share everything I learn.
Subscribe below to stay updated! 👇🏽

