In our last post about lean customer development, we discussed how the Mayo Education Innovation Lab could develop a process to design and build a business for one of the Lab’s first prototypes, OnPar. To this end, we prioritized the three early risks as:
If we could identify and rudimentarily measure how people are interacting with OnPar on these three points, we could zero in on the right balance to achieve product/market fit.
Originally popularized by Dave McClure’s lightning talk at Ignite Seattle, AARRR is an acronym that separates distinct phases of your customer lifecycle. Often referred to as the Pirate Metrics (get it? AARRR?), the phases are defined as follows:

- Acquisition: how do users find and arrive at your product?
- Activation: do users have a good first experience (for example, signing up)?
- Retention: do users come back and use the product regularly?
- Referral: do users like the product enough to tell others?
- Revenue: can you monetize any of this behavior?
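In practice, these phases form a funnel, and the interesting numbers are the conversion rates between adjacent stages. A minimal sketch of that computation (the counts below are illustrative, not real OnPar data):

```python
# Illustrative AARRR funnel: count of users reaching each stage.
# These numbers are made up for the example.
funnel = {
    "acquisition": 90,  # visitors who landed on the site
    "activation": 27,   # visitors who signed up
    "retention": 12,    # users who came back later
    "referral": 4,      # users who invited someone else
    "revenue": 0,       # paying users (not yet applicable)
}

# Conversion rate from each stage to the next.
stages = list(funnel)
for prev, curr in zip(stages, stages[1:]):
    rate = funnel[curr] / funnel[prev] if funnel[prev] else 0.0
    print(f"{prev} -> {curr}: {rate:.0%}")
```

Watching where the steepest drop between stages occurs tells you which experiment to run next.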
Building on the paid-marketing experiment from our previous blog post, we also wanted to run experiments measuring a potentially significant source of new users: referrals from educators. In our customer interviews over the past weeks, we have seen educators recommend many formal and informal apps, websites, books, journals, and other materials to learners. We wanted to figure out, roughly: what percentage of people would sign up when introduced to OnPar?
To measure this, we developed a custom-link utility so we could send different links to different people and measure whether someone referred by a specific educator lands at the site, then signs up and creates an account. If they land at the site, we consider that Acquisition. If they sign up and create an account, that is Activation. Over two weeks, we invited Educators, Program Directors, and Practicing Clinicians to invite some of their learners to OnPar. In 30 days, at least 90 people landed on the invitation links, and 30% of those people completed their first case.
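We haven't published the utility itself, but a minimal sketch of how such a custom-link tool might work looks like this (all names and URLs here are hypothetical): each educator gets a unique token, and landing and sign-up events are tallied per token so we can compute per-referrer Acquisition and Activation.

```python
import secrets

class ReferralLinks:
    """Hypothetical sketch: per-educator referral links with event tallies."""

    def __init__(self, base_url):
        self.base_url = base_url
        # token -> per-educator stats
        self.links = {}

    def create(self, educator):
        # Generate a unique, hard-to-guess token for this educator.
        token = secrets.token_urlsafe(8)
        self.links[token] = {"educator": educator, "landed": 0, "signed_up": 0}
        return f"{self.base_url}/?ref={token}"

    def record_landing(self, token):
        # Visitor arrived via this link: Acquisition.
        self.links[token]["landed"] += 1

    def record_signup(self, token):
        # Visitor created an account: Activation.
        self.links[token]["signed_up"] += 1

    def activation_rate(self, token):
        stats = self.links[token]
        return stats["signed_up"] / stats["landed"] if stats["landed"] else 0.0
```

Usage would look like: create a link per educator, record events as visitors arrive, then compare activation rates across referrers to see whose learners convert best.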
We have two major suspicions about the large drop-off after landing:
Over the last month, since our first OnPar release, we have seen many visitors arriving on mobile devices (phones and tablets). We decided to make the game mobile-friendly for these users, so they would not immediately have to leave the application.
Other ideas for improving this step relate to the sign-up. More advanced experiments could involve moving the sign-up to after the visitor completes a case. Right now, the sign-up is the first screen visitors see after landing, and we suspect it is unclear what they would be signing up for and why (other than on a leap of faith). Improving this could raise the activation rate.
Because of the case-based nature of OnPar, we wanted to measure, and begin to experiment on, what it takes to get people to return to the app and use it regularly. We decided to run an experiment: issue a new case every week and email announcements to our subscribers.
This graph shows relative bursts of pageviews (the Y-axis is hidden, but it measures pageviews). What we saw was, in broad strokes, what we expected:
This is exciting because it demonstrates engagement and retention that we can build on. As we improve our analytics capability, we will be able to measure more specific information (such as cases completed).
Another possible source of new learners is learners telling each other about OnPar. To begin to measure a baseline around this, and possibly run experiments to improve it, we added a button (formally called a call-to-action) after the learner completes a case. The simple widget lets learners send an email containing OnPar's URL to their colleagues and friends. Of course, we're setting up tracking that will let us measure how many visitors come to OnPar this way. In the next blog post, we will share the results of this and the other referral experiments we're conducting.
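One common way to implement such a call-to-action is a mailto link whose shared URL carries a source parameter, so visitors arriving through it can be counted separately. A minimal sketch under that assumption (the URL and parameter names are hypothetical, not our actual tracking scheme):

```python
from urllib.parse import urlencode, quote

def build_share_link(app_url, campaign="peer-referral"):
    """Hypothetical sketch: a mailto link sharing a tagged OnPar URL."""
    # Tag the shared URL so these visitors can be attributed to the CTA.
    tracked_url = f"{app_url}?{urlencode({'utm_source': campaign})}"
    # Build the mailto link with a prefilled subject and body.
    # quote_via=quote percent-encodes spaces (%20) instead of '+'.
    params = {
        "subject": "Try OnPar",
        "body": f"I just finished a case on OnPar. Give it a try: {tracked_url}",
    }
    return "mailto:?" + urlencode(params, quote_via=quote)
```

The analytics side then just counts landings whose query string carries the source tag, giving a baseline for learner-to-learner referral.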
Since OnPar is all about exposing people to real-life cases, we wanted to explore how we could go about acquiring cases. For starters, we asked our personal and professional networks for people who could create cases from their experience. While the outpouring of help and engagement from the Mayo community was exciting and motivating, we can't count on tapping our networks if the app scales beyond what we have now.
We set out to try a couple ways to get cases from educators:
While we did see a lot of desire to help create cases for OnPar, the reality set in that the individuals we need to make cases are often very busy and have many commitments outside their normal work. We have recently started experiments to test what an appropriate level of compensation is for a physician creating a case for OnPar.
Although we think a better online case-creation interface could help later, we are de-prioritizing it and taking the work of using the current interface onto our own shoulders. The technical and design costs of building it can be deferred until later.
As part of our stated goal to reach 25% of physicians in the entire medical community, we knew we should be careful not to fixate on any one particular group or specialty too soon. To this end, we wanted to make OnPar usable for other medical specialties, especially ones that rely more on imagery, such as pathology and radiology.
We continue to build on our minimum viable product features, including support for images, and hope to release this feature in our first case soon.
Please comment below or email us directly. We value your suggestions, thoughts, ideas, and critiques. Your feedback is truly critical to our success.