Posts (17)

Jul 1, 2016 · Early Adopters - What? Who? Why?

One of the exciting things that happened during the last couple of weeks is that we identified the traits of one of our early adopters. An early adopter is the person who will buy your product first.  Every product has people who will be first in line to try it.  These are the folks who can help build your product and help your team center their efforts.

Our growth model for OnPar starts with paid acquisition and relies heavily on viral referrals.  Our early adopter target market is very social.  They talk about competition, sharing with their friends and working as a team to solve problems.  They’re very interested in working together, collaborating, sharing.  However, OnPar as it currently stands isn’t desirable enough for them to actually recommend it or share it with a friend.  They would play it, but they wouldn’t then share it.

Now that we have seen who we believe to be our early adopter, we created a persona around that individual, taking the qualities of the person who will buy first and sculpting the image of a named individual around them.  This helps our team focus on someone who feels very real.  Our early adopter has taken the shape of a person named Scott.

[Image: Scott, our early adopter persona]

It’s not that Scott himself is the early adopter, it’s the traits that he possesses that make him so.  His age group, the fact that he’s confident in the process of clinical reasoning, his willingness to use technology, his social behavior.  The fact that he’s a medical student or resident may be less important — a nurse practitioner or physician assistant may also possess these traits that align with our identified early adopter.

The important attributes that make up the person — the early adopter — are their interests, their needs, that they have the problem you say you can solve, and their behaviors.  If your product can suit that whole combination, then that person will be willing to use it.

Scott is built upon the common attributes we found in customers who said they would pay.  He could be any age, any sex, any level of learner — but we found attributes that they shared.  His picture, name and other specifics help the team identify him.  We are able to create someone for whom we can build features and advance our product.  Personas keep everyone on the same page: a high-level tool that helps the team focus and concentrate.

This early adopter identification led us to center on the things they said are valuable enough for them to share OnPar with their friends.  We started to work on OnPar’s feedback.  This will inform the following step, which will most likely center on common social features like competing or comparing your score.

Early adopters are valuable for a few reasons.

  • First, and most importantly, the early adopter is the one who will validate that your product has value and will probably offer payment and their time.
  • They’re also very flexible.  They will learn with you, and they will let the product be imperfect. They’re very forgiving.  If you go to market and have 10,000 users but not your early adopter, those users may never come back. An early adopter will stay with you through the entire process, watch the product grow, and provide feedback. They’ll give you all kinds of insights into what they want.
  • Lastly, identifying your early adopter lets you concentrate instead of trying to design for everybody.  You can design for a very small group, learn from that, and iterate from there.  By contrast, a looming sense of competing needs from multiple groups can be overwhelming and distract you from moving forward.  Homing in on what one group needs — that group where people said they would pay — should be prioritized.  That’s the person who really needs your product.

Feature builds will focus around Scott.  We believe Scott is our early adopter and are conducting customer development with others like Scott to push forward with this. We’re showing them new features to see if they would start to share with their friends.  If we’re wrong, then we may find some of our other interested groups might be more fitting.  

Consider developing personas for your next development project – whether it be developing an event or a tool.  Mayo Clinic has personas built on employee role types, and the Center for Innovation has some which reflect a diverse patient population.  These have been very helpful for many projects, or you can build your own.  Remember that these personas don’t reflect all of your user types; instead, they help you and your team focus your attention on those early adopters.

 

Jul 1, 2016 · Mobile Optimization

OnPar Mobile!

In our previous post we talked about some of the ways we measured Acquisition and Activation of our users. Perhaps the most obvious drop-off we noticed was the mobile gate in the app: users visiting on a mobile device could not play the game. Up to 40% of our users seemed to be visiting on a mobile device, then dropping off before playing a case.

This week we set out primarily to adapt the app to be mobile-friendly, in order to bring up that conversion-to-Activation percentage. After our adaptation for mobile, we saw a 230% increase in overall usage, a 403% increase on mobile, and a 346% increase in tablet usage.

Check it out for yourself: OnPar  (Send us your feedback!)


 

Design Critique

We use design critiques to gather professional perspective on our designs. This week, we did a design critique with several thoughtbot designers and the Mayo team for our new mobile design. The process works as follows:

  1. We print out the relevant screens and tape them up on the wall
  2. We invite other designers in the company to come by for a 30-minute session
  3. We present what type of information we are seeking (in this case it was visual design, content clarity, and flow)
  4. The designers then use dots with their initials on them in a silent session (~5 minutes) to mark things they want to comment on
  5. Then we go dot-by-dot and ask for the comments!


One of the keys to this method is that the presenter is not allowed to try to explain or defend anything commented on. This creates an open atmosphere where people feel free to provide critical feedback. Also, it allows for many topics to be covered in a short amount of time by many people. If the presenter had to explain everything, the group would likely fixate on one or two issues and only have time to discuss those. In this case, we received over two dozen individual pieces of feedback on everything from the color, to the layout, to the language, etc.

 

Cases for Pathology

We wanted to test the OnPar concept outside of the Primary Care specialty, with a specialty that uses more images in its daily workflow. So we worked with a physician to craft a Pathology specialty case. In order to do this, we needed the application to support images in the “Patient Info” and card answer sections of the app. Here is one of the stains shown in an answer card:

[Image: a pathology stain on an answer card]

In the coming weeks, we will be encouraging Pathology educators and learners to sign up and give it a shot! Our hypothesis is that these learners will convert at a rate at or above our baseline measurement from the previous weeks.

Social Group Interview at Bellevue Hospital, NYC

We wanted to do some interviews with educators and learners outside of Mayo to keep our minds grounded in the larger marketplace. Through personal connections, we ended up presenting at an NYU educators’ monthly innovation meeting. It was not exactly the context of the 1-on-1 customer interviews we had done in the past: we had 1 hour, and 16 participants. We were in for a lively session!

We did not want it to devolve into a presentation or pitch. We also didn’t want some voices to dominate or remain silent in the face of so many people. So, we broke the group into 4 smaller groups, and each of us handled one. While 1-on-4 is still difficult to manage while taking notes, we think this worked out great. We were able to watch these groups interact with OnPar, and with each other, simultaneously. It gave us a lot of insight into how the app might fit socially among many users.

Because our notes covered all manner of topic, we decided to conduct an Affinity Synthesis to organize our notes:

[Photo: affinity synthesis of our interview notes]

Usability Testing with Learners

In addition to the group-setting, we also sat down with five individuals either in medical school at NYU, currently in residency, or post-residency. With our recent mobile updates (that also adjusted the interface for desktop users) we wanted to watch people use OnPar to test the efficacy of the interface. We discovered many user interface breakdowns, small and large. For example, the game board on mobile behaved weirdly when the user tapped and held. This turned out to be a technical issue with mobile Safari and Chrome, and we resolved it quickly.

At a higher level, we got a lot of confused and sometimes negative feedback regarding the wording used in cases and cards. Originally, the visual size of the small square card severely limited our character count to well below the length of a tweet. Now, the new design can hold longer and more-detailed descriptions. We are now planning to work with a medical writer to improve the cases within OnPar.

 

Jul 1, 2016 · Experiments, Metrics and Learning

Background

In our last post about lean customer development, we discussed how the Mayo Education Innovation Lab could develop a process to design and build a business for one of the Lab’s first prototypes, OnPar. To this end, we prioritized the three early risks as:

  • Could we sustainably get cases to distribute?
  • How much would people use it?
  • How would people find out about OnPar?

If we could identify and rudimentarily measure how people are interacting with OnPar on these three points, we could zero in on the right balance to achieve product/market fit.

Metrics the Lean Startup Way

Originally popularized by Dave McClure’s lightning talk at Ignite Seattle, AARRR is an acronym that separates distinct phases of your customer lifecycle.  Often referred to as the Pirate Metrics (get it?  AARRR?), the phases are defined as follows (a rough sketch of how these phases might be instrumented follows the list):

  • Acquisition: users come to the site from various channels
  • Activation: users enjoy 1st visit: “happy” user experience
  • Retention: users come back, visit site multiple times
  • Referral: users like product enough to refer others
  • Revenue: users conduct some monetization behavior
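
To make the funnel concrete, here is a minimal, hypothetical sketch in plain Ruby (not OnPar’s actual analytics code) of recording events per stage and computing stage-to-stage conversion:

```ruby
class FunnelTracker
  STAGES = %i[acquisition activation retention referral revenue].freeze

  def initialize
    @events = []
  end

  # Record one event, e.g. track(:activation, user_id: 42).
  def track(stage, user_id:, at: Time.now)
    raise ArgumentError, "unknown stage: #{stage}" unless STAGES.include?(stage)
    @events << { stage: stage, user_id: user_id, at: at }
  end

  # Count distinct users who reached a given stage.
  def users_at(stage)
    @events.select { |e| e[:stage] == stage }.map { |e| e[:user_id] }.uniq.size
  end

  # Conversion between two stages, e.g. Acquisition -> Activation.
  def conversion(from, to)
    return 0.0 if users_at(from).zero?
    users_at(to).to_f / users_at(from)
  end
end

tracker = FunnelTracker.new
tracker.track(:acquisition, user_id: 1)
tracker.track(:activation,  user_id: 1)
tracker.track(:acquisition, user_id: 2)
puts tracker.conversion(:acquisition, :activation) # => 0.5
```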

Measuring Acquisition & Activation

Building on the paid-marketing experiment from our previous blog post, we also wanted to run experiments to measure a possibly significant source of new users: referrals from educators. In our customer interviews over the weeks, we saw a lot of recommendations being made for various formal and informal apps, websites, books, journals, and other materials from educators to learners. We wanted to figure out roughly: what percentage of people would sign up when presented with OnPar?


To measure this, we developed a custom-link utility so we could send different links to different people and measure whether someone referred by a specific educator lands at the site, signs up, and creates an account.  If they land at the site, we consider that Acquisition.  If they sign up and create an account, that is Activation. Over two weeks, we invited Educators, Program Directors, and Practicing Clinicians to invite some of their learners to OnPar. In 30 days, we got at least 90 people to land on the invitation links, and 30% of those people completed their first case.
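
For illustration, here is a hedged sketch of how such a custom-link utility might look in a Rails app; the ReferralLink model, its columns, and the controller are our own naming, not necessarily what the team built:

```ruby
# Hypothetical sketch: each educator gets a ReferralLink row with a unique token.
class ReferralLink < ApplicationRecord
  # assumed columns: educator_name, token, landings_count, signups_count
  before_create { self.token ||= SecureRandom.urlsafe_base64(8) }

  def activation_rate
    return 0.0 if landings_count.zero?
    signups_count.to_f / landings_count
  end
end

class InvitationsController < ApplicationController
  # GET /invite/:token -- the custom link sent to a specific educator's learners
  def show
    link = ReferralLink.find_by!(token: params[:token])
    link.increment!(:landings_count)        # landing on the site = Acquisition
    session[:referral_token] = link.token   # remembered until signup
    redirect_to new_user_path               # the app's signup page
  end
end

# After the account is actually created (signup = Activation):
#   ReferralLink.find_by(token: session[:referral_token])&.increment!(:signups_count)
```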

We have two major suspicions for the large drop-off after landing:

  1. The website is not mobile-friendly, so after you create your account on a phone you cannot actually play a case
  2. The signup itself is offered up-front as the first thing they see

Over the last month, and from our first OnPar release, we have seen many visitors arriving on mobile devices (phones and tablets). We decided to go ahead and make the game mobile-friendly for these users, so they would not immediately have to leave the application.

Other ideas for improving this step relate to the sign up. More advanced experiments could involve moving the sign up to after the visitor completes a case. Right now, the sign up is the first screen they see after landing, and we suspect it is unclear what the user would be signing up for and why (other than on a leap of faith). This could be improved to aid the acquisition rate.

Measuring Retention

Because of the case-based nature of OnPar, we wanted to measure and begin to experiment on what it will take for people to return to the app and use it regularly. We decided to run an experiment to issue a new case every week and email announcements to our subscribers.
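
A minimal sketch of how that weekly announcement could be wired up in a Rails app; the Case and Subscriber model names are illustrative, and a scheduler such as cron or Heroku Scheduler would invoke the class once a week:

```ruby
class CaseAnnouncementMailer < ApplicationMailer
  def new_case(subscriber, medical_case)
    @medical_case = medical_case
    mail(to: subscriber.email,
         subject: "A new OnPar case is ready: #{medical_case.title}")
  end
end

class WeeklyCaseAnnouncement
  def self.deliver
    latest_case = Case.order(published_at: :desc).first
    return unless latest_case

    Subscriber.find_each do |subscriber|
      # deliver_later pushes each email onto the background queue
      CaseAnnouncementMailer.new_case(subscriber, latest_case).deliver_later
    end
  end
end
```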

[Chart: pageviews over the month]

This graph shows relative bursts of pageviews (Y-axis is hidden, but it is “pageviews”). What we saw was a bit expected in broad strokes:

  • A burst of activity when email went out
  • A traffic falloff down to near zero after a couple days

This is exciting because it demonstrates the engagement and retention that we can build on. As we improve our analytics capability, we will be able to measure more specific information (such as, cases completed).

Measuring Referral


Another possible source of new learners is the opportunity for learners to tell each other about OnPar. In order to begin to measure a baseline around this, and to run experiments to improve it, we put in a button (formally called a call-to-action) after the learner completes a case. The simple widget allows learners to send an email containing OnPar‘s URL to their colleagues and friends. Of course, we’re also setting up tracking, which will enable us to measure how many visitors come to OnPar in this way. In the next blog post, we will share the results of this and other referral experiments we’re conducting.
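
As a rough sketch of what that call-to-action and its tracking could look like in a Rails app (the ref parameter, helper, and ReferralVisit model are illustrative names, not necessarily OnPar’s):

```ruby
module ShareHelper
  # Renders the call-to-action shown after a learner completes a case.
  # The recipient is left blank so the learner fills it in; the shared URL
  # carries a ref parameter so visits through it can be counted separately.
  def share_onpar_mail_link(sender)
    url = root_url(ref: "learner-#{sender.id}")
    mail_to("", "Share OnPar with a colleague",
            subject: "Try OnPar",
            body: "I just finished a case on OnPar. Give it a try: #{url}")
  end
end

class ApplicationController < ActionController::Base
  before_action :record_referral_visit

  private

  def record_referral_visit
    return if params[:ref].blank?
    ReferralVisit.create!(ref: params[:ref], visited_at: Time.current)
  end
end
```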

Case Creation

Since OnPar is all about exposing people to real-life cases, we wanted to explore how we could go about acquiring cases. For starters, we asked our personal and professional networks for people who could create cases from their experience. While the outpouring of help and engagement from the Mayo community was exciting and motivating, we can’t count on being able to tap our networks if the app scales beyond what we have now.

We set out to try a couple ways to get cases from educators:

  • Offer credit / reputation to the educator for making a case for OnPar
  • Offer financial compensation for the time to create a case
  • Improve the case-creation interface to make it as easy as possible

While we did see lots of desire to help create cases for OnPar, the reality set in that the individuals we need to make cases are often very busy and have many commitments outside their normal work. We have recently started experiments to test what an appropriate level of compensation is for a physician creating a case for OnPar.

Although we think that a better online case-creation interface could help the issue later, we are de-prioritizing it and taking the work of using the interface onto our own shoulders. The technical and design costs of building it can be deferred until later.

Cases for Different Specialties

As part of our stated goal to reach 25% of physicians in the entire medical community, we knew we should be careful not to fixate on any one particular group or specialty too soon. To this end, we wanted to make OnPar usable for other medical specialties, especially ones that use more imagery, such as pathology and radiology.

We continue to build on our minimum viable product features – including support for images – and hope to release this feature in our first such case soon.

Please comment below or directly to us through email.  We value your suggestions, thoughts, ideas, critiques.  Your feedback is truly critical to our success.

Jul 1, 2016 · Education Innovation - We Begin Again

As we re-ignite our efforts to test how the Mayo Education Innovation Lab could build new products, we recently kicked off a second phase of our process. The first phase of the Innovation Lab focused on a Design Thinking process to create several prototypes, one of which showed promise. During this second phase, we started to design and execute a process to validate and build a business around this or a related product. We engaged a product design and development studio called thoughtbot to partner with us during this phase. Abhi Bikkani and Jeannie Poterucha Carter are continuing on the project from the Mayo side, as we further develop our innovation toolbox.

We’ve had a lot of great feedback and help from many people throughout Mayo in various disciplines — for this we’re very grateful!

OnPar Prototype Success

Building on the positive signs witnessed in the “Cases” prototype, we had developed and launched OnPar (http://getonpar.herokuapp.com/), a game prototype for healthcare professionals to engage and learn with real-life cases. We immediately witnessed a couple promising signs:

  • The gameplay dynamic elicited excitement and seemed to be a genuinely fun way to learn with real-life cases
  • Visitors shared the link with their colleagues, virally increasing the traffic beyond our initial messaging campaign

We believe we addressed a compelling topic in an interesting way, and it resonated with our community. In order to capitalize on these indicators of traction, we wanted to take it to the next level and view the idea through the lens of a product and a business.

Goals

Building on our learnings from the previous ideation, prototyping, and testing cycle done with Neo, we began a new phase to determine the viability of a potential product similar to our prototype. We have several goals for the phase that aim to deliver benefit to Mayo, the health practitioner community at large, and our innovation lab process arsenal:

  • Use Customer Development and Lean principles to discover a product-market fit for OnPar
  • Further develop Mayo’s toolbox to include Lean validation methods that complement our user-centered discovery process

We kicked-off again in mid-April at our Gonda building location to set the course for the project.

Introducing the Customer Development Process

In the first phase, we used a Design Thinking process to uncover unknown pain points, desires, contexts, and life goals of our research participants. That led us to some amazing ideas and product visions, resulting in our OnPar game prototype.

Armed with ideas grounded in reliable data, we transitioned into building a business with what is known as Customer Development. Although the two approaches seem similar, they differ in subtle ways.

Steve Blank, author of The Startup Owner’s Manual and a leading proponent of Customer Development, sums up the differences:

  • Moving with speed, speed and did I say speed?
  • Starting with a series of core hypotheses – what the product is, what problem the product solves, and who will use/pay for it
  • Finding “product/market fit” where the first variable is the customer, not the product
  • Pursuing potential customers outside the building to test your hypotheses
  • Trading off certainty for speed and tempo using “Good enough decision making”
  • Rapidly building minimum viable products for learning
  • Assuming your hypotheses will be wrong so be ready for rapid iterations and pivots

– Steve Blank (https://steveblank.com/2014/07/30/driving-corporate-innovation-design-thinking-customer-development/)

Working together over the coming weeks, we will assert hypotheses and get them validated or invalidated by putting experiments in front of our potential customers.

Business Model Canvas

To get started, we created several iterations of a Business Model Canvas in individual and group settings. A Canvas is a template to organize our vision for the business and how it offers value for the customer’s problems. It conspicuously focuses on the customer’s problems and our value offerings, instead of a concrete product. This enabled us to see how our value propositions connect to the problems people have in real life.

These are just a few images of the individual canvases we did while brainstorming.


We worked together and consolidated down to two representative Canvases:

1.  Knowledge Gaps


A model addressing the needs “Educators” and “Learners” have regarding knowledge gaps and exposure to a breadth of cases.

2.  Maintenance of Certification Credits


A model addressing the need for practicing clinicians to earn MOC and Continuing education credits, and the potential to offer a unique way to obtain these credits.

(Find the free digital tool used to create these canvases at http://leanstack.com).

Qualitative In-Person Validation

The simplest way to validate a hypothesis is to speak with (potential) customers, put forward your ideas, and measure their reaction. This typically involves sitting down with a customer for 30 minutes to an hour and presenting them with your hypotheses. We posed our problem statements and value propositions to the participants. All participants were educators in some capacity (Program Directors, Course Directors, Orientation Directors, Faculty, and practicing clinicians).

A positive reaction from a participant during a conversation is not necessarily predictive of actual behavior, but a negative reaction could be an indicator of friction or conflict. By posing some of our ideas to the potential customers themselves, we were able to catch any large pitfalls we might have otherwise missed in a later experiment.

Once we are comfortable with our hypotheses’ gut-checks, we move on to more robust experiments, which take more effort to construct.

Qualitative Signup Experiment for Learners


To follow up on the exciting traction from the OnPar prototype release, we felt we needed further validation that learners would sign up for OnPar in the wild. So, we decided to create a Landing Page experiment by taking the following steps:

  1. Create a single-page publicly-accessible landing page presenting 3 values provided by OnPar
  2. Ask for an Email signup in exchange for access to OnPar.
  3. Create a Facebook ad targeted to a particular segment (in this case, we chose U.S. medical Residents at large) and drive a few hundred of them to the landing page.
  4. Measure the proportion of visitors who provide an email signup. Our hypothesis: if people are genuinely interested, they will feel comfortable enough to provide us with their email.
  5. We currently consider 25% a success, but without a baseline, we may adjust this definition of success (e-commerce industry is about 1.5%, but not very analogous to our experiment).

We created an Experiment card in our Trello board for tracking its progress.

We chose the narrower potential Learner customer segment to make ad targeting more straightforward, but we are by no means focused on just this group.


After building the landing page with email signup, we used Facebook’s ad targeting utilities to identify a pool of approximately 15,000 Residents around the US.

Below is a portion of the landing page, with the call-to-action to provide an email. We are kicking off the campaign this weekend, and look forward to sharing the results in an upcoming blog post!

[Screenshot: the landing page call-to-action]

Assumptions Board

We also monitor our assumptions, so we can try to test them. Assumptions are often hard to spot, and it takes every team member to recognize when we appear to be taking an unsaid one for granted. So, when we discover one, we put it up on the board and subsequently attempt to form an experiment to validate or invalidate it.

We would add to this assumptions board on the fly whenever we identified a new assumption in the course of other activities.

[Photos: the assumptions board]

Please comment and share your ideas, your leads and your enthusiasm!  We look forward to hearing from you!

 

Jul 1, 2016 · Innovation Lab Open House

Last Tuesday, Nov 17th, the innovation team held an open house in Rochester. Over 3 hours, we estimate roughly 200 people came through.  The energy was high. The discussions were great.  Below is a virtual tour of the content of the event for those who could not attend.

[Photos: the open house in progress]

INNOVATION LAB OPEN HOUSE TOUR

We have captured the wall displays from the open house, with some explanation for each.  Given the layout of the room, the flow actually went counter-clockwise, so when a photograph has multiple components, it is likely working right-to-left.

[Photo: wall display, Part 1]

Part 1: The purpose of the current 16 week project is to help Mayo Clinic decide whether it wants to invest in an ongoing innovation studio for the Education Shield.  Do these methods work for Mayo Clinic culture? Do the kinds of people Mayo Clinic might hire fit with the organizational DNA? Our general ethos is always to test an idea with a smaller amount of money before investing a large amount of money, and we are doing it with the innovation studio itself, not just the products it works on.

Another key part of our ethos is action over discussion.  We need to test out the innovation studio by doing the work, because that is the only way to get real data for a decision.

We also reminded people of the high-level goals for the innovation studio, set at the start of the project and which remain a useful north star for our decision making. The big three: we wanted to have a direct connection to patient outcomes through our educational efforts; we wanted whatever we created to impact 25% or more of healthcare professionals in the USA; we wanted whatever we created to be breakeven or better by year 3.

[Photo: wall display, Parts 2 and 3]

Parts 2 and 3: The team consists of two major parts: 1. the board, which provides accountability, strategic alignment, and resources; 2. the team, which chooses the ideas and does the actual execution.  The team is cross-functional, composed of design, engineering and strategy, and led by an entrepreneurial lead.  There is actually a third major component not shown above: the expert advisors we surround ourselves with.  These people are an essential source of inspiration, reality-checking, and domain expertise.

On the left, you will see a rough process flow which describes the arc of the work over the first 8-9 weeks: we learned about Mayo Clinic and healthcare professionals, we ideated and chose an idea, we broke out our key assumptions and risks and tested the key ones, we built a “minimum viable product” and then synthesized our results.  The key point we made to people when showing this diagram is that, while the illustration looks linear, it is really a whole bunch of really tight loops.  At the early stage, we try to wrap everything within “learn, then build, then measure, then repeat” cycles.

[Photo: wall display, Part 4]

Part 4: Here are a few key ways we work and think. Action over meetings. Outcomes (results) over output (deliverables, features). Data-informed, not data-driven.  Scrappy doesn’t mean crappy. The items on that wall mean a lot to the team.

[Photo: wall display, Part 5]

Part 5: We kicked off the project in week 1 with an inception and a huge amount of customer development (1-on-1 qualitative research).  We had an early hypothesis that NP/PAs would be an important early customer of ours, and spoke to about 30 of them. However, we also have tried not to limit ourselves too tightly, talking to nurses, doctors, residents/fellows and more.  On the left of the photograph, you can see some of the “dump and sort” exercises we did to make sense of patterns we saw.

[Photo: wall display, Part 6]

Part 6: Another essential step during week 1 (and beyond) was to talk to as many experts as we could.  We visited CFI and the Sim Center. We dug into Dr. David Cook’s research into learning methods and explored the innovative work of Dr. David Farley and Dr. Farrell Lloyd. The list of people to thank was quite long, and we have really valued the generosity of many people at Mayo Clinic.

[Photo: wall display, Part 7]

Part 7: In weeks 2 and 3, we tried to get lots of ideas out of people’s heads. This was done with a mix of structured and unstructured time.  One of the structured exercises we like is called a “design studio” (or a “charrette”), in which the team picks a topic and then individuals sketch as many ideas as they can within 5 minutes. We then converge around and refine the best of those ideas.  Examples of topic areas were: “How could we take Ask Mayo Expert to the rest of the world?” and “How can we get around the scaling limitations of a physical Sim center?”

However, we also believe in unstructured time for creativity. Each member of the team was encouraged to explore areas of interest and bring back their research and ideas to the rest of the team.

[Photo: wall display]

The last part of ideation was to filter our ideas. Most ideas dropped to the cutting room floor, as you would expect, but 6 survived.  We had 9 key filters for prioritizing ideas, which ranged from our ability to test an idea in 4 weeks (important for the purposes of this 16-week trial) to the directness of its connection to our north star principles.

Out of those 6, we actually ended up in a tug of war between two ideas, and so to tie-break, we challenged the team to pick either idea and “stretch” it, to take it to the next level.  Out of this came the idea we ended up choosing: Cases.

Cases was (is) a mobile video case learning and discussion platform.  It allowed people to create an invite-only peer learning group, and then share short video cases and short video responses to those cases, all on their smartphones.

[Photo: wall display]

We sketched out how people learn from cases today, and what a new approach might be utilizing smartphones.

The next essential step was to break out our assumptions around the idea: not just what the product was, but who it was for, what their goals were, how we acquired customers, how we made money, etc.

Then we asked a critical question: what assumptions do we have, that if proven wrong, would cause failure?  Out of these assumptions we can spot our big risks: things that feel highly impactful and also highly uncertain.  From here, we started sketching experiments.

At this point, we are at the end of week 4.

[Photo: wall display, Part 8]

Part 8: The team ran a number of experiments around Cases, some big, some very small. For example, we didn’t know whether participants would be able to easily create short video cases. For this, we ambushed a few (targeted) people in the halls of Mayo Clinic and asked them to try to record an interesting 1 to 2 minute case, right there on the spot. And people were able to do it!

During this time, we also did a lot of market and competitive research.  We learned that there was a lot of case-sharing activity on the Internet already, but mostly in the form of text discussion boards, or “virtual patient” engines.

No one seemed to be exploring video or private peer groups. Was this because it was a bad idea, or because it hadn’t been considered?

[Photo: wall display, Part 9]

Part 9: One of the concepts we wanted to get across is what we refer to as the “truth curve,” highlighted in Giff Constable’s book Talking to Humans. The essential point is that you only get indisputable proof about an idea once you have a live product in the market — either people are using it, or buying it, or they are not.  However, you should not wait until that point. You can gather insights and data far earlier, but it requires using your judgement to interpret results.

[Photo: wall display, Part 10]

Part 10: We believed that we could very quickly create a working prototype of Cases, and truly test it out in people’s hands.  Eric Ries coined a phrase “minimum viable product,” and our interpretation of what MVP means is: 1. the smallest thing you can make, 2. that you hope is usable, desirable and valuable, 3. and which feels like a product to the user (even if it isn’t real behind the scenes).

[Photo: wall display, Part 11]

Part 11: One of the things our designer did when designing the Cases MVP was to map out the “user journey” in terms of people’s goals and the possible actions that mapped to those goals.  However, the important thing to note is that we did not attempt to implement all of these features or view this as a list of “requirements.”  Rather, we cherry-picked the most essential elements that felt minimally required.

[Photo: wall display, Part 12]

Part 12: In early experiments, we often fake a product and run the “back end” manually. These are called “wizard of oz” experiments for obvious reasons!  However, for Cases, we didn’t need to fake the back end.  Using a combination of custom software code, open source frameworks, and cloud services for infrastructure, we were able to make a fully functional version of the product in under a week.

At this point we were around the end of week 6. During that week, we had also recruited our initial testing groups for the product.

[Photo: wall display, Part 13]

Part 13: When we run an MVP experiment, we have to be willing to watch our experiment fail, but that does not mean we want it to fail, or that we are willing to let it fail for silly reasons.  We want to test VALUE, not usability!

When we first put Cases in people’s hands, we hit some immediate usability issues (as one usually does).  Our first onboarding flow was confusing, so we fixed that in about an hour and pushed a new version live. We also had a problem where people were opening up their email invitations on their desktops, but they could not really run the application on their desktop.  So we quickly added a feature that let them SMS a link to the application from their desktop to their phone. That seemed to fix the problem.

[Photo: wall display, Part 14]

Part 14: We went into our MVP experiment with some quantitative pass/fail targets. We instrumented the product to ensure that we could track how it was being used, and if we were hitting our targets.

In the photograph above, you can see the results we were seeing from each team running with the product.  The bulk of the data came from weeks 7 and 8.

We had also taken the MVP out to 9 other healthcare institutions outside of Mayo Clinic. While we generated interest, we did not get any true takers, which was a bad early sign in terms of urgency of problem and our initial value proposition.

[Photo: wall display, Part 15]

Part 15: In week 9, we synthesized our results.  Interestingly, we kept on having new groups hear about Cases and ask to try it out.  We are currently watching how they are using the platform.

However, our overall conclusion from the qualitative and quantitative data is that video responses presented too high a barrier to entry for users. Video did not fit into their daily habits and was too public for many. However, the appetite to capture the value of case discussions appeared to be high amongst groups and is worth exploring further.

[Photo: wall display, Part 16]

Part 16: For the purposes of this 16-week trial project, we are not going to iterate on Cases, even though there are interesting directions to take it, but instead are going to explore a new idea.  We asked attendees of the open house to write down *their* ideas for where we might go next. Above is what that board looked like at the end of the day.  Lots of food for thought for our team!

[Photo: wall display, Part 17]

Part 17: We also asked attendees to vote on a whiteboard as they exited the open house. We asked two questions: 1. do you like how we are approaching innovation? 2. Do you think Mayo Clinic should invest in education innovation?

The signal was clear that people thought Mayo Clinic should invest in education.  It was largely positive, but not universally so, towards our approach as well.  Frankly, we really appreciate that honesty.  We know we are not perfect, but our entire approach is to do our best, learn as we go, continually improve, and be transparent as we do so.

That concluded our open house.  We hope you have enjoyed this post-event walkthrough, and as always, send comments our way.

The last part of last week (our week 10) was focused on choosing our next problem space. You’ll hear more about that next week!

Jul 1, 2016 · Week 13 - New Prototype - On Par

It’s the New Year and the prototyping phase of our trial run is coming to a close in 3 weeks! We’re determined to make the most of them!

Those of you who have been following along know that we hit the reset button right before the holidays. We had put 2 weeks of effort into exploring 360 degree video as a new learning medium. The user experience was not strong enough, and the fit was not great with the team. We decided to kill the effort and work on something new.

As Tim Gunn of Project Runway likes to say, it was a “make it work” moment.


We flew the extended team to Rochester for several days. We started out by reviewing and synthesizing everything we had learned so far in this project. In a mind-mapping exercise, we focused on the difference between amateur and expert learners. We knew that clinical reasoning was hard to teach. We believed that, in order to build on AME, we needed to get to that “gisty” or “system 2” thinking that emphasizes pattern recognition.


As a team, we did a design studio session where our constraint was to teach gist thinking. We then went on to refine our ideas and looked at games like Lumosity and Elevate. We were interested in how doctors mentally sort important versus unimportant data, and did some research on mental agility and cognitive processes. Dasami modeled out how people think through information in a clinical setting and started thinking about corollaries to games like solitaire. We spoke to our advisors Dr. David Cook and Dr. Farrell Lloyd about their research on system 1 versus system 2 (Dr. Cook) and verbatim versus gist (Dr. Lloyd) ways of thinking.

That was our Tuesday. By the end of the day, abstract thoughts had turned into a game we call On Par. It is a game that challenges you to perform efficient, diagnostic pattern recognition.

For an educational game to work, it needs to be both educational AND fun. That can be a hard bar to hit. We needed to see if we were headed in the right direction, so we jumped right into paper testing.

We sat down with Dr. Lloyd and Jane Linderbaum and fed a real patient case into our initial ideas for a game system. We built a first version of the game with index cards, paper and image print-outs (which you can see below).

[Photo: the paper version of the game]

We then grabbed as many doctors as we could and had them start playing. As we watched people play, we were able to see what was working and adjust the rules system accordingly.

We needed to create something with a simple rules system, effective game dynamics, the right level of intellectual challenge, and a direct connection to our goal of using pattern recognition to teach system 2 thinking.

The advantage of paper testing is that it is extraordinarily easy to iterate. By the end of the second day, we were convinced that we were onto something. Doctors, NPs and PAs were having fun and wanted to play more. The subsequent weeks of testing have only reinforced that belief.


Paper testing with Dr. David Cook

Our current task is to translate the paper version of the game into a simple digital version (with a small enough feature set that we can build it in under 2 weeks) and design a compelling initial set of cases that can seed the game.

Next week, we’ll explain a little bit more about how the game works, and soon we hope to put a working version in your hands.

Jul 1, 2016 · Week 11 & 12 - Our Attempt at Prototype #2

The last two weeks for the Mayo Clinic Education Shield’s innovation team were a roller coaster. Within the span of those two weeks we chose an idea, ramped up to test it, and then promptly killed it. It was a shock to the system for some, but the right thing to do.

Back into the Fray

Towards the end of November, we held our open house in Rochester and then needed to immediately pick a second prototype. We do not advise jumping so quickly into a new idea without more research and exploration, but the budget constraints for this trial program gave us a limited time frame. Our goal was to make the most of it.

There were a number of missions that rose to the top of the list:

  1. Bringing more Real Life Experiences to Learning
  2. Overcoming cultural bias (and the impact that bias has on medical treatment)
  3. Team-based problem solving for patients with multiple conditions (inspired by Dr. Victor Montori)

At first, the team was very interested in #3. Mayo Clinic has a global reputation for excellent team-based care. Could it better teach this to the world? The implications were interesting, but after a day of analysis and talking to experts, we decided the complexities, dependencies, and blockers were too great for the 4 weeks we had for the prototype.

Bringing more Real Life Experiences to Learning

We shifted our attention to #1, the problem of bringing more real life learning experience opportunities. We had repeatedly heard that people loved the simulation experience. Sim center learning is well planned and highly interactive.  Teams felt that those experiences, although not available regularly, were the best experiences they’d ever had.  

Was there a chance to bridge the huge experiential gap between the immersion of physical simulation centers and the detachment of traditional online learning? Could we create something new that would deeply engage people in a scalable and affordable way?


The questions seemed worthwhile both from a “user need” and a corporate cost savings perspective (and not just within Mayo Clinic). But how to solve it? We viewed augmented reality as the most promising technology, but still several years away. Virtual reality had too many drawbacks in terms of creation costs and the quality of realism in the experience.
360 degree video, on the other hand, was a recently commercialized technology that seemed relatively inexpensive. In theory, practitioners could access an immersive 360 degree experience with a $5 Google cardboard viewer and a smartphone. The content itself could be created with relatively inexpensive GoPro cameras and off-the-shelf editing software.

We thus had a problem to chase, and a potential solution. We then split the team to tackle certain tasks and questions:

  • We designed a detailed experiment that would let us A/B test 360 degree video against traditional video
  • We reached out to doctors and NP/PAs who could help us try out the technology with some custom content
  • We researched the state of the technology, the viewers, cameras, editing tools, players, etc.
  • We turned to a virtual reality production company in San Francisco for advice
  • We sketched out the business model for the idea

To be honest, the team was fairly split between the optimists and skeptics on this technology. The real question was whether 360 degree video was a gimmick, or if it really would offer a more immersive experience.

The good news is that we had a ton of interest from medical educators who wanted to work with us to test the new technology (thank you all!). But we also quickly hit some bad news:

  • The level of interactivity was extremely limited. In theory, one could do eye tracking and even eye-triggered interactions. Unfortunately, the state of the technology implied an expensive custom software project.
  • The initial 360 video tests we did of patient interactions were, let’s be honest, really banal. Without the ability to walk, zoom, or interact, the tech felt useless. Quite the opposite of what we were going for.
  • The 360 video players for the iPhone were really buggy, which might have restricted us to Android phones.
  • The cost estimates for producing a really good 360 degree experience, with multiple points of view and with interactivity, could be quite high – maybe as much as $75K per 8-10 minute video. Added to the custom software costs, the business model was looking like a non-starter.

Last Thursday morning, we held our weekly decision meeting where we recommend a “persevere, pivot, or kill” decision. In the first week, we had a number of worries and/or objections to 360 video, but we decided to continue investigation. By week two, however, we recommended a “kill” decision.

Silver Linings?

While it was frustrating to stop work on the idea and start over, that is exactly what an innovation team has to do — at least if they want to be capital efficient. We did discover two interesting things:

Dr. Farley’s Magical Surgery Session

While some of our experiments with 360 video were boring, there was one experiment that was quite the opposite: Dr. Farley doing a pre-surgery session with his students. In his session, there is a lot happening around the room. What would already be exciting in a normal video becomes more so when you have the power to swing your attention around at will to Dr. Farley, the chest x-rays, the students working around the patient, etc.

Take a look for yourself, using YouTube’s viewer controls in the top-left corner of the video to pan around.

[360 video: Dr. Farley’s pre-surgery session]

Hulu of Medical Education

We never viewed 360 video as an end, but rather as a possible stepping stone in a wave of technologies that should make remote simulation increasingly compelling. However, creation of content is only the first step. We need an effective way to distribute and monetize it. Right now, it feels like the top teaching hospitals are competing against each other. What if we created a medical education “Hulu”? Imagine having a single storefront where a healthcare team could find and buy great content on any topic from Mayo Clinic, Johns Hopkins, Mass General, etc. It’s an intriguing concept. We haven’t had time to investigate whether it has been tried already.

Did We Make a Mistake?

Two weeks of work down… the disappointment of killing an idea… a looming deadline to get a working prototype done and no idea what we were going to do instead… not a great feeling.

So, did we make a mistake?

Giff Constable, the lead on the project from our innovation partner Neo, argues no. We went after a real problem. We explored a new technology with a mix of optimism and skepticism, and went into it with our eyes wide open. We put less than two weeks of work into it. We probably could have run a couple of small experiments faster, but that would have saved us only a few days.

This is the roller coaster of innovation work. If you are trying to push the envelope (and have a business model at the same time), you have to expect that 10% to 15% of things will work. The failures hurt, but it is better to face them.

We’re trying to remember that the purpose of this trial is to help Mayo Clinic make a decision on whether to fund a real innovation studio. The expectation was not that we would come up with a product win on our first few shots at goal. Of course, that is the rational view. The reality is that everyone really wants a win now! And that means we have to dust ourselves off and keep trying.

360 video was not going to be it, so we pulled the plug. We don’t have much time left, but we do think there is enough time to try one more thing.

Stay tuned!

Jul 1, 2016 · Cases App: Technical Architecture and Decisions

Keeping context in mind is crucial when making technical decisions for a new product. If the product is intended for production use by a large number of people, decisions about the technology stack need to take scalability, performance, robustness, and security into account. But when the intention is to launch quickly and test a product’s viability, the preference shifts towards flexibility to change, speed of development, and faster deployment cycles. We kept these factors in mind when making decisions for the Cases application.

At this stage, metrics are crucially important, as your decisions are informed by usage and signals from the user. It’s crucial for the system to gather and deliver data that’s directly relevant and useful to the product decision you’ll make at the end of the experiment: Kill, Pivot, or Persevere.

We wanted to build a platform for Health Professionals where they can present patient diagnostic cases and respond to these cases in a short video format. Our hypothesis was that a video platform with short, interesting video cases would provide a good learning experience and that viewers would respond to the cases with video.

One option was to use a platform such as YouTube or Vine and create a flow on top of it. We decided against that because the lack of control meant we wouldn’t be able to gather the data we needed to make a decision.

Instead, we decided to build a custom Ruby on Rails application. Ruby on Rails is known as a web framework that’s optimized for developer productivity. Productivity and speed of development are crucial at this stage, where as a product maker you’re optimizing to get usage data back as fast as possible. Also, my familiarity with Rails as a Ruby developer was important, because it meant I could be productive developing an application in that framework.

Since it’s custom built, we have a lot of flexibility in the data we gather. For example, we were able to record each time a user logs in and how many cases each user viewed. Gathering these metrics would have been challenging when trying to test the product with existing third-party products.
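
For example, here is a hedged sketch of this kind of home-grown event tracking (model, table, and helper names are ours, not necessarily the Cases app’s actual schema):

```ruby
# Hypothetical events table: events(user_id, name, created_at)
class Event < ApplicationRecord
  belongs_to :user
  scope :named, ->(event_name) { where(name: event_name) }
end

class SessionsController < ApplicationController
  def create
    user = authenticate_user(params)               # app-specific sign-in logic
    Event.create!(user: user, name: "login") if user
    # ... establish the session and redirect as usual
  end
end

class CasesController < ApplicationController
  def show
    @case = Case.find(params[:id])
    Event.create!(user: current_user, name: "case_viewed")
  end
end

# "How many cases were viewed by each user?"
#   Event.named("case_viewed").group(:user_id).count
```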

We also stayed away from iOS and Android as development platforms for testing the Cases application. Even though we wanted to build the application primarily for mobile use, we decided against native development for the following reasons:

  • Developing an application on the Android and iOS frameworks takes more time than developing a web-based application.
  • The deployment and customer acquisition methods require more technical knowledge from the customer.
  • It takes considerable time for Apple to approve an app into the App Store.

Technical Architecture

The following pieces of the architecture served up the Cases application:

Ruby on Rails Application

The Ruby on Rails application served as the most important piece of the architecture. It handled the backend, data persistence, emails, and front-end pieces. The database engine we used was PostgreSQL, an open source database known for its advanced features and robustness. It’s also preferred by Heroku, our cloud application hosting provider. We also used the Ratchet CSS framework, which provides pre-built stylesheets to mimic the experience of mobile apps on iOS and Android.
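
A plausible Gemfile for the stack described here might look like the following sketch (not the project’s actual Gemfile; versions omitted):

```ruby
source "https://rubygems.org"

gem "rails"        # web framework
gem "pg"           # PostgreSQL adapter
gem "carrierwave"  # file upload handling
gem "fog"          # cloud storage adapter CarrierWave uses to talk to S3
gem "zencoder"     # client for the Zencoder video-encoding REST API
gem "sidekiq"      # background job processing
# Ratchet ships as plain CSS/JS and can be vendored into the asset pipeline.
```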

Zencoder

Videos are complicated to handle because of encoding and the vast number of different input and output formats. While there are open source solutions such as FFmpeg that can handle encoding, we wanted to make sure this piece was handled with as much reliability as possible. Our experiment would have failed drastically if, for some reason, encoding stopped working. We opted for Zencoder, a third-party paid service, for encoding videos and creating image thumbnails. They charge $40/month for 1000 minutes of encoding. They have a REST API and a nifty Ruby gem that handles encoding easily. We also get back information from Zencoder, such as the duration of each video.
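
Submitting a job through the gem looks roughly like this sketch (bucket names and output settings are illustrative, not the app’s real configuration):

```ruby
require "zencoder"

Zencoder.api_key = ENV.fetch("ZENCODER_API_KEY")

response = Zencoder::Job.create(
  input: "s3://cases-app-uploads/videos/original.mov",
  outputs: [{
    url: "s3://cases-app-uploads/videos/encoded.mp4",
    format: "mp4",
    thumbnails: { number: 1, base_url: "s3://cases-app-uploads/videos/thumbs/" }
  }]
)

job_id = response.body["id"]  # later, a poll or webhook reports duration, etc.
```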

Sidekiq

To make sure we provide the best user experience possible, we needed a queue-based background job processing service that could handle the dispatching and management of videos to Zencoder in a reliable way. Sidekiq fit the bill perfectly: it’s an efficient background job library for Ruby applications that’s built for concurrency and performance.
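
A minimal worker along these lines might look like the following sketch (the Video model and its columns are illustrative):

```ruby
# Hands a freshly uploaded video off to Zencoder in the background.
class VideoEncodingWorker
  include Sidekiq::Worker
  sidekiq_options retry: 5  # retry transient encoding-service failures

  def perform(video_id)
    video = Video.find(video_id)
    response = Zencoder::Job.create(input: video.original_url,
                                    outputs: [{ url: video.encoded_url }])
    video.update!(zencoder_job_id: response.body["id"])
  end
end

# Enqueued right after the upload finishes, e.g. in the controller:
#   VideoEncodingWorker.perform_async(video.id)
```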

Amazon S3

Amazon S3 is a cloud storage solution from Amazon Web Services. Serving lots of large files reliably is what Amazon S3 is known for. We used it to store the original uploads, the encoded videos, and the image thumbnails. These files were later served through CarrierWave and Fog in the application.
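
A sketch of that wiring, with CarrierWave configured to store files on S3 through Fog (bucket, uploader, and environment variable names are ours, not necessarily the app’s):

```ruby
CarrierWave.configure do |config|
  config.fog_credentials = {
    provider:              "AWS",
    aws_access_key_id:     ENV["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"],
    region:                "us-east-1"
  }
  config.fog_directory = "cases-app-media"  # the S3 bucket
end

class VideoUploader < CarrierWave::Uploader::Base
  storage :fog

  # Keep originals, encoded files, and thumbnails under one predictable prefix.
  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{model.id}"
  end
end
```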

Github

We used GitHub to manage our source code. GitHub is a web application built around the Git version control system, and it has lots of collaboration features. For a lot of front-end fixes and copy changes, we used the pull request feature to incorporate changes directly from other team members. This freed up time to focus on coding the backend pieces.

Heroku

As mentioned, we used Heroku as our hosting provider. Heroku is the easiest way to deploy not only Rails applications, but applications built on other frameworks and technologies as well. Since it was important to stay focused on testing the viability of the product, deferring the setup of a custom deployment server helped a lot. We were paying $14/month for each instance of the Cases App we launched, which was more than adequate for our needs. Also, deployment is just one command away, which means no manual restarts, no configuration changes, and no sacrificing user experience by showing a maintenance page.

 —

It’s fascinating how, using proven engineering frameworks, robust open source software, and inexpensive but reliable cloud services, teams can build and launch products so quickly and cheaply, and test them as real products with real people.

This is a guest post from Rizwan Reza, Principal Engineer at Neo and the MCOLi project.