July 1st, 2016

Early Adopters - What? Who? Why?

By mary maryjpo

One of the exciting things that occurred during the last couple of weeks is that we identified the traits of one of our early adopters. An early adopter is the person who will buy your product first.  Every product has people who will be first in line to try it.  These are the folks who can help build your product and help your team center their efforts.

Our growth model for OnPar starts with paid acquisition and relies heavily on viral referrals.  Our early adopter target market is very social.  They talk about competition, sharing with their friends and working as a team to solve problems.  They’re very interested in working together, collaborating, sharing.  However, OnPar as it currently stands isn't desirable enough for them to actually recommend it or share it with a friend.  They would play it, but they wouldn’t then share it.

Now that we have seen who we believe to be our early adopter, we created a persona around that individual, taking the qualities of the person who will buy first and sculpting the image of a named individual around them.  This helps our team focus on someone who seems very real.  Our early adopter has taken the shape of a person named Scott.

scott

It’s not that Scott himself is the early adopter, it’s the traits that he possesses that make him so.  His age group, the fact that he’s confident in the process of clinical reasoning, his willingness to use technology, his social behavior.  The fact that he’s a medical student or resident may be less important -- a nurse practitioner or physician assistant may also possess these traits that align with our identified early adopter.

The important attributes that make up the person -- the early adopter -- are their interests, their needs, that they have the problem you say you can solve, and their behaviors.  If your product can suit that whole combination, then that person will be willing to use it.

Scott is built upon the common attributes we found in customers who said they would pay.  He could be any age, any sex, any level of learner -- but we found attributes that they shared.  His picture, name and other specifics help the team identify him.  We are able to create someone for whom we could build features and advance our product.  Personas keep everyone on the same page -- a high-level tool to help the team focus and concentrate.

This early adopter identification led us to center on the things they said would be valuable enough to make them share OnPar with their friends.  We started to work on OnPar's feedback.  This will set up the following step, which will most likely revolve around common social features like competing or comparing scores.

Early adopters are valuable for a few reasons.

  • First, and most importantly, the early adopter is the one who will validate that your product has value and will probably offer payment and their time.
  • They’re also very flexible.  They will learn with you, and they will let the product be imperfect. They’re very forgiving.  If you go to market and have 10,000 users but not your early adopter, they may never come back. An early adopter will stay with you through the entire process, watch it grow, and provide feedback. They’ll give you all kinds of insights into what they want.
  • Lastly, identifying your early adopter lets you concentrate instead of trying to design for everybody.  You can design for a very small group, learn from that, and iterate from there.  Otherwise, a looming sense of competing needs from multiple groups can be overwhelming and distract you from moving forward.  Homing in on what one group needs -- that group where people said they would pay -- should be prioritized.  That’s the person who really needs your product.

Feature builds will focus around Scott.  We believe Scott is our early adopter and are conducting customer development with others like Scott to push forward with this. We’re showing them new features to see if they would start to share with their friends.  If we’re wrong, then we may find some of our other interested groups might be more fitting.  

Consider developing personas for your next development project -- whether you are developing an event or a tool.  Mayo Clinic has personas built on employee role types, and the Center for Innovation has some which reflect a diverse patient population.  These have been very helpful for many projects, or you can build your own.  Remember that these personas don't reflect all of your user types, but instead help you and your team focus your attention on those early adopters.

 

July 1st, 2016

Mobile Optimization

By mary maryjpo

OnPar Mobile!

In our previous post, we talked about some of the ways we measured Acquisition and Activation of our users. Perhaps the most obvious drop-off we noticed was the mobile gate in the app: users visiting on a mobile device could not play the game. Up to 40% of our users seemed to be visiting on a mobile device, then dropping off before playing a case.

This week we set out primarily to adapt the app to be mobile-friendly, in order to bring up that conversion-to-Activation percentage. After our adaptation for mobile, we saw a 230% increase in overall usage, a 403% increase on mobile, and a 346% increase in tablet usage.

Check it out for yourself: OnPar  (Send us your feedback!)

Screen-Shot-2016-06-02-at-1.10.15-PM

 

Design Critique

We use design critiques to gather professional perspective on our designs. This week, we did a design critique with several thoughtbot designers and the Mayo team for our new mobile design. The process works as follows:

  1. We printed out relevant screens and taped them up to the wall
  2. We invited other designers in the company to come by for a 30-minute session
  3. We presented what type of information we are seeking (in this case it was on visual design, content clarity, and flow)
  4. Then the designers use dots with their initials on them, in a silent session (~5 minutes), to mark things they want to comment on
  5. Then we go dot-by-dot and ask for the comments!

IMG_4199-exported

One of the keys to this method is that the presenter is not allowed to try to explain or defend anything commented on. This creates an open atmosphere where people feel free to provide critical feedback. Also, it allows for many topics to be covered in a short amount of time by many people. If the presenter had to explain everything, the group would likely fixate on one or two issues and only have time to discuss those. In this case, we received over two dozen individual pieces of feedback on everything from the color, to the layout, to the language, etc.

 

Cases for Pathology

We wanted to test the OnPar concept outside of the Primary Care specialty, in a field that uses more images in its daily workflow. So we worked with a physician to craft a Pathology specialty case. In order to do this, we needed the application to support images in the “Patient Info” and card answers sections of the app. Here is one of the stains shown in an answer card:

stain-shot

In the coming weeks, we will be encouraging Pathology educators and learners to sign up and give it a shot! Our experiment asserts that these learners will convert at a rate at or above our baseline measurement from the previous weeks.

Social Group Interview at Bellevue Hospital, NYC

We wanted to do some interviews with educators and learners outside of Mayo to keep our minds grounded in the larger marketplace. Through personal connections, we ended up presenting at a NYU educators innovation monthly meeting. It was not exactly the context of 1-on-1 customer interviews we had done in the past: we had 1 hour, and 16 participants. We were in for a lively session!

IMG_4211-e1464888635824
We did not want it to devolve into a presentation or pitch. We also didn’t want voices to be dominated or remain silent in the face of so many people. So, we broke the group into four, and each of us handled one group. While 1-on-4 is still difficult to manage and take notes in, we think this worked out great. We were able to witness these groups interact with OnPar, and with each other, simultaneously. It gave us a lot of insight into how the app might fit socially among many users.

Because our notes covered all manner of topic, we decided to conduct an Affinity Synthesis to organize our notes:

IMG_0263-exported

Usability Testing with Learners

In addition to the group-setting, we also sat down with five individuals either in medical school at NYU, currently in residency, or post-residency. With our recent mobile updates (that also adjusted the interface for desktop users) we wanted to watch people use OnPar to test the efficacy of the interface. We discovered many user interface breakdowns, small and large. For example, the game board on mobile behaved weirdly when the user tapped and held. This turned out to be a technical issue with mobile Safari and Chrome, and we resolved it quickly.

At a higher level, we got a lot of confusing and sometimes negative feedback regarding the wording used in cases and cards. Originally, the visual size of the small square card severely limited our character count to well-below the size of a Tweet. Now, the new design can hold longer and more-detailed descriptions. We are now planning to work with a medical writer to improve the cases within OnPar.

 

July 1st, 2016

Experiments, Metrics and Learning

By mary maryjpo

Background

In our last post about lean customer development, we discussed how the Mayo Education Innovation Lab could develop a process to design and build a business for one of the Lab’s first prototypes, OnPar. To this end, we prioritized the three early risks as:

  • Could we sustainably get cases to distribute?
  • How much would people use it?
  • How would people find out about OnPar?

If we could identify and rudimentarily measure how people are interacting with OnPar on these three points, we could help zero-in on the right balance to achieve product/market fit.

Metrics the Lean Startup Way

Originally popularized by Dave McClure’s lightning talk at Ignite Seattle, AARRR is an acronym that separates distinct phases of your customer lifecycle.  Often referred to as the Pirate Metrics (get it?  AARRR?), the phases are defined as follows (a small sketch of how events might be tagged to these stages appears after the list):

  • Acquisition: users come to the site from various channels
  • Activation: users enjoy 1st visit: “happy” user experience
  • Retention: users come back, visit site multiple times
  • Referral: users like product enough to refer others
  • Revenue: users conduct some monetization behavior
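
For a concrete sense of how these stages translate into day-to-day instrumentation, here is a minimal sketch in Ruby (hypothetical event names and model) of tagging raw analytics events with an AARRR stage, so funnel counts fall out of a simple query.

```ruby
# Hypothetical mapping from raw event names to pirate-metric stages.
AARRR_STAGES = {
  'visited_landing_page' => :acquisition,
  'completed_first_case' => :activation,
  'returned_within_week' => :retention,
  'sent_invite_email'    => :referral,
  'purchased_credits'    => :revenue
}.freeze

class Event < ActiveRecord::Base
  # assumed columns: user_id, name, occurred_at
  def stage
    AARRR_STAGES[name]
  end

  # Roll raw event counts up into per-stage funnel counts.
  def self.funnel_counts
    group(:name).count.each_with_object(Hash.new(0)) do |(name, count), funnel|
      stage = AARRR_STAGES[name]
      funnel[stage] += count if stage
    end
  end
end
```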

Measuring Acquisition & Activation

Building on the paid-marketing experiment from our previous blog post, we also wanted to run experiments to measure a possibly significant source of new users: referrals from educators. In our customer interviews over the weeks, we have seen a lot of recommendations being made by educators to learners for various formal and informal apps, websites, books, journals, and other materials. We wanted to figure out, roughly: what percentage of people would sign up when presented with OnPar?

acquisition

To measure this, we developed a custom-link utility so we could send different links to different people and measure whether someone referred by a specific educator lands at the site, signs up, and creates an account.  If they land at the site, we consider that Acquisition.  If they sign up and create an account, that is Activation. Over two weeks, we invited Educators, Program Directors, and Practicing Clinicians to invite some of their learners to OnPar. In 30 days, we got at least 90 people to land on the invitation links, and 30% of those people completed their first case.
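
As a rough illustration of the custom-link utility described above, here is a minimal sketch (hypothetical model and column names) of per-educator invitation tokens with landing and signup counters.

```ruby
class Invitation < ActiveRecord::Base
  # assumed columns: educator_name, token, landings_count, signups_count
  before_create { self.token ||= SecureRandom.urlsafe_base64(6) }

  def landing_url
    "https://getonpar.herokuapp.com/?ref=#{token}"
  end

  # Activation: of the people who landed, how many created an account?
  def activation_rate
    return 0.0 if landings_count.zero?
    signups_count.to_f / landings_count
  end
end

# In the controller handling a landing (Acquisition):
# invitation = Invitation.find_by(token: params[:ref])
# invitation.increment!(:landings_count) if invitation
```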

We have two major suspicions for the large drop-off after landing:

  1. The website is not mobile-friendly: after you create your account on a mobile device, you are not able to see a case
  2. The signup itself is offered up-front as the first thing they see

Over the last month, and from our first OnPar release, we have seen many visitors arriving on mobile devices (phones and tablets). We decided to go ahead and make the game mobile-friendly for these users, so they would not immediately have to leave the application.

Other ideas for improving this step relate to the signup. More advanced experiments could involve moving the signup to after the visitor completes a case. Right now, the signup is the first screen they see after landing, and we suspect it is unclear what the user would be signing up for and why (other than on a leap of faith). Improving this could aid the activation rate.

Measuring Retention

Because of the case-based nature of OnPar, we wanted to measure, and begin to experiment on, what it will take for people to return to the app and use it regularly. We decided to run an experiment: issue a new case every week and email announcements to our subscribers.
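
The mechanics of that experiment are simple; a minimal sketch (hypothetical class names and a placeholder sender address) might look like this.

```ruby
class CaseAnnouncementMailer < ActionMailer::Base
  default from: 'onpar@example.com' # placeholder sender address

  def new_case(user, medical_case)
    @medical_case = medical_case
    mail(to: user.email, subject: "A new OnPar case is up: #{medical_case.title}")
  end
end

# Run once a week (e.g. from a scheduled task):
# medical_case = MedicalCase.order(:published_at).last
# User.subscribed.find_each do |user|
#   CaseAnnouncementMailer.new_case(user, medical_case).deliver_later
# end
```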

pageviews-over-month

This graph shows relative bursts of pageviews (Y-axis is hidden, but it is “pageviews”). What we saw was a bit expected in broad strokes:

  • A burst of activity when email went out
  • A traffic falloff down to near zero after a couple days

This is exciting because it demonstrates the engagement and retention that we can build on. As we improve our analytics capability, we will be able to measure more specific information (such as, cases completed).

Measuring Referral


Screen-Shot-2016-05-24-at-12.01.50-PM

Another possible source of new learners is learners telling each other about OnPar. In order to begin to measure a baseline around this, and to enable possible experiments to improve it, we put in a button (formally called a call-to-action) after the learner completes a case. The simple widget will allow learners to send an email containing OnPar's URL to their colleagues and friends. Of course, we're setting up tracking, which will enable us to measure how many visitors come to OnPar this way. In the next blog post, we will share the results of this and other referral experiments we're conducting.
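
For illustration, a minimal sketch (hypothetical controller and mailer names; assumes an authenticated current_user) of that call-to-action: the shared link carries a source parameter so visits arriving this way show up separately in our tracking.

```ruby
class ReferralsController < ApplicationController
  def create
    # Tag the shared URL so we can attribute any resulting visit to a referral.
    share_url = root_url(ref: 'peer_email', sender: current_user.id)
    ReferralMailer.invite(params[:friend_email], current_user, share_url).deliver_later
    redirect_to root_path, notice: 'Invitation sent!'
  end
end
```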

Case Creation

Since OnPar is all about exposing people to real-life cases, we wanted to explore how we could go about acquiring cases. For starters, we asked our personal and professional networks for people who could create cases from their experience. While the outpouring of help and engagement from the Mayo community was exciting and motivating, we can't count on being able to tap our networks if the app scales beyond what we have now.

We set out to try a couple ways to get cases from educators:

  • Offer credit / reputation to the educator for making a case for OnPar
  • Offer financial compensation for the time to create a case
  • Improve the case-creation interface to make it as easy as possible

While we did see lots of desire to help create cases for OnPar, the reality set in that the individuals we need to make cases are often very busy and have many commitments outside their normal work. We have recently started experiments to test what an appropriate level of compensation is for a physician creating a case for OnPar.

Although we think that a better online case-creation interface could help the issue later, we are de-prioritizing it and taking the work of using the current interface onto our own shoulders. The technical and design costs of building it can be deferred until later.

Cases for Different Specialties

As part of our stated goal to reach 25% of physicians in the entire medical community, we knew we should be careful not to fixate on any one particular group or specialty too soon. To this end, we wanted to make OnPar usable for other medical specialties, especially ones that use more imagery, such as pathology and radiology.

We continue to build on our minimum viable product features -- including support for images -- and hope to release this feature with our first such case soon.

Please comment below or directly to us through email.  We value your suggestions, thoughts, ideas, critiques.  Your feedback is truly critical to our success.

July 1st, 2016

Education Innovation - We Begin Again

By mary maryjpo

As we re-ignite our efforts to test how the Mayo Education Innovation Lab could build new products, we recently kicked off a second phase of our process. The first phase of the Innovation Lab focused on a Design Thinking process to create several prototypes, one of which showed promise. During this second phase, we started to design and execute a process to validate and build a business around this or a related product. We engaged with a product design and development studio called thoughtbot to partner with us during this phase. Abhi Bikkani and Jeannie Poterucha Carter are continuing on the project from the Mayo side, as we further develop our innovation toolbox.

We’ve had a lot of great feedback and help from many people throughout Mayo in various disciplines -- for this we're very grateful!

OnPar Prototype Success

Building on the positive signs witnessed in the “Cases” prototype, we developed and launched OnPar (http://getonpar.herokuapp.com/), a game prototype for healthcare professionals to engage and learn with real-life cases. We immediately witnessed a couple of promising signs:

  • The gameplay dynamic elicited excitement and seemed to be a genuinely fun way to learn with real-life cases
  • Visitors shared the link with their colleagues, virally increasing the traffic beyond our initial messaging campaign

We believe we addressed a compelling topic in an interesting way, and it resonated with our community. In order to capitalize on these indicators of traction, we wanted to take it to the next level and view the idea through the lens of a product and a business.

Goals

Building on our learnings from the previous ideation, prototyping, and testing cycle done with Neo, we began a new phase to determine the viability of a potential product similar to our prototype. We have several goals for this phase that aim to deliver benefit to Mayo, the health practitioner community at large, and our innovation lab process arsenal:

  • Use Customer Development and Lean principles to discover a product-market fit for OnPar
  • Further develop Mayo’s toolbox to include Lean validation methods that complement our user-centered discovery process

We kicked-off again in mid-April at our Gonda building location to set the course for the project.

Introducing the Customer Development Process

In the first phase, we used a Design Thinking process to uncover unknown pain points, desires, contexts, and life goals of our research participants. That led us to some amazing ideas and product visions, resulting in our OnPar game prototype.

Armed with ideas grounded in reliable data, we transitioned into building a business with what is known as Customer Development. Although the two approaches seem similar, they differ in subtle ways.

Steve Blank, author of The Startup Owner’s Manual and a leading proponent of Customer Development, sums up the differences:

  • Moving with speed, speed and did I say speed?
  • Starting with a series of core hypotheses – what the product is, what problem the product solves, and who will use/pay for it
  • Finding “product/market fit” where the first variable is the customer, not the product
  • Pursuing potential customers outside the building to test your hypotheses
  • Trading off certainty for speed and tempo using “Good enough decision making“
  • Rapidly building minimum viable products for learning
  • Assuming your hypotheses will be wrong so be ready for rapid iterations and pivots

- Steve Blank (https://steveblank.com/2014/07/30/driving-corporate-innovation-design-thinking-customer-development/)

Working together over the coming weeks, we will assert hypotheses and get them validated or invalidated by putting experiments in front of our potential customers.

Business Model Canvas

To get started, we created several iterations of a Business Model Canvas in individual and group settings. A Canvas is a template for organizing our vision for the business and how it offers value for the customer’s problems. It conspicuously focuses on the customer’s problems and our value offerings, instead of a concrete product. This enabled us to view how our value propositions connect to the problems people have in real life.

These are just a few images of the individual canvases we did while brainstorming.

File_003 IMG_8869 IMG_8874

 

 

We worked together and consolidated down to two representative Canvases:

1.  Knowledge Gaps

MCOLi-canvas-knowledge-gaps

A model addressing the needs “Educators” and “Learners” have regarding knowledge gaps and exposure to a breadth of cases.

2.  Maintenance of Certification Credits

MCOLi-canvas-moc

A model addressing the need for practicing clinicians to earn MOC and Continuing education credits, and the potential to offer a unique way to obtain these credits.

(Find the free digital tool used to create these canvases at http://leanstack.com).

Qualitative In-Person Validation

The simplest way to validate a hypothesis is to speak with (potential) customers, put forward your ideas, and measure their reaction. This typically involves sitting down with a customer for 30 minutes to an hour and presenting them with your hypotheses. We posed our problem statements and value propositions to the participants. All participants were educators in some capacity (Program Directors, Course Directors, Orientation Directors, Faculty, and practicing clinicians).

A positive reaction from a participant during a conversation is not necessarily predictive of actual behavior, but a negative reaction could be an indicator of friction or conflict. By posing some of our ideas to the potential customers themselves, we were able to catch any large pitfalls we might have otherwise missed in a later experiment.

Once we are comfortable with our hypotheses’ gut-checks, we move onto more robust experiments which take more effort to construct.

Qualitative Signup Experiment for Learners

IMG_8882

To follow up on the exciting traction from the OnPar prototype release, we felt we needed further validation that learners would sign up for OnPar in the wild. So, we decided to create a Landing Page experiment by taking the following steps:

  1. Create a single-page publicly-accessible landing page presenting 3 values provided by OnPar
  2. Ask for an Email signup in exchange for access to OnPar.
  3. Create a Facebook ad targeted to a particular segment (in this case, we chose U.S. medical Residents at large) and drive a few hundred of them to the landing page.
  4. Measure the proportion of visitors to email signups. Hypothesis: if people are genuinely interested, they will feel comfortable enough to provide us with their email.
  5. We currently consider 25% a success, but without a baseline, we may adjust this definition of success (e-commerce industry is about 1.5%, but not very analogous to our experiment).

We created an Experiment card in our Trello board for tracking its progress:

trello-residents-sign-up-expirament

We chose the narrower potential Learner customer segment to make ad targeting more straightforward, but we are by no means focused solely on this group.

Screen_Shot_2016-05-04_at_6.01.18_PM

After building the landing page with email signup, we used Facebook’s ad targeting utilities to identify a pool of approximately 15,000 Residents around the US.

Below is a portion of the landing page, with the call-to-action to provide an email. We are kicking off the campaign this weekend, and look forward to sharing the results in an upcoming blog post!

onparapp-lander-experiment

Assumptions Board

We also monitor our assumptions, so we can try to test them. Assumptions are often hard to spot, and it takes every team member to recognize when we appear to be taking an unsaid one for granted. So, when we discover one, we put it up on the board and subsequently attempt to form an experiment to validate or invalidate it.

We would add to this assumptions board on the fly, whenever we identified one while doing other activities.

assumptions-1
assumptions-2

Please comment and share your ideas, your leads and your enthusiasm!  We look forward to hearing from you!

 

July 1st, 2016

Innovation Lab Open House

By mary maryjpo

Last Tuesday, Nov 17th, the innovation team held an open house in Rochester. Over 3 hours, we estimate roughly 200 people came through.  The energy was high. The discussions were great.  Below is a virtual tour of the content of the event for those who could not attend.

open_house1

open_house2

INNOVATION LAB OPEN HOUSE TOUR

We have captured the wall displays from the open house, with some explanation for each.  Given the layout of the room, the flow actually went counter-clockwise, so when a photograph has multiple components, it is likely working right-to-left.

open_house3

Part 1: The purpose of the current 16 week project is to help Mayo Clinic decide whether it wants to invest in an ongoing innovation studio for the Education Shield.  Do these methods work for Mayo Clinic culture? Do the kinds of people Mayo Clinic might hire fit with the organizational DNA? Our general ethos is always to test an idea with a smaller amount of money before investing a large amount of money, and we are doing it with the innovation studio itself, not just the products it works on.

Another key part of our ethos is action over discussion.  We need to test out the innovation studio by doing the work, because that is the only way to get real data for a decision.

We also reminded people of the high-level goals for the innovation studio, set at the start of the project and which remain a useful north star for our decision making. The big three: we wanted to have a direct connection to patient outcomes through our educational efforts; we wanted whatever we created to impact 25% or more of healthcare professionals in the USA; we wanted whatever we created to be breakeven or better by year 3.

open_house4

Parts 2 and 3: The team consists of two major parts: 1. the board, which provides accountability, strategic alignment, and resources; 2. the team, which chooses the ideas and does the actual execution.  The team is cross-functional, composed of design, engineering and strategy, and led by an entrepreneurial lead.  There is actually a third major component to the team not shown above: the expert advisors we surround ourselves with.  These people are an essential source of inspiration, reality-checking, and domain expertise.

On the left, you will see a rough process flow which describes the arc of the work over the first 8-9 weeks: we learned about Mayo Clinic and healthcare professionals, we ideated and chose an idea, we broke out our key assumptions and risks and tested the key ones, we built a "minimum viable product" and then synthesized our results.  The key point we made to people when showing this diagram is that, while the illustration looks linear, it is really a whole bunch of really tight loops.  At the early stage, we try to wrap everything within "learn, then build, then measure, then repeat" cycles.

open_house5

Part 4: Here are a few key ways we work and think. Action over meetings. Outcomes (results) over output (deliverables, features). Data-informed, not data-driven.  Scrappy doesn't mean crappy. The items on that wall mean a lot to the team.

open_house6

Part 5: We kicked off the project in week 1 with an inception and a huge amount of customer development (1-on-1 qualitative research).  We had an early hypothesis that NP/PAs would be an important early customer of ours, and spoke to about 30 of them. However, we also have tried not to limit ourselves too tightly, talking to nurses, doctors, residents/fellows and more.  On the left of the photograph, you can see some of the "dump and sort" exercises we did to make sense of patterns we saw.

open_house7

Part 6: Another essential step during week 1 (and beyond) was to talk to as many experts as we could.  We visited CFI and the Sim Center. We dug into Dr. David Cook's research into learning methods and explored the innovative work of Dr. David Farley and Dr. Farrell Lloyd. The list of people to thank was quite long, and we have really valued the generosity of many people at Mayo Clinic.

open_house8

Part 7: In weeks 2 and 3, we tried to get lots of ideas out of people's heads. This was done with a mix of structured and unstructured time.  One of the structured exercises we like is called a "design studio" (or a "charrette"), in which the team picks a topic and then individuals sketch as many ideas as they can within 5 minutes. We then converge around and refine the best of those ideas.  Examples of topic areas were: "How could we take Ask Mayo Expert to the rest of the world?", "How can we get around the scaling limitations of a physical Sim center?"

However, we also believe in unstructured time for creativity. Each member of the team was encouraged to explore areas of interest and bring back their research and ideas to the rest of the team.

open_house9

The last part of ideation was to filter our ideas. Most ideas dropped to the cutting room floor, as you would expect, but we had 6 that survived.  We had 9 key filters for prioritizing ideas, which range from our ability to test it in 4 weeks (important for the purposes of this 16-week trial), to the direct connection to our north star principles.

Out of those 6, we actually ended up in a tug of war between two ideas, and so to tie-break, we challenged the team to pick either idea and "stretch" it, to take it to the next level.  Out of this came the idea we ended up choosing: Cases.

Cases was (is) a mobile video case learning and discussion platform.  It allowed people to create an invite-only peer learning group, and then share short video cases and short video responses to those cases, all on their smartphones.

open_house10

We sketched out how people learn from cases today, and what a new approach might be utilizing smartphones.

The next essential step was to break out our assumptions around the idea: not just what the product was, but who it was for, what their goals were, how we acquired customers, how we made money, etc.

Then we asked a critical question: what assumptions do we have, that if proven wrong, would cause failure?  Out of these assumptions we can spot our big risks: things that feel highly impactful and also highly uncertain.  From here, we started sketching experiments.

At this point, we are at the end of week 4.

open_house11

Part 8: The team ran a number of experiments around Cases, some big, some very small. For example, we didn't know whether participants would be able to easily create short video cases. For this, we ambushed a few (targeted) people in the halls of Mayo Clinic and asked them to try to record an interesting 1 to 2 minute case, right there on the spot. And people were able to do it!

During this time, we also did a lot of market and competitive research.  We learned that there was a lot of case-sharing activity on the Internet already, but mostly in the form of text discussion boards, or "virtual patient" engines.

No one seemed to be exploring video or private peer groups. Was this because it was a bad idea, or because it hadn't been considered?

open_house12

Part 9: One of the concepts we wanted to get across is what we refer to as the "truth curve," highlighted in Giff Constable's book Talking to Humans. The essential point is that you only get indisputable proof about an idea once you have a live product in the market -- either people are using it, or buying it, or they are not.  However, you should not wait until that point. You can gather insights and data far earlier, but it requires using your judgement to interpret results.

open_house13

Part 10: We believed that we could very quickly create a working prototype of Cases, and truly test it out in people's hands.  Eric Ries popularized the phrase “minimum viable product,” and our interpretation of what MVP means is: 1. the smallest thing you can make, 2. that you hope is usable, desirable and valuable, 3. and which feels like a product to the user (even if it isn't real behind the scenes).

open_house14

Part 11: One of the things our designer did when designing the Cases MVP was to map out the "user journey" in terms of people's goals and the possible actions that mapped to those goals.  However, the important thing to note is that we did not attempt to implement all of these features or view this as a list of "requirements."  Rather, we cherry-picked the most essential elements that felt minimally required.

open_house15

Part 12: In early experiments, we often fake a product and run the "back end" manually. These are called "wizard of oz" experiments for obvious reasons!  However, for Cases, we didn't need to fake the back end.  Using a combination of custom software code, open source frameworks, and cloud services for infrastructure, we were able to make a fully functional version of the product in under a week.

At this point we were around the end of week 6. During that week, we had also recruited our initial testing groups for the product.

open_house16

Part 13: When we run an MVP experiment, we have to be willing to watch our experiment fail, but that does not mean we want it to fail, or that we are willing to let it fail for silly reasons.  We want to test VALUE, not usability!

When we first put Cases in people's hands, we hit some immediate usability issues (as one usually does).  Our first onboarding flow was confusing, so we fixed that in about an hour and pushed a new version live. We also had a problem where people were opening up their email invitations on their desktops, but they could not really run the application on their desktop.  So we quickly added a feature that let them SMS a link to the application from their desktop to their phone. That seemed to fix the problem.

open_house17

Part 14: We went into our MVP experiment with some quantitative pass/fail targets. We instrumented the product to ensure that we could track how it was being used, and if we were hitting our targets.

In the photograph above, you can see the results we were seeing from each team running with the product.  The bulk of the data came from weeks 7 and 8.

We had also taken the MVP out to 9 other healthcare institutions outside of Mayo Clinic. While we generated interest, we did not get any true takers, which was a bad early sign in terms of urgency of problem and our initial value proposition.

open_house18

Part 15: In week 9, we synthesized our results.  Interestingly, we kept on having new groups hear about Cases and ask to try it out.  We are currently watching how they are using the platform.

However, our overall conclusion from qualitative and quantitative data is that video responses provided too high a barrier to entry for users. Video did not fit into their daily habits and was too public for many. However, the appetite to capture the value of case discussions appeared to be high amongst groups and is worth exploring further.

open_house19

Part 16: For the purposes of this 16-week trial project, we are not going to iterate on Cases, even though there are interesting directions to take it, but instead are going to explore a new idea.  We asked attendees of the open house to write down *their* ideas for where we might go next. Above is what that board looked like at the end of the day.  Lots of food for thought for our team!

open_house20

Part 17: We also asked attendees to vote on a whiteboard as they exited the open house. We asked two questions: 1. do you like how we are approaching innovation? 2. Do you think Mayo Clinic should invest in education innovation?

The signal was clear that people thought Mayo Clinic should invest in education.  It was largely positive, but not universally so, towards our approach as well.  Frankly, we really appreciate that honesty.  We know we are not perfect, but our entire approach is to do our best, learn as we go, continually improve, and be transparent as we do so.

That concluded our open house.  We hope you have enjoyed this post-event walk-through, and as always, send comments our way.

The last part of last week (our week 10) was focused on choosing our next problem space. You'll hear more about that next week!

July 1st, 2016

Week 13 - New Prototype - On Par

By mary maryjpo

It’s the New Year and the prototyping phase of our trial run is coming to a close in 3 weeks! We’re determined to make the most of them!

Those of you who have been following along know that we hit the reset button right before the holidays. We had put 2 weeks of effort into exploring 360 degree video as a new learning medium. The user experience was not strong enough, and the fit was not great with the team. We decided to kill the effort and work on something new.

As Tim Gunn of Project Runway likes to say, it was a “make it work” moment.

13_1

We flew the extended team to Rochester for several days. We started out by reviewing and synthesizing everything we have learned so far in this project. In a mind-mapping exercise, we focused on the difference between amateur and expert learners. We knew that clinical reasoning was hard to teach. We believed that, in order to build on AME, we needed to get to that “gisty” or “system 2” thinking that emphasized pattern recognition.

13_2_gist

As a team, we did a design studio session where our constraint was to teach gist thinking. We then went on to refine our ideas, and looked at games like Lumosity and Elevate. We were interested in how doctors mentally sort important versus unimportant data, and did some research on mental agility and cognitive processes. Dasami modeled out how people think through information in a clinical setting and started thinking about parallels to games like solitaire. We spoke to our advisors Dr. David Cook and Dr. Farrell Lloyd about their research on system 1 versus system 2 (Dr. Cook), or verbatim vs gist (Dr. Lloyd), ways of thinking.

That was our Tuesday. By the end of the day, abstract thoughts had turned into a game we call On Par. It is a game that challenges you to perform efficient, diagnostic pattern recognition.

For an educational game to work, it needs to be both educational AND fun. That can be a hard bar to hit. We needed to see if we were headed in the right direction, so we jumped right into paper testing.

We sat down with Dr. Lloyd and Jane Linderbaum and fed a real patient case into our initial ideas for a game system. We built a first version of the game with index cards, paper, and image print-outs (which you can see below).

13_3

We then grabbed as many doctors as we could and had them start playing. As we watched people play, we were able to see what was working and adjust the rules system accordingly.

We needed to create something with a simple rules system, effective game dynamics, the right level of intellectual challenge, and a direct connection to our goal of using pattern recognition to teach system 2 thinking.

The advantage of paper testing is that it is extraordinarily easy to iterate. By the end of the second day, we were convinced that we were onto something. Doctors, NPs and PAs were having fun and wanted to play more. The subsequent weeks of testing have only reinforced that conviction.

13_4

Paper testing with Dr. David Cook

Our current task is to translate the paper version of the game into a simple digital version (with a small enough feature set that we can build it in under 2 weeks) and design a compelling initial set of cases that can seed the game.

Next week, we’ll explain a little bit more about how the game works, and soon we hope to put a working version in your hands.

July 1st, 2016

Week 11 & 12 - Our Attempt at Prototype #2

By mary maryjpo

The last two weeks for the Mayo Clinic Education Shield’s innovation team were a roller coaster. Within the span of those two weeks we chose an idea, ramped up to test it, and then promptly killed it. It was a shock to the system for some, but the right thing to do.

Back into the Fray

Towards the end of November, we held our open house in Rochester and then needed to immediately pick a second prototype. We do not advise jumping so quickly into a new idea without more research and exploration, but the budget constraints for this trial program gave us a limited time frame. Our goal was to make the most of it.

There were a number of missions that rose to the top of the list:

  1. Bringing more Real Life Experiences to Learning
  2. Overcoming cultural bias (and the impact that bias has on medical treatment)
  3. Team-based problem solving for patients with multiple conditions (inspired by Dr. Victor Montori)

At first, the team was very interested in #3. Mayo Clinic has a global reputation for excellent team-based care. Could it better teach this to the world? The implications were interesting, but after a day of analysis and talking to experts, we decided the complexities, dependencies, and blockers were too great for the 4 weeks we had for the prototype.

Bringing more Real Life Experiences to Learning

We shifted our attention to #1, the problem of bringing more real life learning experience opportunities. We had repeatedly heard that people loved the simulation experience. Sim center learning is well planned and highly interactive.  Teams felt that those experiences, although not available regularly, were the best experiences they’d ever had.  

Was there a chance to bridge the huge experiential gap between the immersion of physical simulation centers and the detachment of traditional online learning? Could we create something new that would deeply engage people in a scalable and affordable way?


12_warner

The questions seemed worthwhile both from a “user need” and a corporate cost savings perspective (and not just within Mayo Clinic). But how to solve it? We viewed augmented reality as the most promising technology, but still several years away. Virtual reality had too many drawbacks in terms of creation costs and the quality of realism in the experience.
360 degree video, on the other hand, was a recently commercialized technology that seemed relatively inexpensive. In theory, practitioners could access an immersive 360 degree experience with a $5 Google cardboard viewer and a smartphone. The content itself could be created with relatively inexpensive GoPro cameras and off-the-shelf editing software.

We thus had a problem to chase, and a potential solution. We then split the team to tackle certain tasks and questions:

  • We designed a detailed experiment that would let us A/B test 360 degree video against traditional video
  • We reached out to doctors and NP/PAs who could help us try out the technology with some custom content
  • We researched the state of the technology, the viewers, cameras, editing tools, players, etc.
  • We turned to a virtual reality production company in San Francisco for advice
  • We sketched out the business model for the idea

To be honest, the team was fairly split between the optimists and skeptics on this technology. The real question was whether 360 degree video was a gimmick, or if it really would offer a more immersive experience.

The good news is that we had a ton of interest from medical educators who wanted to work with us to test the new technology (thank you all!). But we also quickly hit some bad news:

  • The level of interactivity was extremely limited. In theory, one could do eye tracking and even eye-triggered interactions. Unfortunately, the state of the technology implied an expensive custom software project.
  • The initial 360 video tests we did of patient interactions were, let’s be honest, really banal. Without the ability to walk, zoom, or interact, the tech felt useless. Quite the opposite of what we were going for.
  • The 360 video players for the iPhone were really buggy, which might have restricted us to Android phones.
  • The cost estimates for producing a really good 360 degree experience, with multiple points of view and with interactivity, could be quite high - maybe as much as $75K per 8-10 minute video. Added to the custom software costs, the business model was looking like a non-starter.

Last Thursday morning, we held our weekly decision meeting where we recommend a “persevere, pivot, or kill” decision. In the first week, we had a number of worries and/or objections to 360 video, but we decided to continue investigation. By week two, however, we recommended a “kill” decision.

Silver Linings?

While it was frustrating to stop work on the idea and start over, that is exactly what an innovation team has to do — at least if they want to be capital efficient. We did discover two interesting things:

Dr. Farley’s Magical Surgery Session

While some of our experiments with 360 video were boring, there was one experiment that was quite the opposite: Dr. Farley doing a pre-surgery session with his students. In his session, there is a lot happening around the room. What would already be exciting in a normal video becomes more so when you have the power to swing your attention around at will to Dr. Farley, the chest x-rays, the students working around the patient, etc.

Take a look for yourself, using YouTube’s viewer controls in the top-left corner of the video to pan around.

farley_360

Hulu of Medical Education

We never viewed 360 video as an end, but rather a possible stepping stone in a wave of technologies that should make remote simulation increasingly compelling. However, creation of content is only the first step. We need an effective way to distribute and monetize it. Right now, it feels like the top teaching hospitals are competing against each other. What if we created a medical education “Hulu.” Imagine having a single storefront where a healthcare team could find and buy great content on any topic from Mayo Clinic, Johns Hopkins, Mass General, etc. It’s an intriguing concept. We haven’t had time to investigate whether it has been tried already.

Did We Make a Mistake?

Two weeks of work down… the disappointment of killing an idea… a looming deadline to get a working prototype done and no idea what we were going to do instead… not a great feeling.

So, did we make a mistake?

Giff Constable, the lead on the project from our innovation partner Neo, argues no. We went after a real problem. We explored a new technology with a mix of optimism and skepticism, and went into it with our eyes wide open. We put less than two weeks of work into it. We probably could have run a couple of small experiments faster, but that would have saved us only a few days.

This is the roller coaster of innovation work. If you are trying to push the envelope (and have a business model at the same time), you have to expect that 10% to 15% of things will work. The failures hurt, but it is better to face them.

We’re trying to remember that the purpose of this trial is to help Mayo Clinic make a decision on whether to fund a real innovation studio. The expectation was not that we would come up with a product win on our first few shots at goal. Of course, that is the rational view. The reality is that everyone really wants a win now! And that means we have to dust ourselves off and keep trying.

360 video was not going to be it, so we pulled the plug. We don’t have much time left, but we do think there is enough time to try one more thing.

Stay tuned!

July 1st, 2016

Cases App: Technical Architecture and Decisions

By mary maryjpo

Keeping context in mind is crucial when making technical decisions for a new product. If the product is intended for production use by a large number of people, decisions about the technology stack need to take scalability, performance, robustness, and security into account. But when the intention is to launch quickly and test a product’s viability, the preference shifts towards flexibility to change, speed of development, and faster deployment cycles. We kept these factors in mind when making decisions for the Cases application.

At this stage, metrics are crucially important, as your decisions are informed by usage and signals from users. It’s crucial for the system to gather and deliver data that’s directly relevant and useful to the product decision you’ll make at the end of the experiment: Kill, Pivot, or Persevere.

We wanted to build a platform where Health Professionals can present patient diagnostic cases and respond to these cases in a short video format. Our hypothesis was that a video platform with short, interesting video cases would provide a good learning experience, and that viewers would respond to the cases with video.

One option was to use an existing platform such as YouTube or Vine and create a flow on top of it. We decided against this because the lack of control meant that we wouldn’t be able to gather the data we needed to make a decision.

We instead decided to go with a custom-built Ruby on Rails application. Ruby on Rails is known as a web framework that’s optimized for developer productivity. Productivity and speed of development are crucial at this stage, where as a product maker you’re optimizing to get usage data back as fast as possible. Also, my familiarity with Rails as a Ruby developer was important, because it meant I could be productive developing an application in that framework.

Since it’s custom-built, we have a lot of flexibility in the data we gather. For example, we were able to record each time a user logs in, and how many cases were viewed by each user. Gathering these metrics would have been challenging when trying to test the product with existing third-party products.
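
As a small example of what that flexibility looks like in practice, here is a sketch (hypothetical model and column names) of logging one row per sign-in and per case view, then answering usage questions with plain queries.

```ruby
class UsageEvent < ActiveRecord::Base
  # assumed columns: user_id, action, occurred_at
  scope :sign_ins,   -> { where(action: 'sign_in') }
  scope :case_views, -> { where(action: 'case_view') }
end

# Recording events, e.g. from a controller action:
# UsageEvent.create!(user_id: current_user.id, action: 'case_view')

# The questions we wanted answered:
# UsageEvent.sign_ins.where(user_id: some_user.id).count   # how often a user logs in
# UsageEvent.case_views.group(:user_id).count              # cases viewed per user
```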

We also stayed away from iOS and Android as development platforms for testing the Cases application. Even though we wanted to build the application primarily for mobile use, we decided against native development for the following reasons:

  • Developing an application on the Android and iOS frameworks takes more time than developing a web-based application.
  • The deployment and customer acquisition methods require more technical knowledge from the customer.
  • It takes considerable time for Apple to approve an app for the App Store.

Technical Architecture

The following pieces of the architecture served up the Cases application:

Ruby on Rails Application

The Ruby on Rails application served as the most important piece of the architecture. This handled the backend, data persistence, emails, and front-end pieces. The database engine we used was PostgreSQL, an open source software known for its advanced features and robustness. It’s also preferred by Heroku, our cloud application hosting provider. We also used Ratchet CSS Framework, which provides pre-built stylesheets to mimic the experience of mobile apps on iOS and Android.
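
To make the domain concrete, here is a minimal sketch (hypothetical model names) of the kind of schema the Rails application persisted in PostgreSQL: invite-only groups, short video cases, and video responses to those cases.

```ruby
class User < ActiveRecord::Base
  has_many :memberships
  has_many :groups, through: :memberships
end

class Group < ActiveRecord::Base
  has_many :memberships
  has_many :users, through: :memberships
  has_many :video_cases
end

class Membership < ActiveRecord::Base
  belongs_to :user
  belongs_to :group
end

class VideoCase < ActiveRecord::Base
  belongs_to :group
  belongs_to :author, class_name: 'User'
  has_many :responses
end

class Response < ActiveRecord::Base
  belongs_to :video_case
  belongs_to :user
end
```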

Zencoder

Videos are complicated to handle because of encoding and the vast number of different input and output formats. While there are open source solutions such as FFmpeg that can handle encoding, we wanted to make sure this piece was handled with as much reliability as possible. Our experiment would have failed drastically if, for some reason, the encoding stopped working. We opted for Zencoder, a third-party paid service, for encoding videos and creating image thumbnails. They charge $40/month for 1000 minutes of encoding. They have a REST API and a nifty Ruby gem that handles encoding easily. We also get back information from Zencoder, such as the duration of the video and the status of each job.
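
As a rough illustration (assumed S3 paths; consult the gem's documentation for exact options), creating an encoding job with the zencoder gem and reading details back might look like this.

```ruby
require 'zencoder'

Zencoder.api_key = ENV['ZENCODER_API_KEY']

response = Zencoder::Job.create(
  input: 's3://cases-app-uploads/raw/case-123.mov',      # original upload
  outputs: [{
    url: 's3://cases-app-uploads/encoded/case-123.mp4',  # encoded output
    thumbnails: { number: 1, base_url: 's3://cases-app-uploads/thumbs/' }
  }]
)

job_id = response.body['id'] if response.success?

# Later, or via Zencoder's notifications, fetch metadata such as duration:
# details = Zencoder::Job.details(job_id)
# details.body['job']['output_media_files'].first['duration_in_ms']
```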

Sidekiq

To make sure we provide the best user experience possible, we needed a queue-based background job processing service that could handle dispatching videos to Zencoder and managing them reliably. Sidekiq fit the bill perfectly: it’s an efficient background job library for Ruby applications, built for concurrency and performance.
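
A minimal sketch (hypothetical class and attribute names) of such a worker: the web request only enqueues the job, and Sidekiq hands the video off to Zencoder in the background.

```ruby
class EncodeVideoWorker
  include Sidekiq::Worker
  sidekiq_options queue: :encoding, retry: 5

  def perform(video_case_id)
    video_case = VideoCase.find(video_case_id)
    response = Zencoder::Job.create(
      input: video_case.original_url,
      outputs: [{ url: video_case.encoded_url }]
    )
    video_case.update!(zencoder_job_id: response.body['id'])
  end
end

# Enqueued right after the upload finishes:
# EncodeVideoWorker.perform_async(video_case.id)
```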

Amazon S3

Amazon S3 is a cloud storage solution from Amazon Web Services. Serving lots of large files reliably is what Amazon S3 is known for. We used it to store the original uploads, the encoded videos, and the image thumbnails. These files were managed and served through CarrierWave and Fog in the application.
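
For illustration, a minimal sketch (assumed bucket and class names) of wiring CarrierWave and Fog to S3.

```ruby
# config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_credentials = {
    provider:              'AWS',
    aws_access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
    aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
  }
  config.fog_directory = 'cases-app-videos' # assumed S3 bucket name
end

# app/uploaders/video_uploader.rb
class VideoUploader < CarrierWave::Uploader::Base
  storage :fog

  def store_dir
    "uploads/video_cases/#{model.id}"
  end
end

# Mounted on the model:
# class VideoCase < ActiveRecord::Base
#   mount_uploader :original, VideoUploader
# end
```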

Github

We used Github to manage our source code. Github is a web application built around the Git source control system, with lots of collaboration features. For a lot of front-end fixes and copy changes, we used the pull request feature to incorporate changes directly from the other team members. This freed up time to focus on coding backend pieces.

Heroku

As mentioned, we used Heroku as our hosting provider. Heroku is the easiest way to deploy not only Rails applications, but applications built on other frameworks and technologies as well. Since it was important to stay focused on testing the viability of the product, deferring the setup of a custom deployment server helped a lot. We were paying $14/month for each instance of the Cases App we launched. That was more than adequate for our needs. Also, deployment is just one command away, which means no restarts, no configuration changes, and no sacrificing user experience by showing a maintenance page.

 —

It’s fascinating how, using proven engineering frameworks, robust open source software, and inexpensive but reliable cloud services, teams can build and launch products quickly and cheaply, and test them out as real products on real people.

This is a guest post from Rizwan Reza, Principal Engineer at Neo and the MCOLi project.

November 11th, 2015

Week 8 - Prototype 1 Synthesis

By mary maryjpo

Two weeks ago, the online learning innovation group put out their first working prototype, called “Cases.” Cases is a mobile app aimed at continuous case learning. It lets healthcare professionals form a group of peers, upload short video cases, and discuss those cases with video responses.

We built an MVP (“minimum viable product”) of Cases in a handful of days. Then we put it in the hands of 5 different Mayo Clinic groups. We also brought the concept (and a working demo) to 9 different external organizations to test for interest.

7_2-1024x543

Launching new products is often a humbling experience, and this was no different.

In terms of external interest, we’ve had plenty of “that’s cool!” but no takers yet. Which is a signal that our ratio of “perceived value proposition to assumed work required to use the app” (no, that’s not an official ratio) is out of whack.

In the leadup to launch, we had been pleased with how easily our group organizers could 1. think of who they wanted to discuss cases with, and 2. create a couple short video cases. But upon launch, we immediately hit some problems. Our metrics clearly showed that invitees were not joining the application. Those who did join were not engaging in discussion.

9_running_experiments-1024x572

The Desktop Wall

One problem was a straight-up usability issue. People were getting an email invitation to join their group. They were reading the email on their computers, not their phones. In order to get the product out quickly, we had optimized the application only for mobile and thus blocked desktop use with a message that asked people to open the email and/or link on their phones.

9_sms-1024x427

That turned out to be too much of a wall, so we had to spend a couple of days cleaning up the desktop view of the application. Video still worked best on phones, so we put in a feature that let people send a text message to their phones with a link to the case they were viewing.

The Trouble with Video

People seemed to enjoy watching a 1-2 minute video case, but we were not getting responses. This could partially be attributed to the “no one wants to be first into the pool” problem that all community sites must overcome. But as we studied what was going on, we believed that video played a part.

One person didn’t film a video response because they had just left the gym and didn’t want to look unprofessional. They forgot to come back and do it later. Other people were hesitant to be wrong in front of their peers, and video either felt too heavy, or they didn’t like that they could not easily edit their response later. Other things get in the way as well, such as speed of playback, or background noise while listening or recording.

While there are very interesting trends happening these days with video and such apps as Periscope and Snapchat, this is primarily with younger audiences. Our conclusion from our quick test is that video is too far outside the comfort zone of most healthcare professionals today, save for the newest generation in the field.

Was Testing Video a Mistake?

Going into our MVP, we knew that video was a big risk. We had one advisor who straight up told us, “this will never work.” But therein lies the rub. If an innovation group never does things that people are skeptical about, it will never innovate. We have to take risks, but test quickly.

We also have to take smart risks. One could debate whether video qualifies here, but as noted above, video is an increasing communication trend on mobile phones. It had the potential to humanize the interactions in the app and bring groups together more closely. There are also already text-based case discussion platforms out there. While the existence of a competitor is not a reason to shy away from something, we did want to try to push the limits of the state of the art and see if it cracked open new behaviors.

Ultimately, if we fail, we just need to do it quickly, and try to learn as much as possible.

Persevere, Pivot or Kill?

Given what we have seen, our recommendation would be to do a hard “pivot.” We still believe that case learning is an interesting space. We still think that crowdsourcing content is an appealing model. We are uncertain about whether self-organizing groups can work, but we like the potential there to reduce customer acquisition costs.

We believe that the experiment has shown that video will not work; however, we do not think that a text approach would be innovative enough. Our recommendation, if we were not already switching gears for a second prototype, would have been to stay focused on case learning but do a complete overhaul of the product design based on our lessons so far.

This Week

In week 9, we are synthesizing what we have learned so far, reviewing our quantitative metrics and doing additional qualitative research to understand how people viewed the app experience.

We are also prepping for our open house in Rochester, MN next Tuesday from 3 to 6 pm CT. Anyone at Mayo Clinic is welcome to join us and walk through our journey so far. Open houses will be hosted in Arizona and Florida after the first of the year. Those sites will be able to see the full journey, including Prototype #2.

We also have to come up with a second idea to prototype, and so minds are starting to turn towards new problem spaces and potentially creative new solutions.

November 11th, 2015

Week 7 - MVP - Minimum Viable Product Released!

By mary maryjpo

Week 7 for MCOLi (the online learning innovation group) was about putting our “minimum viable product” into the hands of real users.

To recap: in our initial research, we identified that case learning was the best and favorite way for healthcare professionals to learn. However, we believe that people are so busy these days that they need a more efficient, yet still effective, way to engage in case learning. One idea that emerged was a mobile-based product we called “Cases.”

Cases allowed someone to invite a set of peers to a kind of discussion group. People in the group could submit and review short-form video cases of 1-2 minutes in length. Then they could submit their own short video responses, taken with their phone.

In week 6, as described in our last blog post, we designed and built a working version of the product with a combination of custom code and existing software frameworks. For week 7, we needed to get it into the hands of as many groups as we could.

Our initial trial groups included:

* Dr. Richard Berger’s hand group

* Dr. Farrell Lloyd’s hospital internal medicine group

* Dr. Badr Al Bawardy’s gastroenterology group

* Gayle Flo’s inpatient CV group (with an emphasis on NPs and PAs)

* Andy Herber’s “Teaching Cases in Hospital Medicine” group

First of all, we would like to give a huge “thank you” to the group organizers listed above. They have all been generous with their time and supportive of what we are trying to test.

We also welcome anyone else to create their own groups! If you are interested, check out http://www.casesapp.co.

We have instrumented as much as we could. Some metrics we are gathering with software tools such as Google Analytics, but for others, we just manually count. With such small sample sizes and short time frames, we cannot take the metrics too literally, but they give us important directional insights. Here is our “board”:

(Image: our metrics “board”)

As you can see from the numbers, the initial days of “launch” were not without bumps:

1.) We had to change the product “onboarding” design once due to tech constraints, and we are currently changing it again because our sign-up rate is far too low. We think that people are getting the invitation email on their desktop computers, but not then opening it again on their mobile phones. We were hoping not to support desktop usage for this prototype, but it might be necessary (see the sketch after this list for the kind of mobile-only gate we mean).

2.) We were pleased with how easily our group organizers were able to make their video cases, but most of them still only had time to make a single case. All 9 invitees who actually signed up reviewed that video case, but only 1 has taken the step of submitting a video response. Now we need to figure out whether this is because: A. they are uncomfortable with video (a distinct possibility, and why we are testing video in the first place!); B. no one wants to be the first one to respond (a common issue with community-based products); C. we have a usability problem with the design; or D. something else we haven’t considered.

3.) Right now we are trying to encourage our invitees to sign onto the app and give it a shot, while making some simultaneous design changes to make this easier. (Aside: one of the advantages of creating the MVP as a mobile Web application, rather than a native mobile application, is that we can continually make and push improvements, as opposed to having to wait days or weeks for App Store approvals.)
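As promised in point 1, here is the kind of mobile-only gate we mean, sketched as a tiny Flask handler purely for illustration (the actual Cases app is a Rails application, and the exact rules are not spelled out here). It just checks the User-Agent header and shows a "please open this on your phone" message to desktop browsers.

    # Illustrative only: a crude mobile-only gate based on the User-Agent header.
    # The real Cases app is a Rails application; this Flask sketch just shows the idea.
    from flask import Flask, request

    app = Flask(__name__)

    MOBILE_HINTS = ("iphone", "android", "ipad", "mobile")

    def looks_mobile(user_agent):
        # Very rough heuristic: does the User-Agent mention a mobile platform?
        ua = user_agent.lower()
        return any(hint in ua for hint in MOBILE_HINTS)

    @app.route("/cases/<case_id>")
    def show_case(case_id):
        if not looks_mobile(request.headers.get("User-Agent", "")):
            # The "wall": desktop visitors are asked to reopen the link on their phone.
            return "Please open this link on your phone to watch and record video."
        return f"Case {case_id} player goes here."

As the week 8 post describes, a gate this blunt turned out to be too much friction, which is why we ended up cleaning up the desktop view instead.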

In addition to the product experiment, we have also done some research into legal risk around this idea, and begun the logic exercise around the business/financial model, which will then be translated into an Excel document.

Here is our latest status around our risks:

(Image: current status of our risks)

For the rest of week 7, we are trying to get enough people to try the application so that we can get an intuitive read on whether the concept has legs, even if pieces of the product need to be changed. If we can’t, that is a bad sign, but our job is to test things against the world that is, not what we wish it would be!

November 11th, 2015

Week 6 - Prototype #1 Getting Ready to Roll

By mary maryjpo

Week 6 for the MCOLi team (our shorthand for the online learning innovation program) was all about preparing to put real people through an actual product experience.

We have 4 weeks to test one of the ideas that emerged from our ideation phase. Week 6 was the second week of that 4-week sprint.  The idea we are testing is code-named "Cases," and it is a mobile video platform for sharing, discussing and learning from interesting medical cases.  

We had multiple activities going on in week 6, but for this post, we are going to focus in on our "MVP prep."

One of the best ways to test a product idea is to put customers through a close approximation of the real experience, and see what happens.  In order to do that for Cases, we had to accomplish a number of tasks:

* Recruit sponsors who were willing both to organize a "case discussion group" of 5 to 20 people and to record a few 1 to 2 minute video cases. Plus we wanted enough sponsors, and a diverse enough set of sponsors, that we could gather an interesting mix of data points.

* Build a live prototype version of our product concept in a matter of days

* Help our sponsors invite their group and measure how they actually used the product

Recruiting sponsors has gone well (it is great to be part of an organization where people are so willing to help each other). We have a diverse set of practice groups, roles, and levels of experience.  We have our initial cases (a side experiment proved that it is not too onerous for people to record a 1-2 minute video case on their mobile phones).


We also managed to hack together a workable prototype. A prototype is a stand-in for the real product that helps us test the basic functions the product will have when it is fully formed. Creating an MVP (minimum viable product) quickly requires a MacGyver mindset:  "Let's see, we have a paper clip, an empty fire extinguisher, and an electric socket. That should be enough to make a working organ transplant storage receptacle!"

In our situation, we had a few core requirements:

1.  We wanted interactions to happen on mobile phones, since healthcare professionals do not work at a computer often (but we didn't think we had time to build a truly "native" smartphone application)

2.  We needed to be able to load short video cases made by Mayo Clinic physicians and NPPAs (but we didn't need our users to be able to load new cases)

3.  We needed people to be able to view cases posted for their discussion group (but we didn't need to build a robust security system, yet)

4.  We needed people to be able to take, and post, a short video of themselves answering something about the case (and we actually limited responses to video, not text, because we wanted to test if video discussion was a good, or dumb, idea)

Thankfully in this era of open-source software, there were a number of existing frameworks that we could cobble together to make a mobile-browser-based prototype that, while not as elegant as a well-engineered product, would at least get us close enough that we could answer some key questions.

  1. Would greater than 50% of invitees actually try out the product?
  2. Would > 30% of those who viewed a case then make a comment?
  3. Would > 40% of our participants find the experience educational and worthwhile?

We also hope to learn many, many other things:

  • Will people watch just one case, or will they be interested enough to watch more?  
  • Will the short bursts of asynchronous interaction work with people’s busy schedules?
  • How does group size, or practice group, or similarity vs diversity of group members affect interactions?  
  • If we are able to capture an interesting case discussion from Mayo Clinic participants, would that group be willing to let other healthcare organizations access their discussion (as watchers, not participants)? If so,  would other healthcare organizations find it interesting to view Mayo Clinic discussions?
  • Would they want access to Cases for their own discussions?

Our hope is that we have made a prototype that works well enough for our testers that they give it a shot. MVPs are complicated because we are hoping to build an experience for participants that feels valuable to them, while not over-engineering the experience and building features that people do not find useful. All three of those letters matter.

In an ideal world, we would get around this by trying our MVP on one discussion group, and then rolling it out to many. However, our time frames do not really allow for that, so we decided to jump into the deep end of the pool with a number of teams in parallel and see how we fare. If we have made mistakes, we hope to make them quickly, and learn from them quickly as well.

We hope you are as curious to see the results as we are.

And if you are interested in trying the MVP and organizing a discussion group for some cases of your own, please let us know!

November 11th, 2015

Project Tools - Trello

By mary maryjpo

One of the fascinating things about this project is that we've been able to leverage a whole new set of tools within our project team. Tools like Outlook, Excel, Word, and MS Project are considered enterprise and therefore old and outdated. Instead, our colleagues from San Francisco and New York are leveraging new and flexible tools. These tools make it easier for teams to track collaborative work and communicate quickly. They promote working flexibly -- both in the work itself and in the worker's location.

Trello is one of the most flexible organization tools I've ever used.  It's so flexible that I am constantly thinking of new ways to leverage it -- both personally and professionally.

(Image: screenshot of our MCOLi Trello board)

Trello is organized into Boards (which I think of as projects), Lists, and Cards (the separate items on each List). Our MCOLi project has one Board. You can see examples of our lists in the image here. It's fabulous to see everything in one place. Our to-do, in progress and shipped (finished) lists are especially important. Anyone on the team can add items and assign them to people on the project. Cards can be reordered by our product manager, which shows the priority of things needing to get done. Notice that some of the items on lists start to grey out as they don't get touched.
Each card allows you to assign team members, create checklists, set due dates, and upload files. If you assign due dates, you can also see every card on every list via a calendar view.
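For the technically inclined, Trello also exposes a REST API that mirrors the Board, List, and Card structure described above, so you can pull your own snapshots or reports. A rough sketch in Python (the board id, API key, and token are placeholders you would generate from your own Trello account):

    # Rough sketch: pull the lists and cards from a Trello board via its REST API.
    # BOARD_ID, API_KEY and API_TOKEN are placeholders for your own Trello credentials.
    import requests

    API_BASE = "https://api.trello.com/1"

    def board_snapshot(board_id, api_key, api_token):
        """Return {list name: [card names]} for one board."""
        auth = {"key": api_key, "token": api_token}

        lists = requests.get(f"{API_BASE}/boards/{board_id}/lists", params=auth).json()
        cards = requests.get(f"{API_BASE}/boards/{board_id}/cards", params=auth).json()

        snapshot = {lst["name"]: [] for lst in lists}
        list_names = {lst["id"]: lst["name"] for lst in lists}
        for card in cards:
            snapshot.setdefault(list_names.get(card["idList"], "Unknown"), []).append(card["name"])
        return snapshot

Run against our board, this would give roughly the same "to-do / in progress / shipped" view shown in the screenshot above.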

Think about how many times you're collaborating on a project with others: lots of things to do, accountability, communication, shared files. Trello is a terrific way to manage all of that, and it has really served us well.

See a tour from their product website.

Anyone ever tried it?

November 11th, 2015

Week 5 - Idea Testing

By mary maryjpo

Week #5 for the Education Shield's online learning innovation group (code-named MCOLi) was all about zooming in on a particular idea and starting the process of testing it.

To recap, our research showed that cases are a compelling and efficient way for healthcare practitioners to adopt new working knowledge (this will likely not be a surprise for anyone). We believe that the discussion around cases can deepen learning on the content presented in cases, and our hope is that such discussion can be made asynchronous and online.

When we begin testing an idea, the first thing we do is run an assumptions and risks exercise.  We cannot test everything, so we need to figure out what is most important.

First, we lay out our current belief system for the idea -- what it is, who it is for, how it makes money, etc.  Then we ask a critical question: "What assumptions do we have that, if proven wrong, would cause this to fail?"

The next step is to loosely rank these risks by the level of uncertainty (if there is little evidence to work from, it is highly uncertain) and the impact on the business (high impact means it could seriously help or damage us).  The following graph might be too small to read, but it gives you a taste of the output of this exercise:

(Image: assumptions ranked by uncertainty and impact)

Next, we map our experiment plan against the most important risks, like so:

(Image: experiment plan mapped against the top risks)

And lastly, we try to keep a running check on how well our risks are faring given our experiments.  Things that are positive are flagged green, and things that are at risk of being invalidated get marked red. As you can see, we have just begun:

(Image: current status of our assumptions)
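For anyone who prefers code to sticky notes, here is a toy version of the same ranking logic. The assumptions, scores, and statuses below are invented for illustration; the real exercise was done on slides and boards, not in code.

    # Toy illustration of the assumptions-and-risks exercise:
    # rank assumptions by uncertainty x impact, then track their status.
    # All entries here are made-up examples, not our actual assumption list.
    from dataclasses import dataclass

    @dataclass
    class Assumption:
        statement: str
        uncertainty: int          # 1 = strong evidence already, 5 = pure guess
        impact: int               # 1 = minor, 5 = would sink the idea if wrong
        status: str = "untested"  # "untested", "positive" (green) or "at risk" (red)

        @property
        def priority(self):
            return self.uncertainty * self.impact

    assumptions = [
        Assumption("Busy clinicians will make time for short case discussions", 4, 5),
        Assumption("Group organizers can recruit 5-20 peers themselves", 3, 4),
        Assumption("Video responses feel acceptable to participants", 5, 5),
    ]

    # Test the riskiest assumptions first.
    for a in sorted(assumptions, key=lambda a: a.priority, reverse=True):
        print(f"[{a.status:>8}] priority {a.priority:2d}  {a.statement}")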

During the week, we did a fairly thorough competitive review, looking at everything from Figure 1 to QuantiaMD and various virtual patient applications. We also kicked off another wave of customer development -- this time broadening our range beyond NP/PAs to consultants, fellows and residents, RNs and physical therapists.

The noisy competitive landscape proved that there was interest and activity in the space, but made us want to attempt to leapfrog the crowd a bit. While we are hoping that access to fascinating Mayo Clinic cases will be a differentiator, that cannot be all we rely on.  We are thus testing out a product design risk and focusing not just on mobile but on mobile video for both the cases and the discussion.

Next on our plate is a working prototype that will allow us to share a case and see what kind of discussion does or does not unfold.  You can see a quick sketch below of the simple interaction:

(Image: quick sketch of the prototype interaction)

We have set three success metrics for this prototype; a rough sketch of how we plan to check them follows the list:

  • More than 50% of invited participants will try the prototype
  • More than 30% of users who view cases will respond with a question or comment
  • More than 40% of users will state that they have learned something from the discussion of the case
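With numbers this small, the check is little more than arithmetic on hand-counted figures. A minimal sketch (the counts below are placeholders, not real data):

    # Minimal sketch: check hand-counted prototype numbers against our success metrics.
    # The counts below are placeholders, not real results.
    invited = 40
    tried = 22              # signed up and viewed at least one case
    responded = 8           # posted a question or comment on a case
    said_they_learned = 10  # said they learned something from the discussion

    checks = [
        ("> 50% of invitees try the prototype",       tried / invited,           0.50),
        ("> 30% of viewers respond to a case",        responded / tried,         0.30),
        ("> 40% of users say they learned something", said_they_learned / tried, 0.40),
    ]

    for label, observed, threshold in checks:
        verdict = "PASS" if observed > threshold else "MISS"
        print(f"{verdict}  {label}: {observed:.0%} (threshold {threshold:.0%})")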

Next up (underway as we write): first, recruit our first sponsors who will contribute a case and help us gather together an invite-only "team" for discussion of the case; second, put the prototype into the wild and see if it sinks or swims!

November 11th, 2015

Week 4 - Ideation

By mary maryjpo

Week 4 for the online learning innovation lab (MCOLi) was the final step before shifting to a test-and-validate mode around a specific idea. We gathered the entire team together in San Francisco (we are normally geographically distributed). Our mission was to prioritize our top ideas, select an initial one, and plan our first experiments.  We came into the week with 6 ideas that had survived an initial culling process, as shared in our last blog post.

Early on in the project, we came up with a set of filters to help us prioritize ideas.  Last week, we ran a simple group exercise where we loosely scored each idea against our key filters. You can see the results here (1=good, 2=OK, 3=bad):

(Image: idea scoring against our filters)

Given how little we *truly* knew about the viability of each idea, the scoring was more useful for how it forced thoughtfulness and judgement than for the output of the numbers themselves.

We also asked our advisors Dr. Dave Cook, Dr. Dick Berger, Barbara Baasch-Thomas, and Dr. Farrell Lloyd to critique the ideas.  After these steps, we found ourselves with two ideas: "Interactive Slideshare" and "Mayo Talk." However, we did not have consensus between them. Quite the contrary.  The primary criticism of each idea (from different detractors) was that it was too incremental an improvement.

We posed a challenge to the team: take either idea and "stretch" it with more imagination.


The team deconstructed the essentials of Mayo Talk and also went back to some of the fundamental insights we had picked up during our customer development interviews:

- case learning is almost universally everyone's favorite way to learn

- everyone is so busy, an effective solution likely needs to be extremely efficient

- case reviews, as we witnessed in some team rounds, are a lightweight, compact, information-rich way in which practitioners share and interact around knowledge

- many Mayo Clinic medical practitioners seek out new, interesting cases to learn from, whether through grand rounds, conferences, or other means

- Mayo sees interesting/complex cases that other healthcare organizations would be fascinated to see

We asked ourselves a question. If we took the case as the fundamental "unit of learning," could we create an effective, lightweight platform?  

The team was split as to whether a case-sharing platform would be more interesting as an education tool versus a practice tool, but legal liability risk nixed the latter idea. We also knew that this could not just be about creating a library of cases, but rather about creating interesting interactions around cases.

To cut to the chase, the team decided to choose the case learning concept as our starting point.  The summary of the concept, as it stands today, is as follows:

"Briefcase" is a platform where groups can form to load, view,  share, and interact around interesting cases. The cases would typically be 1-4 minutes in length, presented through a mix of audio, images and text.  We imagine that groups could be formed by practice teams, medical school alumni groups, national speciality organizations, and more.  The members of a group could interact around a case, but ideally anyone on the platform (which would include many healthcare organizations) could view both the case and the discussion in a read-only mode.  Our hypothesis for the business model is that group organizers would pay to set up and host their group.

We would be lying if we said that the choice of this idea was without controversy.  There are some big risks which we will get into next week.  However, one of our core principles is to strictly limit opinion-based debates, and instead gather real data. That is what comes next.

Our last steps for the week were two further actions:

1. Identifying the core assumptions and risks around the idea. This helps us prioritize our learning goals, and thus the experiments we need to run.

2. Imagining a wide variety of research activities and experiments, some simple and some complex, that we could run in order to gather validating or invalidating evidence.

As an amusing sidebar, one of the other activities of last week was to tentatively rename our little group.  We realized that MCOLi is a bit of a homonym of E. coli...

This week, we have started the process of putting "Briefcase" through its paces. More to come next week!

We also have a request for you! Please add a comment to this blog or email us if:

1. You know of existing case sharing or case learning tools we should know about

2. You have a team that might be interested in joining a trial for this concept. We will be picking one or more specialty/group areas to first gather cases and then have people kick some tires.

November 11th, 2015

Week 3 - Research and Ideation

By mary maryjpo

Welcome to week 3 for the online learning innovation lab.  Our job this week was to continue coming up with ideas, one of which we will choose for a 4-week prototyping and validation phase.  While the week had a few expert and custdev conversations sprinkled throughout, a lot of it was heads-down research and product ideation.

A lot of ideas have fallen to the cutting room floor, but we ended the week with six concepts to examine and refine.  We might not end up using any of these ideas, but these six ideas are where things stood as of last Friday:


“Interactive Slideshare”

Online learning suffers from two big problems: it is either boring or prohibitively expensive to produce.  People enjoy live conferences, however, especially when the talks are made interactive.  Could we make online content both more interesting and cheaper to produce?  “Interactive Slideshare” imagines an online platform where recorded talks from existing medical conferences could retain that interactivity.  For example, imagine watching a talk from a conference you missed. When the speaker pauses to ask the audience a question, the video would stop for you to answer. Then you could immediately see how your answer compares to the audience as well as to the speaker. If you scored well, you would get CME credit.

Some of the big risks:

- Would we need to create our own interactivity platform on top of PowerPoint, or could we leverage a technology like PollEverywhere?

- Would conference organizers and speakers be willing to share their content?

- Would people be willing to pay for a subscription or access to specific content?

“Mayo Talk”

There isn't a way to learn, connect and share knowledge with medical peers online that's engaging. Could this be done with video chat?  Webinar tools like Webex, Gotomeeting, and Adobe Connect already exist, but is there an opportunity to: 1. build community and 2. create an effective asynchronous experience?

The platform could support case study walkthroughs, roundtable discussions, "ask me anything" interviews.

Some of the big risks:

- We believe we can excite practitioners to have discussions online

- We believe that users will make time to engage online

- We believe that browser capabilities will not be a blocker

- We believe that experts are willing to openly share their knowledge

“Interactive News”

What if you could have a lightweight way to get relevant breaking medical news that was more engaging and educational?  “Interactive News” imagines an SMS-based experience that gives you targeted medical news in the form of an interactive scenario rather than an article.  For example, if the American Heart Association released new guidelines on treating SVT, interested parties would get an SMS prompt on their phone to start a “virtual patient” scenario where they have to treat a patient according to the new guidelines.  They would be given background information, prompts to make decisions, and feedback on choices.

Some of the big risks:

- Cost-effectively staying on top of medical news and converting it into relevant and engaging scenarios

- Offering enough value that people are willing to subscribe to the service.


“Beat the Expert Sim”

Practicing health care professionals often struggle to get evaluative feedback on their clinical decision-making. This is especially true for those in small practices, where peer and expert advice and counsel are less obtainable due to time restrictions or practice size.  “Beat the Expert Sim” allows healthcare providers to view others performing relevant clinical practice in a simulation setting. They can provide realtime feedback on that care while watching the video, and then compare their feedback to peers and experts.

Some of the big risks:

- Coming up with premium features that people will pay for

- Viewers will be able to access all the information they need in order to make those judgements (for example, vital signs and temperature)

- Enough content is being created that users will continue to be engaged

“All Things Considered”

At the point of care, practitioners can find answers at their fingertips. Answers are great; however, a huge part of learning for health care providers is being able to ask the right questions of their patients, other people on their team, and themselves. Part of providing a level of expertise to your patients and to the rest of the team is being able to ask questions that could uncover aspects of the patient’s symptoms that would otherwise be left uncovered. “All Things Considered” is a tool that, alongside point-of-care resources, would prod practitioners to ask themselves the tough questions. It is a simple format where the doctor or NPPA would scan questions to jog their thinking and perhaps even ask the patient additional questions. Imagine it as an expert mentor in your pocket asking you, “Are you missing anything here?”

Some of the big risks:

- Getting doctors' time to contribute these questions

- We can have this as a plugin / alongside AME

- This can be a stand-alone product without a point-of-care repository like AME or Up-To-Date

- NPPAs and Physicians will like this information enough to pay for it

“Case Journal”

Journals, though rich in content, can be overwhelming for the busy and stressed healthcare provider. Finding the time to search journals for the information relevant to your practice is difficult. We heard that people love having access to information that is relevant to their practice.

We also learned that people enjoy spending time reading content that is relevant and thought provoking, but don’t want to read all the content. Case Journal is a newsletter that provides physicians and NPPAs with content from academic and medical journals that is directly related to their practice. We use the cases they see with their patients as the foundation on which we build our algorithm to find relevant articles. NPPAs and doctors subscribe to their personalized newsletter, with abstracts and summaries of relevant articles delivered to their inbox on a monthly basis.

Some of the big risks:

- Real-time access to EHR/patient cases to scrape for cases seen in the physician's or NPPA's practice

- We can sell this as a library service to health institutions

- We believe targeted content is a compelling value proposition to the health practitioners

In Conclusion

None of these ideas are perfect, and we might find that during week 4 we nix all six of them.  However, our mission is not to spend months agonizing over ideas, but rather to pick a good starting point and start testing and thus learning.  We believe that success lies outside of our heads.

Next week we will poke hard at these ideas, come up with a few more, prioritize the best ones based on our filters (see our priorities from blog post #1) and ultimately choose the first concept to test.

Please share your thoughts, ideas, and encouragements!

November 11th, 2015

Week 2 - Customer Development

By mary maryjpo

Every week, the Mayo Clinic online learning innovation lab will be posting an update on their activity and progress. Here is Week #2.

IDEATION

The startup ecosystem likes to say, “Ideas are worthless, it is the execution that matters.” However, good ideas are still a critical starting point. To generate some interesting ideas on a deadline, we have engaged in an intense series of research activities and design exercises.

We interviewed many potential customers and experts. We reviewed active Mayo projects and assets like Ask Mayo Expert. We explored the broad competitive space and the general state of continued education for healthcare professionals.

Given that people are creative in different ways, we are taking a “diverge and converge” approach to our own team activities. In other words, team members have space to think, research and sketch on their own, and then we come back together to share and synthesize.

We also ran some structured design exercises called “design studios”, which are designed to get a lot of ideas out in a short time frame. One focused on new learning products that could be created using the content in Ask Mayo Expert. The second one dreamed up ideas related to the Mayo Sim center.

These exercises are meant to be fast and loose, letting ideas flow and trying to prevent the team from locking up by trying to be too perfect too early. To that end, everyone sketches.

LOOKING OUTSIDE OF MAYO

In Week 2, we also began to look outside Mayo Clinic’s walls for customer learning. We began by interviewing 4 non-Mayo NP/PAs. Interestingly, we saw more similarities than differences. Continued education is primarily done at conferences, and almost everyone was given time and budget to attend conferences of their choice. We also observed that NP/PAs like to change specialties during their career. One difference we heard from NP/PAs in other organizations could best be described as “envy” for Mayo Clinic’s team-based approach to patient care, and the benefits that brings in terms of knowledge sharing across the team.

GRAVITATING AROUND A FEW IDEAS

Weeks 2 and 3 are dedicated to brainstorming many ideas, and week 4 will be about choosing one to test.

This week we found ourselves circling around a few ideas related to video and some of the dynamics we have seen in team meetings, grand rounds, and conference interactions. We are also exploring case-based approaches, new forms of simulation, the efficacy of virtual patient tools, and extensions to Ask Mayo Expert to help people discover what they don’t know.

WHAT IS NEXT?

Week 3 consists of further research, meetings with education innovators, refinement of Week 2 ideas, and brainstorming of additional ideas. By the end of week 3, we should have a narrowed down set of ideas, each with initial hypotheses around core elements such as value proposition, target customer, competition, business model, customer acquisition and more.

November 11th, 2015

Week 1 - Inception

By mary maryjpo

Over the last few years, those of us in the Mayo Clinic Education Shield have been asking ourselves how we could take online learning for healthcare professionals to the next level. We've been asking whether Mayo Clinic can be an online learning innovator, and have been exploring what this innovation capability could and should look like. Fast-forward to the present day: we have just kicked off a 4-month effort where we are going to test out how an innovation team would work. We will be ideating, prototyping and testing two ideas for online learning over these 4 months, and sharing weekly updates with you here.

Here is Week #1.

TEAM

Last Monday, we met in our rehabilitated mailroom in the Gonda building to kick things off. There are two dedicated Mayo people on the project, Abhi Bikkani and Jeannie Poterucha Carter, plus an oversight board including Dr. Mark Warner, Dr. Dick Berger, Scott Seinola, and Barbara Baasch Thomas. We also pulled in a specialist digital innovation firm called Neo to partner with us during the project.

GOALS

At a high level, our mission is pretty straightforward. We want to figure out how to make life-long learning for healthcare professionals more robust, accessible, efficient and effective.

We have a number of ambitious, high-level goals that serve as our north star. Here are three important ones, which we wrote in the form of headlines:

  • “Better medicine through better learning” — 50% of our customers (whether internal or external to Mayo) see better patient outcomes because of our service.
  • “Mayo creates the go-to learning system for medicine” — 25% of healthcare professionals in the USA are using our product/service.
  • “Innovation Lab pays its own way” — products from the lab have the potential to be break-even or better by year 3.

KICKING OFF

Our goal at the beginning of the project is to come up with a number of interesting product ideas. This can only be done from an informed position. We are developing our point of view through in-depth interviews of potential customers and expert advisors.

We have identified NPs and PAs as a growing, fast-changing, integral segment of the advanced care team, which could be an interesting place to start. We interviewed 17 NP/PAs over this first week, sent the Neo team to observe how the pediatric orthopedic team shares information (thanks to Kathy Augustine), and recruited a large number of NP/PAs from outside of Mayo at the NP/PA Internal Medicine Review conference (thanks to Jane Linderbaum).

We also sat down with the Center for Innovation, the Sim Center, and the AME team, in addition to a number of education experts such as Dr. David Cook, Mike O’Brien and Dr. Farrell Lloyd (and more!).

INITIAL OBSERVATIONS

We have already seen some interesting patterns from our qualitative research. For example, people told us the role of residents in hospitals has really changed over the last few years, which has impacted NP/PAs. We’ve seen how much learning at Mayo comes from conferences and talks, but also how much knowledge sharing really comes through the relationships that we all develop across Mayo. We have an assumption that healthcare professionals at other organizations struggle with continued education because they don’t have access to all that Mayo can offer. We are working next to prove or disprove that statement.

We asked people about their best learning experiences. A common expected answer was dynamic speakers, but also ranking high were simulation exercises. A surprising and very interesting answer was a story about listening to patients, rather than doctors, talk about their experience.

We’ve heard how people use Ask Mayo Expert versus Up-To-Date (give me a quick, practical answer versus give me the backup research), and how uninspired people are by the current state of online learning. And while it is clear how busy everyone is, we have seen just how much time and effort Mayo employees put into constant learning and being the best practitioners they can be.

PRIORITIES

While much of the week was dedicated to qualitative research, the oversight board met on Friday to discuss and prioritize the critical filters we are going to use to choose what ideas to test. Here are our filters, in our current order:

  1. Testable in the time frame we have (i.e. we believe that we can get meaningful data in 4 weeks of testing, using a “lean startup” approach)
  2. Connection to patient outcomes
  3. Compelling business model
  4. Fit with our market share goal (25% of USA healthcare professionals)
  5. Investment capital required to establish an initial beachhead of success
  6. Clear of overlap with existing units or projects
  7. Unique competitive advantage (confidence that we can be 10x better than the competition)
  8. Confidence that the market timing is right
  9. Team passion


WHAT IS NEXT?

The next two weeks are focused on brainstorming and continued learning about the market, problem space, and existing competitors. We are collecting and investigating ideas as we go along, with the next step of prioritizing our ideas and choosing the first one to test.

So there you have our summary of week 1. Reach out if you have any questions. Our goal is to be as transparent as we can.