Last Tuesday, Nov 17th, the innovation team held an open house in Rochester. Over 3 hours, we estimate roughly 200 people came through. The energy was high. The discussions were great. Below is a virtual tour of the content of the event for those who could not attend.
INNOVATION LAB OPEN HOUSE TOUR
We have captured the wall displays from the open house, with some explanation for each. Given the layout of the room, the flow actually went counter-clockwise, so when a photograph has multiple components, they likely read right-to-left.
Part 1: The purpose of the current 16-week project is to help Mayo Clinic decide whether it wants to invest in an ongoing innovation studio for the Education Shield. Do these methods work for Mayo Clinic culture? Do the kinds of people Mayo Clinic might hire fit with the organizational DNA? Our general ethos is always to test an idea with a smaller amount of money before investing a large amount of money, and we are doing that with the innovation studio itself, not just the products it works on.
Another key part of our ethos is action over discussion. We need to test out the innovation studio by doing the work, because that is the only way to get real data for a decision.
We also reminded people of the high-level goals for the innovation studio, set at the start of the project and which remain a useful north star for our decision making. The big three: we wanted to have a direct connection to patient outcomes through our educational efforts; we wanted whatever we created to impact 25% or more of healthcare professionals in the USA; we wanted whatever we created to be breakeven or better by year 3.
Parts 2 and 3: The team consists of two major parts: 1. the board, which provides accountability, strategic alignment, and resources; 2. the team, which chooses the ideas and does the actual execution. The team is cross-functional, composed of design, engineering and strategy, and led by an entrepreneurial lead. There is actually a third major component to the team not shown above, which is the expert advisors we surround ourselves with. These people are an essential source of inspiration, reality-checking, and domain expertise.
On the left, you will see a rough process flow which describes the arc of the work over the first 8-9 weeks: we learned about Mayo Clinic and healthcare professionals, we ideated and chose an idea, we broke out our key assumptions and risks and tested the key ones, we built a "minimum viable product" and then synthesized our results. The key point we made to people when showing this diagram is that, while the illustration looks linear, it is really a whole bunch of really tight loops. At the early stage, we try to wrap everything within "learn, then build, then measure, then repeat" cycles.
Part 4: Here are a few key ways we work and think. Action over meetings. Outcomes (results) over output (deliverables, features). Data-informed, not data-driven. Scrappy doesn't mean crappy. The items on that wall mean a lot to the team.
Part 5: We kicked off the project in week 1 with an inception and a huge amount of customer development (1-on-1 qualitative research). We had an early hypothesis that NP/PAs would be an important early customer of ours, and spoke to about 30 of them. However, we also have tried not to limit ourselves too tightly, talking to nurses, doctors, residents/fellows and more. On the left of the photograph, you can see some of the "dump and sort" exercises we did to make sense of patterns we saw.
Part 6: Another essential step during week 1 (and beyond) was to talk to as many experts as we could. We visited CFI and the Sim Center. We dug into Dr. David Cook's research into learning methods and explored the innovative work of Dr. David Farley and Dr. Farrell Lloyd. The list of people to thank was quite long, and we have really valued the generosity of many people at Mayo Clinic.
Part 7: In weeks 2 and 3, we tried to get lots of ideas out of people's heads. This was done with a mix of structured and unstructured time. One of the structured exercises we like is called a "design studio" (or a "charrette"), in which the team picks a topic and then individuals sketch as many ideas as they can within 5 minutes. We then converge around and refine the best of those ideas. Examples of topic areas were: "How could we take Ask Mayo Expert to the rest of the world?", "How can we get around the scaling limitations of a physical Sim Center?"
However, we also believe in unstructured time for creativity. Each member of the team was encouraged to explore areas of interest and bring back their research and ideas to the rest of the team.
The last part of ideation was to filter our ideas. Most ideas landed on the cutting room floor, as you would expect, but we had 6 that survived. We had 9 key filters for prioritizing ideas, ranging from our ability to test an idea in 4 weeks (important for the purposes of this 16-week trial) to its direct connection to our north star principles.
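To make this kind of filtering concrete, here is a minimal sketch of how a weighted scoring pass over candidate ideas might look. The filter names, weights, and ratings below are illustrative inventions, not our actual 9 filters or real scores.

```python
# Illustrative sketch: scoring ideas against weighted prioritization filters.
# Filter names, weights, and ratings are hypothetical, not the team's real criteria.

FILTERS = {
    "testable_in_4_weeks": 3,   # can we run a real experiment quickly?
    "patient_outcome_link": 3,  # direct connection to patient outcomes
    "scale_potential": 2,       # could it reach 25%+ of US healthcare professionals?
    "revenue_path": 1,          # plausible route to breakeven by year 3
}

def score_idea(ratings):
    """ratings: dict mapping filter name -> 0..5 rating for one idea."""
    return sum(weight * ratings.get(name, 0) for name, weight in FILTERS.items())

ideas = {
    "Cases": {"testable_in_4_weeks": 5, "patient_outcome_link": 4,
              "scale_potential": 4, "revenue_path": 3},
    "Idea B": {"testable_in_4_weeks": 2, "patient_outcome_link": 5,
               "scale_potential": 3, "revenue_path": 4},
}

# Rank the surviving ideas by weighted score, highest first.
ranked = sorted(ideas, key=lambda name: score_idea(ideas[name]), reverse=True)
print(ranked)  # → ['Cases', 'Idea B']
```

In practice a scoring matrix like this is a conversation aid, not a decision engine; as the post notes, the final tie-break between the last two ideas came from a "stretch" exercise, not from arithmetic.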
Out of those 6, we actually ended up in a tug of war between two ideas, and so to tie-break, we challenged the team to pick either idea and "stretch" it, to take it to the next level. Out of this came the idea we ended up choosing: Cases.
Cases was (is) a mobile video case learning and discussion platform. It allowed people to create an invite-only peer learning group, and then share short video cases and short video responses to those cases, all on their smartphones.
We sketched out how people learn from cases today, and what a new approach might be utilizing smartphones.
The next essential step was to break out our assumptions around the idea: not just what the product was, but who it was for, what their goals were, how we acquired customers, how we made money, etc.
Then we asked a critical question: what assumptions do we have, that if proven wrong, would cause failure? Out of these assumptions we can spot our big risks: things that feel highly impactful and also highly uncertain. From here, we started sketching experiments.
At this point, we are at the end of week 4.
Part 8: The team ran a number of experiments around Cases, some big, some very small. For example, we didn't know whether participants would be able to easily create short video cases. For this, we ambushed a few (targeted) people in the halls of Mayo Clinic and asked them to try to record an interesting 1 to 2 minute case, right there on the spot. And people were able to do it!
During this time, we also did a lot of market and competitive research. We learned that there was a lot of case-sharing activity on the Internet already, but mostly in the form of text discussion boards, or "virtual patient" engines.
No one seemed to be exploring video or private peer groups. Was this because it was a bad idea, or because it hadn't been considered?
Part 9: One of the concepts we wanted to get across is what we refer to as the "truth curve," highlighted in Giff Constable's book Talking to Humans. The essential point is that you only get indisputable proof about an idea once you have a live product in the market -- either people are using it, or buying it, or they are not. However, you should not wait until that point. You can gather insights and data far earlier, but it requires using your judgment to interpret results.
Part 10: We believed that we could very quickly create a working prototype of Cases, and truly test it out in people's hands. Eric Ries coined the phrase "minimum viable product," and our interpretation of what MVP means is: 1. the smallest thing you can make, 2. that you hope is usable, desirable and valuable, 3. and which feels like a product to the user (even if it isn't real behind the scenes).
Part 11: One of the things our designer did when designing the Cases MVP was to map out the "user journey" in terms of people's goals and the possible actions that mapped to those goals. However, the important thing to note is that we did not attempt to implement all of these features or view this as a list of "requirements." Rather, we cherry-picked the most essential elements that felt minimally required.
Part 12: In early experiments, we often fake a product and run the "back end" manually. These are called "wizard of oz" experiments for obvious reasons! However, for Cases, we didn't need to fake the back end. Using a combination of custom software code, open source frameworks, and cloud services for infrastructure, we were able to make a fully functional version of the product in under a week.
At this point we were around the end of week 6. During that week, we had also recruited our initial testing groups for the product.
Part 13: When we run an MVP experiment, we have to be willing to watch our experiment fail, but that does not mean we want it to fail, or that we are willing to let it fail for silly reasons. We want to test VALUE, not usability!
When we first put Cases in people's hands, we hit some immediate usability issues (as one usually does). Our first onboarding flow was confusing, so we fixed that in about an hour and pushed a new version live. We also had a problem where people were opening up their email invitations on their desktops, but they could not really run the application on their desktop. So we quickly added a feature that let them SMS a link to the application from their desktop to their phone. That seemed to fix the problem.
Part 14: We went into our MVP experiment with some quantitative pass/fail targets. We instrumented the product to ensure that we could track how it was being used, and if we were hitting our targets.
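As an illustration of what that kind of instrumentation can feed into, here is a minimal sketch that checks logged usage events against quantitative pass/fail targets. The event names and target values are hypothetical examples, not our actual metrics.

```python
# Minimal sketch: compare logged usage events against pass/fail targets.
# Event names and thresholds are hypothetical, not the experiment's real metrics.

from collections import Counter

# Example weekly targets per group, e.g. "at least 3 cases and 6 responses posted"
TARGETS = {"case_posted": 3, "response_posted": 6}

def evaluate_group(events):
    """events: list of event-name strings logged for one group in one week.
    Returns (passed, counts); passed is True only if every target is met."""
    counts = Counter(events)
    passed = all(counts[name] >= goal for name, goal in TARGETS.items())
    return passed, dict(counts)

# A group that posted 4 cases but only 5 responses misses the response target.
week_events = ["case_posted"] * 4 + ["response_posted"] * 5
passed, counts = evaluate_group(week_events)
print(passed, counts)
```

The useful discipline is setting the thresholds before the experiment runs, so the data confronts the team rather than the team rationalizing the data.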
In the photograph above, you can see the results we were seeing from each team running with the product. The bulk of the data came from weeks 7 and 8.
We had also taken the MVP out to 9 other healthcare institutions outside of Mayo Clinic. While we generated interest, we did not get any true takers, which was a bad early sign in terms of urgency of problem and our initial value proposition.
Part 15: In week 9, we synthesized our results. Interestingly, we kept on having new groups hear about Cases and ask to try it out. We are currently watching how they are using the platform.
However, our overall conclusion from qualitative and quantitative data is that video responses presented too high a barrier to entry for users. Video did not fit into their daily habits and was too public for many. However, the appetite to capture the value of case discussions appeared to be high amongst groups and is worth exploring further.
Part 16: For the purposes of this 16-week trial project, we are not going to iterate on Cases, even though there are interesting directions to take it, but instead are going to explore a new idea. We asked attendees of the open house to write down *their* ideas for where we might go next. Above is what that board looked like at the end of the day. Lots of food for thought for our team!
Part 17: We also asked attendees to vote on a whiteboard as they exited the open house. We asked two questions: 1. Do you like how we are approaching innovation? 2. Do you think Mayo Clinic should invest in education innovation?
The signal was clear that people thought Mayo Clinic should invest in education. Sentiment toward our approach was also largely positive, though not universally so. Frankly, we really appreciate that honesty. We know we are not perfect, but our entire approach is to do our best, learn as we go, continually improve, and be transparent as we do so.
That concluded our open house. We hope you have enjoyed this post-event walkthrough, and as always, send comments our way.
The last part of last week (our week 10) was focused on choosing our next problem space. You'll hear more about that next week!