As a global design innovation and strategy firm with 45 years of experience in the healthcare space, Worrell has a deep portfolio of medical devices we’ve helped our clients bring to market. Over the past several years, however, we’ve seen an interesting shift underway, with an increasing percentage of our design business being driven by a relatively new type of client for us: the pharmaceutical industry.
For many of our pharma clients, their challenge lies in developing meaningful ways to “go beyond the pill,” meaning to apply advanced digital technologies to support, enhance, or even replace a traditional drug regimen.
Examples of this could include virtual reality to manage pain, augmented reality to assist a user during a self-injection, or a voice assistant program to help manage lifestyle factors impacting chronic disease. But to arrive at each of these “digital therapeutics,” it is important to first put them through internal pilot “experiments” to test their viability.
Prospective Data Through Prototypes
It’s incredibly hard for an end-user to accurately predict how they will respond to a new concept (though many are happy to try). This is why the designer’s approach is to leverage behavioral prototypes for user feedback testing. By placing a mock-up, whether low- or high-fidelity, in front of a user (in their hands, in their home, and so on), we can actually see an end-user’s true reaction play out prospectively, prompting rich discussion and refinement.
This process of concept prototyping, user testing, and refinement is second-nature to a product designer, or any medical device company for that matter. From a pharma perspective, however, this bite-sized, qualitative approach to research can seem downright novel.
Of course, it’s precisely these types of perspective shifts that jolt new creativity. It is fantastically rewarding for designers to apply our skills to entirely new forms of disease treatment and prevention, influencing the way pharma develops new products for customers. But these collaborations have influenced our practice as well. No better example of this can be found than in how we’re designing small scale prospective design studies (we call them “experiments”). These experiments have a little more structure than your typical early concept testing, a few more Ns in each sample, and a whole lot more data crunching on the back-end. We’ve found that higher-powered experiments with measurable outcomes go a long way in understanding and communicating the critical insights that drive new product design and strategy—especially for pharma clients.
An Example: Managing Diabetes with Alexa
In early 2017, we were eager to learn more about designing voice experiences. To get started, we drew on years of experience in the diabetes space to design an Alexa skill intended to help people manage the day-to-day lifestyle factors influencing their diabetes. We then recruited 16 patients—all naïve to voice assistant technology—to participate in a two-week experiment, where each was given an Amazon Echo device with our beta skill enabled.
Each day, participants interacted with Alexa by giving updates on sleep, diet, and exercise. Alexa responded with tips, goal-tracking accountability, and reminders. We also tested the native messaging functions within Alexa to create a simulated patient community message board.
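The daily check-in flow described above can be prototyped long before any voice platform work begins. The sketch below is a hypothetical, simplified stand-in for that kind of dialog, using plain keyword matching in place of real Alexa intents and slots; the topics and replies are illustrative assumptions, not the actual skill Worrell built.

```python
# Hypothetical sketch of a daily check-in dialog for a lifestyle-tracking
# voice skill. In a production Alexa skill these topics would be intents
# resolved by the platform; a plain keyword match stands in here so the
# conversation flow can be drafted and tested before any SDK work.

RESPONSES = {
    "sleep": "Thanks for logging your sleep. Aim for seven to eight hours tonight.",
    "diet": "Got it. Remember your goal of five servings of vegetables today.",
    "exercise": "Nice work. That's another day toward this week's activity goal.",
}

FALLBACK = "Sorry, I didn't catch that. You can update me on sleep, diet, or exercise."


def daily_check_in(utterance: str) -> str:
    """Return the assistant's reply to a participant's daily update."""
    text = utterance.lower()
    for topic, reply in RESPONSES.items():
        if topic in text:
            return reply
    return FALLBACK
```

A mock-up at this fidelity is enough to read the scripted replies aloud with test participants and refine tone and content before committing to a full skill build.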
The outcomes of this first foray into voice design were enlightening to say the least. As designers and developers, we had gotten our first taste of what it’s like to work in this new medium—learning which tools and methods were useful for drafting, editing, prototyping, testing, and debugging; seeing how long to allow for each stage; etc. But more importantly, from a patient standpoint, we started to gain valuable insights around how, when, where, and why this tool may have utility.
For instance, our participants responded very favorably to the Alexa-supported goal-setting and goal-tracking features. They also engaged regularly with the symptom tracking and community messages. Many of them told us directly that this was because they quickly found themselves thinking of Alexa as a “person” (friend, confidante, helper, nurse, etc.). This was a powerful insight as it opened up untold possibilities for how to apply voice technology to support patients.
Of course, we learned some tough lessons as well. For example, going through a voice-based survey each day quickly grew as tiresome as it sounds. What’s more, when the technology would fail, as new technology sometimes does, participants’ frustration levels would spike and their confidence levels would immediately sink. It seemed Alexa wasn’t granted nearly the leeway to make mistakes that one might extend to an actual human.
The Basics for Experiment Design
There’s no one way to design an experiment, but our experience has taught us that there are three key points to getting the most meaning out of your measurable outcomes:
1. Start Early. It’s almost never too soon to experiment. Even a low-fidelity prototype can be shared with users to provide early feedback. An example of this might be a sketched storyboard that describes a new service scenario. Key features and benefits are communicated at a glance, clearly and consistently, prompting more insightful feedback to guide next-stage refinement. Too often, clients struggle with making “perfect” the enemy of the “good.” Experimenting with early concepts can help cut through front-end ambiguity to clarify design direction quickly.
2. Power Up. In developing new products to bring to market, the FDA guidance for human factors advises five to seven users per user group for formative testing, and 15 for summative. In fact, this is basically modeled after a traditional design approach to iterative testing and refinement. However, powering up an experiment just a few notches to 30 to 35 participants yields more statistically reliable insights. This can go a long way when uncertainty is high and data is your audience’s love language.
3. It’s a Sprint, not a Marathon. Run an experiment as long as needed to get the data-driven insights you’re after (30-day re-order, 90-day A1C reduction, etc.), but try to keep it lean. If anything, it’s better to run a shorter experiment, then refine and repeat. Furthermore, consider how you might gather data as you go. We’ve run experiments as short as 24 hours with great results, and two-week experiments where data analysis was happening on a daily basis. The point: Experiments should not slow down your development timeline. In fact, they should save you time (and cost) by revealing potential risks and challenges earlier.
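The “Power Up” point can be made concrete with a back-of-the-envelope sample-size calculation. The sketch below uses the standard normal-approximation formula for comparing two proportions at a 5% significance level and 80% power; the effect sizes (50% vs. 80% task success, say, between two concept variants) are hypothetical assumptions chosen only to illustrate why sample sizes in the 30s start to yield statistically defensible comparisons.

```python
import math

# z-scores for a two-sided alpha of 0.05 and 80% power (standard values)
Z_ALPHA = 1.96
Z_POWER = 0.84


def sample_size_per_arm(p1: float, p2: float) -> int:
    """Participants needed per group to detect a difference between two
    proportions, using the normal-approximation formula."""
    pooled_variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((Z_ALPHA + Z_POWER) ** 2 * pooled_variance) / (p1 - p2) ** 2
    return math.ceil(n)


# Hypothetical example: detecting a jump from 50% to 80% task success
print(sample_size_per_arm(0.5, 0.8))  # -> 36 participants per group
```

Smaller effects demand far larger groups, which is exactly why a handful of formative-test participants can surface usability problems but cannot support the kind of quantitative claims pharma audiences expect.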
Ultimately, a human-centered design process that emphasizes early concept prototypes, iterative design, and prospective user testing with measurable outcomes is still a proven approach to help discover new and better ways to deliver care.