The past few years have seen rapid improvements to the most common analytics solutions used in the pharmaceutical industry. We can now deliver more compelling experiences to our business users than ever before, satisfying their voracious appetite for data big and small.
And yet, in spite of the urgency and excitement, companies are not realizing the full potential of their investments in these solutions. Research has shown that successful adoption is the major driver for return on investment (ROI) of technology solutions. But low adoption is a perpetual challenge for all types of enterprise systems, analytics solutions included.
As the investment levels continue to grow in response to the demand for analytics, we can ensure that our strategy—and the accompanying technology investments—deliver the intended value and impact by creating compelling, usable solutions for our target audience.
Users Need Proper Motivation to Use New Technology
Adoption issues are ultimately about the inability to achieve the expected change in end-users’ behavior in response to new technologies. Knowing that adoption is a critical driver for ROI, how do you get people to actually change how they are working, including adapting their current work practices to the new technology? In an ideal state, you should be thinking about that before you build the system, not after.
The work of behavior scientist and Stanford professor BJ Fogg reminds us that for end-users to adopt a solution, they need motivation, ability, and a trigger. Only with all three elements in place can we expect to see behavior change. Analytics solutions can provide the ability (access to the right data in an analytics solution) and the trigger (alerts or notifications of some kind); these are typical parts of any implementation. Unfortunately, only a solution truly grounded in an understanding of user (or customer or patient) needs can ensure that the end-users' motivation is accounted for from the beginning.
Here is a simple, patient-centered analogy: If a patient has diabetes, it may be tempting to provide them with a device or a connected app that helps them track their blood sugar levels. But if the patient has no desire (or is unwilling) to change their eating and exercise habits, is there really value in providing them with the new technology? The same challenges occur on a much larger scale with enterprise technology investments—we assume that technology accompanied by launch communications and training will naturally drive behavior change, but that is not the case. It is challenging to drive that change if the proper motivation hasn’t been accounted for in the business model or the solution design up front.
Companies Must Focus on More Than Vanity Metrics
Analytics can help to change behavior, but perhaps not in the ways that you would expect; you need analytics about your solution.
Surprisingly few companies think about how to measure the value and impact of their solutions before the implementation begins. In order to drive change—either before or after an implementation—there has to be a clear, common understanding of the intended impact of a new solution. Even solutions (such as CRM) that we take for granted today in sales and marketing should have a clear business goal or outcome they are intended to achieve, such as sales rep compliance or managing marketing campaign budgets.
In many cases, usage statistics (usage volume, number of unique visitors) are the only numbers being collected. Unfortunately, these “vanity metrics” do little to inform business decision-making, including change management efforts. In an ideal situation, there should be three levels of measurement:
- Usage statistics include basic information about traffic, unique visitors, length of stay, and devices or browsers used. This has some value for understanding the number of concurrent users and ensuring servers are sized correctly, and it may help identify changes to front-end code based on browsers in use. This data may also tell you whether each target user is accessing the system—but it doesn’t tell you whether those interactions are meaningful.
- End-user behavior metrics include data related to usability—the efficiency and effectiveness of task completion, plus satisfaction. The goal is to understand how, and whether, the tool supports the work. Good analytics can tell you, for example, whether a brand team can effectively locate collateral in a document management system to launch a campaign, or how time-consuming it is for a sales rep to enter an opportunity into CRM or update a physician’s contact details. Rather than tracking “vanity metrics”—“Oh, usage of our reporting spikes on Monday!” (when weekly sales data is released) or “Oh, look at all the traffic to our new website!” (because a campaign email was just sent)—we should be asking, “Why are these reports not being used?” or “Why do so many users fail to complete the ordering process?” or “Why do users always look here for XYZ information, rather than where we expect them to look?” In other words, the analytics can also tell us about the behavior of the target users. Understanding those patterns will lead to solutions that are better aligned with how they want to work, and ultimately drive better outcomes for our technology investments.
- Business impact metrics are data related to business outcomes enabled by the solution (e.g., improved compliance reporting, more timely tracking of campaign spend, etc.). For a new medical device launch, it might be important to know how many physicians have registered on the website to place orders. What is the typical timeline for reimbursements? Are there returns? Or are there adverse events being reported? It is harder to drive the right focus with executives around “getting more users into the system on a daily basis” than it is to say, “We’re failing to meet the target number of orders we had established as part of our project charter.”
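The three levels above can be made concrete by tagging every KPI with its measurement level, so that reporting never presents usage numbers in isolation. Here is a minimal sketch in Python; the metric names, targets, and values are purely illustrative (in practice, targets would come from the project charter):

```python
from dataclasses import dataclass

# The three measurement levels described above. Tagging each KPI with a
# level lets a dashboard separate vanity metrics from behavior and
# business-impact data.
LEVELS = ("usage", "behavior", "business_impact")

@dataclass
class Kpi:
    name: str      # hypothetical metric name
    level: str     # one of LEVELS
    target: float  # agreed target (e.g., from the project charter)
    actual: float  # latest observed value

    def on_track(self) -> bool:
        return self.actual >= self.target

# Illustrative numbers only.
kpis = [
    Kpi("unique_weekly_visitors", "usage", 500, 620),
    Kpi("order_task_completion_rate", "behavior", 0.90, 0.72),
    Kpi("physician_registrations", "business_impact", 200, 145),
]

# The usage-level metric alone looks healthy; the behavior and
# business-impact levels reveal the adoption gap worth escalating.
for kpi in kpis:
    status = "on track" if kpi.on_track() else "below target"
    print(f"{kpi.level:>15}: {kpi.name} is {status}")
```

A structure like this makes the article’s point mechanically visible: a team reporting only the first entry would declare success, while the other two levels surface the real adoption and business-outcome gaps.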
For your implementation to drive business value and impact, the deployment of any solution should start with the end in mind. What business outcomes do we expect from a new solution and how will those be measured? What information needs to be tracked to reduce risk or ensure proactive response to changes in the market?
In other words, in order to drive behavior change you need the right metrics in the system and the right metrics about a system. This data is only useful to the extent that it informs the progress on a business strategy. Establishing and tracking the right metrics makes it easy to create urgency around solving both business risks and adoption challenges, should they arise.
Align Business Goals with User Behavior
In the enterprise analytics space today, typical implementations don’t focus on people and their motivations or on proper KPI development and tracking. But if you have a live system that is not creating the expected change, start where you are. A good first step is to understand the gap between business expectations and actual behavior.
Training and communications (or any kind of change management program, for that matter) have to directly address the gap between the as-is and the to-be state. There are two simple questions that the project team can ask to help identify the problem areas, focus the discussion, and inform next steps:
- What is the business trying to achieve with this new system? There is an underlying business goal or outcome that needs to be explicit and well understood by the team, such as increased orders, more satisfied customers, better market coverage or increased click-through rates.
- How do we ensure good alignment between those business goals and the end-users of the system? There needs to be an explicit focus on how the customer or end-user is working today and how the new system is affecting their work activities. Something harder to get at (but equally important) is what motivates those users. The most obvious cases are financial drivers: sales reps will adjust their behavior in response to a well-defined compensation plan, and patients are more likely to buy with an appealing rebate. But other drivers may be more subtle, such as a desire for recognition, promotion, or learning. Oftentimes, these are not questions that a project team can answer in isolation; getting meaningful answers may require the skills of a user experience researcher, behavior scientist, or change management expert—skills that may not be present in a highly technical team.
Once the gap is well understood, the project team needs to revisit the technology implementation itself. Does the solution effectively balance business needs, technical constraints, and the needs of end-users? In complex enterprise implementations, it’s typically people’s needs that are short-changed. So, the most important question is almost always what trade-offs can be made in favor of the end-user or customer (that would lead to better adoption).
Driving behavior change and ultimately adoption is not about more training or better training or more communication. When faced with adoption challenges, many teams seek to reinforce the messages and the features that have already been defined. The reality is that fixing adoption challenges after the fact is usually not cheap—features, functions, and workflows in the solution may need to be changed to make end-users more efficient and effective at their work. The good news is that users who have been engaged in these key decisions and who are able to complete their work with minimal challenges are inherently more satisfied with the system, and more likely to adopt it.
Revisiting these issues after going live may seem like an onerous task, and it is also significantly more expensive. Thus, it’s a much better use of time and resources to articulate the business strategy and end-user (or customer or patient) needs from the beginning. In that way you can help your company realize the value and impact of the solution through higher rates of adoption.