Data Overload—The Best Ways to Manage and Interpret Data

The amount of data now available to life sciences companies can be overwhelming. The good news is that IDC Health Insights predicts that by 2022, 30% of life sciences organizations will have achieved data excellence, defined as “the concept of effectively using the right data at the right time.” But what are the keys to helping your organization reach that goal? And what can be done right now to help you better manage, store, and analyze all of the data you currently have and are working to collect? To help you with all of your data-related challenges, PM360 asked 10 experts:

  • Healthcare and the life sciences industry are generating a tremendous amount of data, but it is not always clean, structured, and compatible. What are the keys and best practices for optimal data governance? How can life sciences companies ensure the data they capture is accurate, well organized, consistent with their systems, relevant to what they need, compliant with all regulations, in a usable format, and not corrupted?
  • Besides capturing data themselves, what are the best options for life sciences marketers to obtain the data that most suits their specific needs? What are the best data aggregators to use? What types of partnerships work best? What sources generally provide the best data?
  • What are the best ways to store any data that companies collect or obtain? What are the advantages and disadvantages of using data warehouses, the cloud, internal systems, etc.? How can companies best arrange their systems so data is not siloed and can be accessed by whoever needs it?
  • How can companies ensure they are generating the right insights from the data and delivering it to the people who need it in a way they can understand? How do you determine the optimal way to visualize the data so it is easy to read? What other tips can you offer to make sure the people given the data are not overwhelmed by it or quick to dismiss it?
  • What other issues or problems should companies be aware of when it comes to data? What advice can you offer to solve those issues or prevent them from occurring?

Corinne Yaouanq-Lyngberg

The good news is that the availability of data, especially data related to patients’ health and lifestyle, is growing rapidly. Underlying infrastructure and dedicated data scientists must be in place to explore, mine, and test these new data sets to unveil insights and connections between behavior change and health outcomes. New insights will come from a step-wise combination of data modeling and experimentation that, when tested in real life, validates the initial hypothesis.

A fully mapped analytics roadmap, executed end to end, can very quickly become outdated and a waste of an organization’s investment. Whether organizations build internal expertise or purchase external know-how, the approach remains the same: Take a “design thinking” approach of understanding your customer needs, developing a prototype, and testing it in a real-life setting before scaling up the solution.

Tips for Optimal Visualization

In any dashboard, less is more. At Novo Nordisk, we start with the rule of no more than two to three metrics per promotional channel for a better end-user experience. Our stakeholders expect to get a full overview of their promotional landscape across different geographies in one or two clicks at most. Slow performance is a deal-breaker, so speed should remain the #1 priority as you build a self-guided analytical tool. Internal stakeholders’ expectations are exactly the same as the ones you would have when surfing the web: speed, few clicks, and everything in one place.

Another critical success factor is coming up with leading key performance indicators (KPIs) that demonstrate business impact. These should be selected based on empirical evidence and on recommendations from data scientists. So, before you embark on developing a visualization tool, make sure sales and marketing align with commercial operations on which metrics should be prioritized.

Jean Drouin

As pharmaceutical and biotechnology brand teams refine their approach to launching products and tracking market performance, they are asking more nuanced questions than ever before. They are using more narrow inclusion and exclusion criteria to identify precise patient segments, and a vast number of indication-specific metrics to rank HCPs, in the hopes of surfacing commercial insights that will help them optimize marketing and sales resources.

Refined Commercial Insights

However, the traditional analytics that commercial teams rely on often leave blind spots about patients and HCPs that can only be filled by exploring micro-level trends in vast amounts of patient-level data. The legacy model of buying disparate data (which is often unstructured, dirty, and expensive), and analyzing it with manual tools and cumbersome customizations is onerous and time-consuming. It is a far cry from the big data efficiencies that the banking and consumer industries have established. Today, brand teams want to quickly and precisely uncover patients who were misdiagnosed, predict patients who are at high risk of nonadherence, understand why an HCP might have switched a patient’s therapy, and learn when a patient may need to start a second-line treatment.

The new standard in advanced analytics for life sciences commercial teams delivers precision, speed, and productivity. Highly automated platforms, with massive, longitudinal, patient-level data sets can be queried to deliver actionable real-world insights in seconds. The data sets are continuously refreshed and cleaned to identify outliers, distinguish between unmarked screening and treatment diagnoses, attribute physicians to patients, and sort claims into appropriate specialty- or disease-related categories. Using machine learning and AI, the data is sequenced to reflect the order of events in each patient’s care journey to allow for seemingly endless analyses. These advancements allow brand teams to be more productive and get therapies in the hands of patients faster.
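
To make that sequencing step concrete, here is a minimal sketch (illustrative only, not the platform described above) that orders hypothetical patient-level claims chronologically within each patient to approximate a care journey; all column names and values are made up:

```python
import pandas as pd

# Hypothetical patient-level claims; identifiers, dates, and events are illustrative only.
claims = pd.DataFrame({
    "patient_id": ["P1", "P1", "P1", "P2", "P2"],
    "service_date": ["2023-01-10", "2023-03-02", "2023-02-15", "2023-01-05", "2023-04-20"],
    "event": ["screening_dx", "first_line_rx", "confirmed_dx", "screening_dx", "second_line_rx"],
})
claims["service_date"] = pd.to_datetime(claims["service_date"])

# Sequence events chronologically within each patient to approximate the care journey.
journeys = claims.sort_values(["patient_id", "service_date"])
journeys["step"] = journeys.groupby("patient_id").cumcount() + 1

print(journeys[["patient_id", "step", "service_date", "event"]])
```

A real platform layers cleaning, physician attribution, and machine learning on top of this ordering; the point here is only that sequencing by patient and date is what makes journey-style analyses possible.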

Jason Stephani

Data governance is a journey, not a destination, and an optimal process will build on some form of the following five building blocks:

1. Organizational Structure: A data governance council needs to be established with representation from stakeholders across all functional areas, such as Finance, IT, Commercial Operations, Sales & Marketing, Clinical, Research, and Supply Chain. This cross-functional council is led by a data governance lead who directs and aligns the mandate, strategy, goals, and scope of the operational program. For example, a sample goal would be, “enable trusted, controlled, compliant, robust, and accessible data to drive strategic insights.”

2. Data Standards: Data quality begins with establishing standards for data sources combined with a rigorous ingestion process for structured, semi-structured, and unstructured data. The ingestion process is the sentry for new data, and through active monitoring enables an organization to identify the data variety and veracity.

3. Awareness: A communication plan for developing awareness and an understanding of data governance tools and processes is a necessity. Data consumers should clearly understand the primary contact responsible for a given process or source.

4. Operational Plan and Processes: The data governance operational plan should include sections on data strategy, data management, and data infrastructure. Develop standard operating procedures prioritized to solve problems that align with business-critical issues, and tailored to execute efficiently (e.g., decision rights, access management, data monitoring and usage, change management, and business rule definition).

5. Goal Driven: Create data governance KPIs that are measured and monitored in a scorecard. Sample KPIs include: data assets under review, onboarded, or retired; data assets categorized or included in the enterprise data governance application; source connectedness; and use cases by function.
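
As a toy illustration of points 2 and 5 above, the sketch below runs two made-up records through a simple ingestion quality gate and summarizes the outcome in scorecard form; the field names, validation rules, and figures are assumptions rather than prescribed standards:

```python
# Hypothetical ingestion quality gate feeding a simple governance scorecard.
REQUIRED_FIELDS = {"hcp_id", "npi", "specialty", "source", "captured_at"}

def validate_record(record: dict) -> list[str]:
    """Return the quality issues found in one incoming record."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    npi = str(record.get("npi", ""))
    if not (npi.isdigit() and len(npi) == 10):  # NPIs are 10-digit identifiers
        issues.append(f"invalid NPI: {npi!r}")
    return issues

batch = [
    {"hcp_id": "H1", "npi": "1234567890", "specialty": "ENDO", "source": "crm", "captured_at": "2024-05-01"},
    {"hcp_id": "H2", "npi": "12AB", "specialty": "CARD", "source": "claims"},
]

results = {rec["hcp_id"]: validate_record(rec) for rec in batch}
clean = sum(1 for issues in results.values() if not issues)

# Scorecard-style summary a data governance council might monitor.
print(f"records ingested:       {len(batch)}")
print(f"records passing checks: {clean}")
print(f"pass rate:              {100 * clean / len(batch):.0f}%")
for hcp, issues in results.items():
    if issues:
        print(f"  {hcp}: {'; '.join(issues)}")
```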

John Chinnici

When it comes to storing data, I suggest companies look at this with respect to two layers: the application layer and the data management layer.

For the application layer, the ultimate goal is getting data as close to the point of decision as possible. Usually, this means aligning data applications specifically to the functional roles and business questions they need to answer (e.g., brand, sales, clinical, quality, etc.). For example, the most current and accurate customer data should be available in a field rep’s CRM tool or the statistics around content consumption should be easily accessible to a brand manager as they make planning decisions.

The data management layer is different: It must provide a consistent set of data across applications while retaining the flexibility to adapt and evolve as business questions change or new data sources become available. This is traditionally where data warehouses, data lakes, etc. have come in, providing a “single version of the truth” for data regardless of who’s using it.

Advantages of the Cloud

The cloud is actually providing some great innovations in this area, bringing more speed and flexibility to this underlying layer. Because companies no longer have to manage their own complete data management infrastructure (e.g., servers, software, etc.), they can scale up and down very rapidly.

Additionally, in some cases the cloud is providing benefits beyond basic hosting. Some companies are now looking at creating economies of scale across the industry. Modern cloud data management solutions are doing things such as standardizing data structures or third-party integrations and making all of those updates available to all their customers at the same time automatically. Broadly, this is increasing data accessibility and, in a sense, helping to democratize the data management layer.

Robert Gabruk

The three core principles that help convert data into actionable insights are:

1. Respecting the audience and understanding their “What’s In It For Me” (WIIFM). Often, data storytelling addresses a broad range of stakeholders, such as brand leaders, marketing managers, sales leaders, and, of course, data scientists, each with different objectives and experiences. Because of this, the concept of “WIIFM” is central to delivering the right insights to the right stakeholders. The story should be based on their need to know, not your need to inform.

2. Crafting a compelling introduction to the story by applying Situation, Complication, Question (SCQ). “The situation” refers to the context surrounding the challenge at hand. “The complication” is the lever that has triggered a change in the situation. And, finally, “the question” is the one created by the complication, such as: “What are we trying to accomplish with this story?” By creating a compelling SCQ from the outset, establishing the brand context becomes a less challenging endeavor.

3. Applying a logical framework that will facilitate content absorption and retention. Data analytics can be complex, so you need to structure the insights into a digestible, overarching storyboard. The Minto Pyramid Principle often guides my approach to data storytelling. The principle holds that ideas and thinking are more likely to be understood and received when organized and presented as a pyramid under a single point.

Additionally, patterns, trends, and outliers in the data are often easier to detect when presented visually. Best practices for data visualization include:

  • Right data: Include only the data needed—superfluous data only complicates the interpretation.
  • Right chart: Select the appropriate graphic. Typically, one needs to visualize five basic types of data: percentage of total, ranking of items, changes over time, frequencies of occurrence, and correlations between variables (see the sketch after this list).
  • Right takeaway: Ensure that the message is clear.
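
As an illustration of the “right chart” point above, the sketch below pairs the five data relationships with conventional chart choices and renders one of them; the mapping is general guidance and the sample data is made up:

```python
import matplotlib.pyplot as plt

# Conventional pairings of data relationship to chart type (guidance, not a fixed rule).
CHART_FOR = {
    "percentage_of_total": "pie or 100% stacked bar",
    "ranking_of_items": "sorted horizontal bar",
    "changes_over_time": "line",
    "frequency_of_occurrence": "histogram",
    "correlation_between_variables": "scatter",
}
for relationship, chart in CHART_FOR.items():
    print(f"{relationship:<32} -> {chart}")

# Example: changes over time rendered as a simple line chart (dummy data).
months = ["Jan", "Feb", "Mar", "Apr", "May"]
hcps_reached = [120, 135, 150, 148, 170]

plt.plot(months, hcps_reached, marker="o")
plt.title("HCPs reached by month (illustrative data)")
plt.ylabel("HCPs reached")
plt.tight_layout()
plt.show()
```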

Doug Fulling

Partnering with a data aggregator is often the fastest route to access conventional and unconventional data sources to solve business problems. Look for four characteristics in a partner:

1. Foundation: Evaluate the partner’s technology, privacy, and security capabilities, as well as the freshness and breadth of their data. These fundamentals are critical to a reliable, ongoing data supply.

2. Analytics Experience: A good partner will excel in searching for new data patterns and correlations and use advanced analytics and machine learning to provide better insights.

3. Innovation: The best partners have experience across multiple industries and can advise on combining and linking diverse, novel, and curated data sources while maintaining privacy compliance. For example, they can evaluate whether capturing data directly from patients, physicians, and others will help you gain a competitive advantage.

4. People: Finally, strong, outcomes-driven people on your partner team are the basis for shaping raw data into tangible, actionable outcomes that drive healthcare improvements.

Getting Insights from Data

Organizations have more data choices than ever, but too much data can muddy a result just as easily as too little. Striking the right balance is key. Be clear about the problem you are trying to solve in order to derive meaningful insights from the data. To ensure faster, more insightful results from the analytics team, take time to talk with the end user and the final audience to determine the business issue.

Once data is returned, tell a story. Express findings concisely and show a direct correlation to the business question. The story should also provide context to the data—explaining what it means, why it is important, and how it will affect the business.

Piotr Kula

To generate insights from the large amount of data available, commercial teams should empower a governing committee to develop guidelines for data management. This cross-functional team—representing data analysts, IT and operations, and business users—can make technology recommendations and establish data onboarding and maintenance processes to ensure compliance and enable achieving analytics goals.

Historically, functional groups operated in siloes for data management. IT would purchase technology and tools, data analysts would develop reports, and business users would mine insights. Ownership of data maintenance was often poorly defined, and as a result, companies ended up with inconsistent data, duplicate entries, and missing information. Some companies centralized data management responsibilities, but this group often lacked visibility into the needs of business users and other stakeholders.

A cross-functional team enables a comprehensive execution of the company’s data strategy. This group can help facilitate new data asset acquisitions, define the golden record for each physician, and ensure business users have access to data they can trust. First, the team should determine which data asset and technology purchases align with company objectives. Then, they should define data management strategies to ensure those investments can be leveraged effectively.
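
As a rough sketch of what defining a “golden record” for each physician can look like in practice (the column names and the survivorship rule below are assumptions, not a specific vendor’s process), duplicate entries from different source systems can be consolidated by keeping the latest non-null value for each field:

```python
import pandas as pd

# Hypothetical physician records arriving from two source systems.
records = pd.DataFrame({
    "npi": ["1234567890", "1234567890", "9876543210"],
    "source": ["crm", "claims", "crm"],
    "specialty": ["Cardiology", None, "Oncology"],
    "email": [None, "dr.smith@example.org", "dr.jones@example.org"],
    "updated_at": pd.to_datetime(["2024-01-15", "2024-03-02", "2024-02-20"]),
})

# Survivorship rule (illustrative): for each NPI, keep the most recent non-null value per field.
golden = (
    records.sort_values("updated_at")
    .groupby("npi")
    .agg({"specialty": "last", "email": "last", "updated_at": "last"})
    .reset_index()
)
print(golden)
```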

How to Ensure Data is Useful and Digestible

First, companies should determine what they hope to accomplish with their data. That will inform how the data should be ingested and processed. Companies should tailor access to data and tools based on user needs. For example, data stewards may need to update data ingestion pipelines, while business analysts only need access to analytics tools and published data. Business users should be informed of refresh schedules and general business rules, but may not need to know all the details of vendor-specific processing. Tailored access improves user experience, reduces duplication of efforts, and helps generate valuable insights faster.
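
A minimal sketch of role-tailored access along those lines, assuming the hypothetical roles and permissions named below:

```python
# Role names and permissions are illustrative assumptions, not a product's access model.
ROLE_PERMISSIONS = {
    "data_steward": {"edit_ingestion_pipelines", "view_raw_data", "view_published_data"},
    "business_analyst": {"use_analytics_tools", "view_published_data"},
    "business_user": {"view_dashboards"},
}

def can(role: str, permission: str) -> bool:
    """Check whether a role includes a given permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(can("business_analyst", "edit_ingestion_pipelines"))  # False
print(can("data_steward", "edit_ingestion_pipelines"))       # True
```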

Chris Sigley

In the management of clinical trial data, digitization has already become the norm in the form of electronic trial master file (eTMF). It is also becoming the norm for the archived trial master file, which must keep trial data ready for regulatory inspection for a minimum of 25 years.

While a positive transition from paper to digital is clearly underway, a recent survey of more than 200 life sciences professionals (conducted in July 2020 for Arkivum) identified fragility in certain aspects of the management of TMF data. It found that 38% of clinical trial sponsors and 45% of clinical research organizations (CROs) struggle to manage TMF data. Meanwhile, 38% of sponsors describe their ability to access data and records from their TMF archive as “extremely inadequate.” This figure rises to 65% among QA, compliance, legal, and regulatory professionals. The survey also found less than half of respondents are using digital archives that offer sufficient functionality to assure long-term inspection-readiness while also enhancing the trial sponsor’s ability to innovate—for instance by extending the lifecycle and commercial scope of approved medicines through new indications and formulations or licensing and partnership opportunities.

The Value of Data Archives

As new technologies such as AI rapidly gain traction, and as data management becomes still more sophisticated, digitized clinical trial data that has been archived in accordance with the FAIR principles (Findable, Accessible, Interoperable, Reusable) can hold scientific and commercial potential that might otherwise remain untapped. Beyond meeting regulatory requirements, an indexed, searchable, interoperable, discoverable digital archive can become a knowledge repository for an entire organization. It can create efficiencies, enhance collaboration, and avoid duplication of previous effort when the search is underway for new applications and indications for an existing treatment. Ultimately, long-term access to reliable, well-stewarded data is essential to life sciences.
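
To make the FAIR idea concrete, here is a minimal sketch of the kind of metadata record a searchable TMF archive index might hold; the fields are illustrative assumptions rather than a standard TMF schema:

```python
import json

# One archived TMF document described so it can be found, accessed, and reused later.
tmf_document = {
    "identifier": "uuid-or-doi-placeholder",            # Findable: persistent identifier
    "title": "Protocol amendment 02",
    "trial_id": "STUDY-001",
    "format": "application/pdf",                        # Interoperable: open, documented format
    "access_policy": "role-based, audit-logged",        # Accessible: defined retrieval route
    "retention_until": "2049-12-31",                    # supports the 25-year minimum noted above
    "keywords": ["protocol", "amendment", "oncology"],  # Reusable: rich, searchable description
}

# A searchable index can then be as simple as keyword lookup across records.
index = [tmf_document]
hits = [doc["title"] for doc in index if "amendment" in doc["keywords"]]
print(json.dumps(hits))
```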

Andy De

Life sciences companies need a compelling and future-proof analytics and AI strategy that approaches AI and analytics as one holistic initiative rather than two discrete initiatives. They should have a clear vision with well-defined goals, objectives, and metrics, not only for their tactical needs in terms of descriptive analytics but also for their strategic initiatives from a predictive and prescriptive analytics perspective. The key is to think through these goals holistically across the data, process, and people pillars and to select a best-of-breed, time-tested platform that serves analysts through code-free capabilities and data scientists through code-friendly capabilities.

Considerations for Choosing a Data Platform

I’d suggest they also have a clear perspective on the AI and analytics needs of the business and its lines of business (LOBs), such as integrated sales and marketing, commercial manufacturing, supply chain management, clinical data management, and R&D, rather than positioning this as an “IT challenge” and solution alone. It’s essential companies understand the descriptive and predictive analytics needs within each LOB at a use case and person level to ensure the platform, solutions, and services can meet their needs holistically, versus point solutions for each use case or LOB, which would undoubtedly be unsuccessful.

They also need a solid understanding of how the platform will help address their needs from a Governance, Risk, and Compliance (GRC) perspective, aligned with the regulations governing their industry. And it is important that any solution meets the needs of the organization not only today but also five to 10 years from now. Lastly, it’s crucial they deliver a proven platform to the enterprise that is easy to deploy and adopt, and that users and consumers can learn rapidly, ensuring quick time-to-value and ROI on their investments.

Sam Johnson

Not all data is relevant. Collecting data for collection’s sake is a common mistake that leads to faulty assumptions and hypotheses about the value of that data or the insights to be mined from it. Defining the business problem or goal that the collected data will directly measure is critical to success.

Data as a Cheeseburger

A good analogy is building and defining a cheeseburger: What ingredients do we, as an organization, agree we need to define our cheeseburger? Are we minimalist traditionalists who believe the bun, the patty, and a slice of cheese defines our cheeseburger? What about the burger with lettuce, tomato, cheese, grilled mushrooms, onions, avocado, barbecue sauce, ketchup, mayonnaise, mustard, jalapenos, and Thousand Island dressing? Do we really need those ingredients to identify our cheeseburger? What are we gaining by defining the cheeseburger this way? Many organizations collect similar irrelevant data that contextually has no value in determining its effect or impact on their business.

Consider the relevance of the data you’re capturing from the perspective of context and then, when you can make a direct connection to a business metric or problem’s solution, collect it. If the data list is long and you can’t tie any of it to a business goal or problem or even to other elements in the list, drop it or put it in a holding pen for future review. Remember, relevance is a human expertise question that requires a critical eye and an ability to not get lost in “big data” for big data’s sake.
