Much like the broader landscape of data collection and analytics, specialty products are unique and complex. These complexities impact the ability to understand the patient’s journey. Most patients interact with multiple sites of care, from pharmacies, patient service hubs, and primary provider offices, to alternate sites of care, such as infusion centers, hospitals, and clinics. This variety of care settings typically results in fragmented data, because each site captures and procures data differently.
The result is an incomplete view of patient care. Differences in the data elements captured by each source make piecing together the patient’s path an arduous task, particularly when specialty providers do not supply all the data elements needed to link patients, their healthcare providers (HCPs), and payers.
Linking patient data introduces the opportunity to include not only additional patient characteristics, such as social determinants of health, but also more clinically rich data, such as lab results, patient adherence, and other longitudinal behaviors. Clear linkage of patient touch points and pathways across time and source is a must, as are consistent common identifiers that can be used for patient matching.
Additionally, each added piece of linked patient data elevates privacy risk under HIPAA: the more that is known about a de-identified patient, the greater the risk of re-identification.
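To make that trade-off concrete, one common way to quantify re-identification risk is k-anonymity: group records by their quasi-identifiers and treat 1/k, where k is the smallest group size, as the worst-case probability of re-identification. The sketch below uses fabricated data and illustrative fields to show how linking more attributes shrinks those groups and raises the risk:

```python
from collections import Counter

# An illustrative sketch (fabricated data, hypothetical fields): measuring
# worst-case re-identification risk with k-anonymity. Records are grouped
# by their quasi-identifiers; the smallest group size k bounds the
# worst-case probability of re-identifying any one record at 1/k.

records = [
    {"zip3": "191", "birth_year": 1958, "gender": "F"},
    {"zip3": "191", "birth_year": 1958, "gender": "F"},
    {"zip3": "191", "birth_year": 1958, "gender": "M"},
    {"zip3": "606", "birth_year": 1971, "gender": "M"},
    {"zip3": "606", "birth_year": 1971, "gender": "M"},
]

def worst_case_reid_risk(records, quasi_identifiers):
    """Return 1 / k, where k is the size of the smallest group of records
    sharing the same values for the given quasi-identifiers."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return 1.0 / min(groups.values())

# Linking only one attribute leaves larger groups and lower risk...
print(worst_case_reid_risk(records, ["zip3"]))  # 0.5
# ...while linking more attributes shrinks the groups and raises the risk.
print(worst_case_reid_risk(records, ["zip3", "birth_year", "gender"]))  # 1.0
```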
These complexities affect the insights needed for key commercial activities, such as finding and understanding a patient’s treatment pathway, and they limit the ability to identify new patients, continuing patients, or patients lost to competitors. When these problems arise, answering specific questions becomes difficult because data is incomplete and duplicated when it is integrated from various sources. Return on investment for patient programs and services then tends to rest on the service providers’ own reporting of their value rather than on an organization’s internally calculated value.
Through our experience, IQVIA has found that using a single partner for data aggregation and integration can streamline processing while reducing errors and time to insights. We have further performed a re-identification risk determination and implemented a de-identification strategy that keeps the probability of re-identification to a minimum, thereby allowing the client to receive the data and perform any number of analytics.
In this two-part series, we will walk through patient journey challenges and success factors, and detail how to apply a structured approach to aggregation and integration services. The series will also focus on leveraging patient data, explore use cases, and share a case study that highlights the importance of data aggregation continuity.
Several critical success factors must be considered for patient insights to reach a best-in-class level:
Data aggregation coupled with integration services simplifies the path to understanding patients by providing robust, complete data. Combining traditional aggregated data with third-party and/or syndicated data can create a single longitudinal patient journey. Traditional specialty data aggregation services tend to include contracted pharmacy, hub, non-commercial pharmacy, and co-pay data, while lab, retail claims, medical claims, remittance, and electronic medical records data are typically available from third-party and/or syndicated data providers.
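As a simplified illustration of what that single longitudinal journey looks like once sources share a common de-identified patient token, the following sketch merges events from a hypothetical aggregated specialty feed and a third-party claims feed into one date-ordered timeline. All feed names, fields, and data are illustrative assumptions, not a specific vendor schema:

```python
from datetime import date

# A hypothetical sketch of building a single longitudinal view: events from
# an aggregated specialty feed and a third-party claims feed, already keyed
# by the same de-identified patient token, are merged and ordered by date.

specialty_events = [
    {"token": "tkn_001", "date": date(2021, 1, 5), "source": "hub", "event": "enrollment"},
    {"token": "tkn_001", "date": date(2021, 2, 1), "source": "pharmacy", "event": "first fill"},
]
third_party_events = [
    {"token": "tkn_001", "date": date(2021, 1, 20), "source": "medical claims", "event": "infusion visit"},
]

def build_patient_journey(token, *feeds):
    """Merge events for one patient token across feeds, ordered by date."""
    events = [e for feed in feeds for e in feed if e["token"] == token]
    return sorted(events, key=lambda e: e["date"])

# Prints one date-ordered timeline spanning both sources.
for event in build_patient_journey("tkn_001", specialty_events, third_party_events):
    print(event["date"], event["source"], event["event"])
```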
The key to enabling patient data integration is the use of a single patient tokenization, de-identification, and patient-matching methodology to create a longitudinal data view through one comprehensive, linked, and fully integrated dataset. An effective data aggregation and integration framework consists of several elements:
Time to insights is reduced by eliminating the risk of multiple tokenization engines being used between partners, as well as multiple partners staging and processing the aggregated data. Using one partner also requires less client coordination and lowers costs, because data does not need to pass through multiple service providers. Different data aggregators use different tokenization software and service providers to support tokenization; the key to consistently linking data within an aggregation platform is a single tokenization engine, so that linkage can be achieved longitudinally across time and source. Attempting to connect aggregated data with other third-party reference data will likely require the data to be re-tokenized, re-de-identified, and re-matched, which can cause significant overlap and inefficiencies.
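The sketch below illustrates the underlying idea with a keyed hash: when every source runs through the same engine, with the same key and normalization rules, the same patient resolves to the same token, while a second engine with a different key produces a different token and breaks cross-source linkage. The key, fields, and normalization shown are illustrative assumptions; production tokenization engines are considerably more sophisticated:

```python
import hashlib
import hmac

# A minimal sketch of why a single, consistent tokenization engine matters:
# tokens are derived deterministically from normalized identifiers with a
# keyed hash, so the same patient yields the same token in every source.
# The key, fields, and normalization rules here are illustrative only.

SECRET_KEY = b"shared-engine-key"  # hypothetical key held by the tokenization engine

def normalize(first_name, last_name, dob):
    """Normalize identifying fields so formatting differences don't split tokens."""
    return f"{first_name.strip().lower()}|{last_name.strip().lower()}|{dob}"

def tokenize(first_name, last_name, dob):
    """Derive a stable, de-identified token from normalized identifiers."""
    message = normalize(first_name, last_name, dob).encode("utf-8")
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()[:16]

# The same patient, recorded slightly differently by two sources,
# still resolves to one token, enabling longitudinal linkage.
print(tokenize("Jane", "Doe", "1975-03-02"))
print(tokenize(" jane ", "DOE", "1975-03-02"))  # same token as above

# A second engine with a different key yields a different token for the
# same patient, which is what breaks linkage across aggregators.
other_engine = hmac.new(b"different-key", b"jane|doe|1975-03-02", hashlib.sha256).hexdigest()[:16]
print(other_engine)  # does not match the tokens above
```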
We dug deeper into this topic at the recent Fusion 2021 conference; if you missed this year’s sessions, click here to watch them on demand!
This series of on-demand videos will show you how better data connections can uncover new opportunities and greater insights, so that you can make more informed, confident decisions spanning: