How Unstructured Data is Cleaned
A vast amount of healthcare data is unstructured, in a format that ordinary information systems cannot digest. Electronic health records are a good example; another is membership roster data from IDNs and GPOs. In total, about 80 percent of healthcare data is unstructured, according to Condusiv Technologies.
And, as Dr. S Pak, chief medical officer at 3M HIS, points out, “most pharmaceutical companies have only manual processes available for analyzing unstructured data. This manual processing is both time-consuming and expensive.” Enabling access to unstructured data, Pak adds, “is more essential than ever as we attempt to measure value and improve patient outcomes in the shift to precision medicine and value-based care.”
“Unstructured data sets are in the terabyte range and are expected to reach petabyte scales. The healthcare industry must transition to making use of it,” says Condusiv Technologies CEO Jim D’Arezzo.
Unstructured Data is Difficult
Unstructured data is more difficult to handle than structured data. It comes in many forms, including but not limited to emails, audio files, videos, text documents, genome files, and social media posts. Because it has no predefined structure, it can’t be analyzed the same way as structured data, which is why it’s much harder for healthcare organizations to make unstructured data actionable.
“Not only do organizations need tools to look back at the legacy data they have already stored, but they also have to deal with the increasing amount of data being produced every day. With the addition of connected medical and Internet of Things (IoT) devices, organizations are collecting unstructured data at an alarming rate,” comments Cloudera founder and Chief Strategy Officer Mike Olson.
“We believe that machine learning and analytics are powerful tools for understanding diseases, improving outcomes, containing costs and delivering better care where it's needed most,” he adds.
How is unstructured data cleaned?
The first step is to evaluate data sources. Not all unstructured data is worth analyzing, or even worth keeping, given that these activities are expensive. If the data are coming from a source that won’t yield much value for the company, the source should not be integrated into the data analytics operations.
Then one must understand how the data sources function and find the best way to collect the data. Care must be taken not to corrupt or lose data, and this means using the right tools.
After collection, all data should be backed up, regardless of its condition. And, throughout the structuring process, regular backups are required. A system recovery process must also be in place to ensure that the data is not lost. Data processing firms have high-level backup and recovery systems in place.
Next, some data must be eliminated. The sheer volume of data collected demands that whatever is unnecessary be removed, although expert knowledge is required to choose which data to delete, so that time is not wasted on unproductive analysis.
Then the data must be prepared. Preparing data means removing extraneous whitespace and resolving formatting issues. The useful data is then assembled into an organized stack, and for this task the correct technology must be used along with expert skills. The technology will make it easy to access parts of the data stack as needed.
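As a rough illustration of the preparation step, the following Python sketch trims whitespace and resolves a common formatting issue. The function name and the specific normalizations are illustrative assumptions, not part of any particular vendor's toolchain:

```python
import re

def prepare_record(raw_text: str) -> str:
    """Normalize one unstructured text record for downstream analysis."""
    # Collapse runs of whitespace (spaces, tabs, newlines) to single spaces.
    text = re.sub(r"\s+", " ", raw_text)
    # Strip leading/trailing whitespace left over from extraction.
    text = text.strip()
    # Example formatting fix: convert smart quotes to plain ASCII quotes.
    text = text.replace("\u201c", '"').replace("\u201d", '"').replace("\u2019", "'")
    return text

cleaned = prepare_record("  “Patient   notes:\n follow-up in\t2 weeks.” ")
print(cleaned)  # "Patient notes: follow-up in 2 weeks."
```

In practice the list of normalizations grows with the data source; the point is that each fix is small, mechanical, and applied uniformly across every record.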
Part of the data analytics process is ‘ontology evaluation.’ This means being able to show the relationship between the data itself and the source from which it was acquired. Data scientists use this information to obtain useful insights, and it also provides support if any question arises about the data during the analytics process.
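One lightweight way to keep that source relationship is to carry provenance metadata alongside each record. The sketch below is a hypothetical illustration in Python; the field names are assumptions, not a standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenancedRecord:
    """A data record paired with where and when it was acquired (illustrative)."""
    content: str
    source: str  # e.g. the originating system or feed (assumed field name)
    acquired_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

rec = ProvenancedRecord(content="lab result: A1C 6.1", source="ehr_export")
# If a question arises during analysis, the record can be traced back to its source.
print(rec.source)  # ehr_export
```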
Before beginning the analysis, it’s also important to classify the data into groups and to calculate statistics, which enhances the analytics process. Next, take a random sample and create a “dictionary.” Analyzing the entire data set is easier if you start with a sample from the collection and structure just that part; that part can then be used, with the help of artificial intelligence, to find similar patterns in the rest of the data.
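The sample-and-dictionary step can be sketched as follows. This is an illustrative Python example, not a prescribed tool; the word-frequency counter here stands in for whatever vocabulary the analysts actually build:

```python
import random
from collections import Counter

def build_dictionary(records: list[str], sample_size: int, seed: int = 0) -> Counter:
    """Draw a random sample of records and build a term-frequency dictionary."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    sample = rng.sample(records, min(sample_size, len(records)))
    counts = Counter()
    for record in sample:
        counts.update(record.lower().split())
    return counts

records = [
    "Patient reports mild headache",
    "Follow-up visit scheduled",
    "Patient discharged with instructions",
]
vocab = build_dictionary(records, sample_size=2)
# Terms seen in the sample can now flag similar patterns in the rest of the data.
matches = [r for r in records if any(w in vocab for w in r.lower().split())]
```

A real pipeline would swap the simple word split for proper tokenization and use the dictionary to train or prompt a model, but the structure-a-sample-first idea is the same.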
PointData: Work with a Partner
Pharmaceutical companies working with membership roster data are faced with vast quantities of unstructured data.
As the reader will have observed, managing unstructured data and getting the best from it are activities best turned over to a specialist. An outsourcing partner has the right tools and the right experience to ensure that your data is turned into valuable business insights, and the cost is low, especially considering the value the business will obtain through this process.
PointData breaks down the barriers to outsourcing roster data management. For example, the PointData API makes it fast and easy for companies to connect with its proprietary platform and to share data, which PointData safeguards in its private cloud. Once PointData has cleaned and structured the data, our rosters are logged and shared with our partner companies, who can access them as needed and receive updates in real time. If additional rosters are required, companies have only to place a request, and the new data will be processed and ready within two weeks.
Every pharmaceutical firm has unique rules for determining its roster. With PointData, it is easy to quickly modify which GPOs, IDNs, and even sites are accepted for a given partner. The client can modify the roster list through our data portal and automatically receive the new data via our real-time API connection. Partners also receive alerts about new sites, and may choose to accept or reject any new site before it is populated within their roster dataset.
PointData also works with its partner companies to process rebates, another activity based on accurate data management. The rebate processes are updated daily and kept accurate.
Accurate data is the most important factor in correctly processing rebates. As such, we review our sources daily for changes and quickly make updates.