If you haven’t noticed lately, health reform is driving an increase in demand for analytics. For many organizations, however, the culture needs to change before analytics can become part of the standard practice of care. Most would agree that there is now too much information for clinicians to rely on training and experience alone as they treat patients; insights from analytics are needed for clinical decision support. Providers who embrace analytics will be best positioned to improve patient care through decreased cost, improved efficiency and an enhanced patient experience.


In my role at Intel, I’m often asked where “big data” capabilities can apply to healthcare. One of the areas that always tops my list is clinical records. Roughly 70 percent of the electronic health record (EHR) consists of clinically relevant information that is unstructured or held in free-form notes, meaning potentially critical pieces of information are not easily accessible to providers. Many tools can use sophisticated natural language processing techniques to pull out that clinically relevant information; however, the culture has to be ready to accept those kinds of solutions and use them effectively.
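
To make that concrete, here is a deliberately simplified sketch of pulling structured facts out of free-form note text. It is not a production NLP pipeline; the note, drug list and pattern are invented for the example:

import re

# A deliberately simplified sketch, not a production NLP pipeline: pull
# medication names and dosages out of a free-form clinical note.
# The note text and drug list are made up for the example.
NOTE = ("Patient reports improved glycemic control on metformin 500 mg BID. "
        "Continue lisinopril 10 mg daily; recheck A1c in 3 months.")

DRUGS = ["metformin", "lisinopril", "atorvastatin"]

pattern = re.compile(
    r"\b(?P<drug>" + "|".join(DRUGS) + r")\s+(?P<dose>\d+(\.\d+)?)\s*mg",
    re.IGNORECASE,
)

for match in pattern.finditer(NOTE):
    print(match.group("drug").lower(), match.group("dose"), "mg")
# metformin 500 mg
# lisinopril 10 mg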

 

Overcoming challenges

Personalized medicine analytics combines data from multiple sources, and each source comes with its own unique set of challenges. There is the payer side, the clinical side, the biology, life sciences and genomics side and, finally, the patient side; the work we’ve been doing spans all of these areas. We look at big data in health and life sciences as the aggregation of all of these different data sources, and we address the challenge of how this content will be generated, moved, stored, curated and analyzed.

The goal is to take sophisticated analytics and technology capabilities, merge them with changes to workflow on the healthcare and life sciences sides, and pull those two areas together to deliver care specific to an individual. This is very different from treating a large cohort of all diabetes patients or all breast cancer patients in exactly the same way.

Personalized medicine really has two different perspectives. First is the genomics side, where the patient’s genome becomes an attribute of the care pathway: it is compared against a reference genome to determine what is different about the patient as an individual, or how their tumor genome differs from their normal DNA. Second, there is the population health aspect of personalization: understanding all of the data that is available in patient records, whether structured or unstructured, and then developing care plans specific to that individual. For example, advanced analytic tools can micro-segment a population, taking into account comorbidities and socio-economic factors.
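
As an illustration of that last point, below is a minimal micro-segmentation sketch using k-means clustering from scikit-learn. The patient attributes, values and cluster count are assumptions chosen only for the example:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical patient attributes, one row per patient:
# [age, number of comorbidities, median household income ($1000s), ER visits last year]
patients = np.array([
    [67, 3, 42, 4],
    [71, 4, 38, 6],
    [45, 1, 85, 0],
    [52, 2, 60, 1],
    [80, 5, 30, 7],
    [39, 0, 95, 0],
])

X = StandardScaler().fit_transform(patients)   # put attributes on a common scale
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(segments)   # e.g. [1 1 0 0 1 0] -- a higher-need and a lower-need micro-segment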

 

Safety opportunities

A recent article in the Journal of Patient Safety[1] estimated that more than 400,000 preventable premature deaths may occur each year in hospital settings, and that incidents of serious harm that do not result in death are 10 to 20 times more common. Big data and analytics are already being applied to problems like this; for example, they are being used to help identify and diagnose sepsis earlier so that it can be treated more effectively and at lower cost to the payer and provider.


A great example of using wearables to better understand disease progression is the work Intel is conducting in partnership with the Michael J. Fox Foundation for Parkinson’s research (see video above). Individuals wearing specialized devices will be tracked around the clock; observations will be recorded 300 times a second, and all of the information will be stored in the cloud. For researchers, this means going from evaluating a few data points per month to observing 1 gigabyte of data every day.
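
As a back-of-the-envelope check on that data rate, the arithmetic below lands at roughly a gigabyte per device per day; the bytes-per-observation figure is an assumption, not a published specification:

# Back-of-the-envelope check on the data rate described above.
samples_per_second = 300
seconds_per_day = 24 * 60 * 60
bytes_per_sample = 40          # assumption: timestamp plus a few sensor channels

daily_gigabytes = samples_per_second * seconds_per_day * bytes_per_sample / 1e9
print(f"~{daily_gigabytes:.2f} GB per device per day")   # ~1.04 GB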

 

Between analyzing the data that already exists, adding wearables, and improving the velocity at which data is analyzed, there are many opportunities to improve patient safety using these tools.

What questions about clinical analytics do you have? How are you using data in your practice or organization?
 


[1] James, John T., PhD. “A New, Evidence-based Estimate of Patient Harms Associated with Hospital Care.” Journal of Patient Safety (2013). http://journals.lww.com/journalpatientsafety/Fulltext/2013/09000/A_New,_Evidence_based_Estimate_of_Patient_Harms.2.aspx

 

The goal of personalized medicine is to shift from a population-based treatment approach (i.e. all people with the same type of cancer are treated in the same way) to an approach where the care pathway with the best possible prognosis is selected based on attributes specific to a patient, including their genomic profile.

 

After a patient’s genome is sequenced, it is reconstructed from the read information, compared against a reference genome, and the variants are mapped; this determines what’s different about the patient as an individual or how their tumor genome differs from their normal DNA.  This process is often called downstream analytics (because it is downstream from the sequencing process).
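
For intuition, here is an intentionally over-simplified Python picture of that comparison step. Real pipelines align millions of short reads, filter by quality and use statistical variant callers; this only illustrates the idea of mapping differences against a reference:

# An over-simplified picture of the comparison step. The sequences are
# tiny, made-up examples; only the "compare against the reference and
# record the differences" idea is shown.
reference = "ACGTTAGCAC"
patient   = "ACGTCAGCAT"

variants = [
    {"position": i, "ref": r, "alt": p}
    for i, (r, p) in enumerate(zip(reference, patient))
    if r != p
]
print(variants)
# [{'position': 4, 'ref': 'T', 'alt': 'C'}, {'position': 9, 'ref': 'C', 'alt': 'T'}]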

 

Although the cost of sequencing has come down dramatically over the years (faster than Moore’s law in fact), the cost of delivering personalized medicine in a clinical setting “to the masses” is still quite high. While not all barriers are technical in nature, Intel is working closely with the industry to remove some of the key technical barriers in an effort to accelerate this vision:

 

  • Software Optimization/Performance: While the industry is doing genomics analytics on x86 architecture, much of the software has not been optimized to take advantage of the parallelization and instruction enhancements inherent in the platform (a small vectorization sketch follows this list).
  • Storing Large Data Repositories: As you might imagine, genomic data is large, and with each new generation of sequencers, the amount of data captured increases significantly. Intel is working with the industry to apply the Lustre (highly redundant, highly scalable) file system in this domain.
  • Moving Vast Repositories of Data: Although (relatively) new technologies like Hadoop help the situation by “moving compute to the data,” sometimes you can’t get around the need to move a large amount of data from point A to point B. As it turns out, FTP isn’t the optimal way to move data when you are talking terabytes.
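
On the software optimization point, the toy example below shows the same per-base quality filter written as a plain Python loop and as a vectorized NumPy operation; the vectorized form lets the runtime exploit the SIMD instructions available on x86. The data and threshold are made up, and actual speedups depend entirely on the workload and hardware:

import numpy as np

# Illustrative only: the same per-base quality filter as a plain Python loop
# and as a vectorized NumPy operation. Data and threshold are made up.
quality_scores = np.random.randint(0, 60, size=5_000_000)

def low_quality_count_loop(scores, threshold=20):
    count = 0
    for s in scores:
        if s < threshold:
            count += 1
    return count

def low_quality_count_vectorized(scores, threshold=20):
    return int(np.count_nonzero(scores < threshold))

assert low_quality_count_loop(quality_scores) == low_quality_count_vectorized(quality_scores)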

 

I’ll leave you with this final thought: genomics is not just for research organizations. It is moving quickly into the provider environment. Cancer research and treatment are leading the way, and more broadly, there are more than 3,000 genomic tests already approved for clinical use. Today, this represents a great opportunity for healthcare providers to differentiate themselves from their competition… but in the not too distant future, providers who don’t have this capability will be left behind.

 

Have you started integrating genomics into your organization? Feel free to share your observations and experiences below.

 

Chris Gough is a lead solutions architect in the Intel Health & Life Sciences Group and a frequent blog contributor.

Find him on LinkedIn

Keep up with him on Twitter (@CGoughPDX)

Check out his previous posts

How frequently does your organization lose track of medical devices in your hospital? How many precious seconds and minutes are lost tracking down the closest infusion pump, ventilator or wheelchair? For many healthcare organizations, these are everyday (read: multiple-times-per-day) occurrences. An even greater concern is losing track of a patient, which happens more frequently than the average person might expect.

 

Real-time location system (RTLS) solutions are used in healthcare to help address these challenges. Placing an RFID tag on a medical device, or providing patients with wristbands that have these tags embedded, can make it much easier for employees to track down devices, equipment and patients. But we have barely scratched the surface of the positive impact that location-aware, or premises-aware, capabilities can have on healthcare organizations and clinical workflows.

 

mHealth is taking the world by storm, and hospitals are no exception. Many health systems are empowering their clinicians with tablets to enable more efficient access to health information and better collaboration with the patient at the point of care. What if the patient chart itself were premises-aware? When the clinician walks up to a patient, the chart could be auto-populated with that patient’s health record. A clinician on rounds could glance at the chart to confirm their next patient is in the room as expected. As team-based care proliferates, clinicians could track down other team members more easily. Devices or storage drives that are improperly removed from the hospital could be automatically disabled, preventing an expensive security breach.
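
Here is a hypothetical sketch of that first scenario: an RTLS event reports which room a clinician’s tag has entered, and the tablet pre-loads the record of the patient assigned to that room. The event shape, room assignments and record structure are all invented for the example; no real RTLS or EHR API is implied:

# A hypothetical sketch of a premises-aware chart; everything here is
# invented for illustration.
ROOM_ASSIGNMENTS = {"room-412": "patient-001", "room-413": "patient-002"}
PATIENT_RECORDS = {
    "patient-001": {"name": "Doe, Jane", "allergies": ["penicillin"]},
    "patient-002": {"name": "Smith, Alex", "allergies": []},
}

def on_rtls_event(event):
    """Return the chart to pre-load when a clinician's tag enters a room."""
    if event.get("tag_type") != "clinician":
        return None
    patient_id = ROOM_ASSIGNMENTS.get(event.get("room"))
    return PATIENT_RECORDS.get(patient_id) if patient_id else None

print(on_rtls_event({"tag_type": "clinician", "room": "room-412"}))
# {'name': 'Doe, Jane', 'allergies': ['penicillin']}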

 

Some of these use cases are enabled (or improved) by integrating premises-aware capabilities into computing devices such as tablets, laptops, PCs, and even servers and storage drives. Intel is working with the Intermountain Healthcare Transformation Lab to investigate the utility of these use cases, with some key findings described in this whitepaper, and we are building these capabilities into client devices to bring solutions to the healthcare industry with the help of our strategic partners.

 

Getting a Headstart on Location-Based Services in the Enterprise

 

Quickly Find the Resources You Need With a Real-Time Location System

 

Have any of you had successes or challenges integrating location aware solutions into your organization?  Please share your thoughts below.

 

Chris Gough is a lead solutions architect in the Intel Health & Life Sciences Group and a frequent blog contributor.

Find him on LinkedIn

Keep up with him on Twitter (@CGoughPDX)

Check out his previous posts

It wasn’t too long ago that the coolest new gadgets were provided by your IT department. Today, the latest and greatest devices are coming from the consumer world, with IT departments being asked to support a growing number of employee-owned smartphones, tablets, 2-in-1 devices and everything in between.

 

Employees are now accustomed to the capabilities they have in their consumer lives and expect a comparable experience on the job. This transformation is not only on the “device side.” The cloud, which has fueled this device revolution, is characterized by apps that are available 24x7x365 on any device, in any location. This always-on, always-connected model has significantly enhanced people’s ability to collaborate, with apps that enable file sharing, text/voice/video communication, note taking and the like.

 

So what happens when this consumer world and the enterprise world collide in a regulated industry like healthcare? The answer is that end-users will use these devices and apps, with or without the blessing of their IT department. This brings significant risks to healthcare organizations that are subject to stringent security and privacy regulations, breach reporting requirements, and audits. 

 

An example I have heard on multiple occasions is a clinician taking a picture on their smartphone and texting it to a colleague to get an opinion. I like this example because it simultaneously demonstrates the power of the cloud, collaboration, and communication capabilities that have emerged in (relatively) recent years, but it also raises some obvious concerns regarding the security and privacy of protected health information (PHI).

 

HIMSS Analytics collaborated with Intel earlier this year on research in this area. Forty-six percent of the clinical end-users surveyed thought these kinds of end-user workarounds were happening regularly in their organizations. The top reasons for the workarounds centered on security controls being too cumbersome and IT departments being too slow to enable new technologies. Co-worker collaboration was cited as (easily) the top activity leading to these workarounds.

 


 

So what does this research tell us? I think there are several key takeaways that IT departments should consider to limit these kinds of risks:

 

  • Disallowing BYOD has limited effectiveness: Unless a healthcare organization is going to collect end-user-owned devices at the door and return them at the end of the day, end-users will engage in this kind of activity on their personal devices. IT needs to think about how it can empower clinicians safely.
  • Offer employees compliant alternatives: There are solutions for messaging, video conferencing and file sharing that comply with healthcare regulations such as HIPAA (vendors will sign BAAs, etc.). IT needs to offer employees an experience comparable to what they are used to as consumers.
  • Co-worker collaboration is a good place to start: While there are hundreds of thousands of consumer apps, the research cited above highlights co-worker collaboration as the area leading to the highest number of end-user workarounds.
  • Clinical end-user experience is critical: Oftentimes, complex or cumbersome security controls drive activity that is out of compliance with security policy. Engaging clinicians and choosing security controls that integrate seamlessly into their workflow is essential.

 

Have any of you had success mitigating these risks in your organization?

 

Chris Gough is a lead solutions architect in the Intel Health & Life Sciences Group and a frequent blog contributor.

Find him on LinkedIn

Keep up with him on Twitter (@CGoughPDX)

Check out his previous posts

 

A growing number of healthcare organizations view data and analytics as instrumental to achieving their objectives for improved quality and reduced cost. Glenn D. Steele Jr., MD, CEO of Geisinger, recently outlined how his organization is using analytics to advance their population health initiatives.

 

While healthcare currently trails other industries in its use of business intelligence and analytics, this is changing. The fundamental transformation driving this change is the (worldwide) migration from volume-based care to value-based care. Organizations with the capacity to optimize care based on the latest medical literature, their patients’ specific condition(s) and, ultimately, their genomic profile will survive. Those that are unable to update their culture, and instead rely only on personal experience, medical training and (oftentimes) a trial-and-error approach, will be left behind.

 

The above video excerpt from the Intel Health & Life Sciences Innovation Summit panel, Care Customization: Applying Big Data to Clinical Analytics & Life Sciences, lets you hear how leaders from provider, payer, life sciences and analytics organizations describe key use cases they have implemented, infrastructure trends, and practical steps to get started.

 

While payers are typically farther along in their use of analytics than providers (particularly in the area of claims analytics to optimize claims processing and reduce false claims), providers are using analytics in the following (representative) areas:

 

  • Reduce unplanned readmissions (a simple risk-model sketch follows this list)
  • Reduce hospital acquired infections
  • Identify cost inefficiencies
  • Measure quality / outcome improvements (across a health system if applicable)
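
As a concrete illustration of the first item, below is a toy readmission-risk model built with scikit-learn. The features, labels and values are entirely made up; a real model would require far more data, careful validation and clinical review:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data, one row per discharge:
# [age, length of stay (days), prior admissions, comorbidity count]
X = np.array([[70, 8, 3, 4], [55, 2, 0, 1], [82, 10, 5, 6],
              [40, 1, 0, 0], [66, 5, 2, 3], [48, 3, 1, 1]])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = readmitted within 30 days

model = LogisticRegression().fit(X, y)
new_patient = np.array([[75, 7, 2, 4]])
print(f"Estimated 30-day readmission risk: {model.predict_proba(new_patient)[0, 1]:.2f}")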

 

One of the key barriers to the use of analytics we often see in healthcare is the organizational culture. This can be challenging as culture is something that doesn’t change overnight. So what can we do about it? I will leave you with two pieces of simple advice:

 

  1. Identify a clinical champion: Culture change won’t happen based on a top-down approach or through programs driven exclusively by the IT department. There must be a partnership between IT and the clinical side of the house to identify needs and create value for the organization.
  2. Start with real use cases: Before you build anything, identify a small set of use cases that will deliver value and demonstrate early success for your organization. Build on that success to scale.

 

Are you deploying big data or analytics solutions in your organizations?

 

Chris Gough is a lead solutions architect in the Intel Health & Life Sciences Group and a frequent blog contributor.

Find him on LinkedIn

Keep up with him on Twitter (@CGoughPDX)

Check out his previous posts

In one of my previous blogs, I talked about the top three benefits of an open, standards-based server architecture for Epic EHR. This time around, I’d like to highlight Hackensack University Medical Center (HackensackUMC), as one of the hospitals leading the way with this approach.

 

 

HackensackUMC is a top-rated, 775-bed, non-profit teaching and research hospital based in New Jersey. Like many other healthcare organizations that use Epic EHR, it previously had Epic deployed on two computing platforms: virtualized x86 servers for the end-user computing environment, and RISC-based platforms for the backend environment (Caché database tier).

 

Challenge: Against the backdrop of fast-growing data volumes, increasing performance requirements and competition for patients from other healthcare organizations, HackensackUMC wanted to reduce its costs while enabling scalability to accommodate future growth. Its previous environment ran counter to these objectives in terms of hardware/software, support and maintenance costs. Not only did it require separate groups of administrators with expertise to support the two distinct computing platforms, but it also needed to maintain separate processes for disaster recovery and business continuity.

 

Solution & Benefits: HackensackUMC decided to standardize its Epic deployment on a virtualized x86 server infrastructure for both the end-user and backend environments, as well as its storage subsystem. It measured a 50 percent reduction in total cost of ownership (TCO) with the new environment and a 40-50 percent reduction in operating costs (related to hardware, software and OS support).

 

In addition, HackensackUMC achieved a 70 percent reduction in the datacenter footprint of its Epic deployment. Virtualization of the backend environment enabled the organization to move workloads around more easily and improved application up-time with software features such as DRS (Distributed Resource Scheduler) and HA (High Availability).

 

Finally, to ensure the environment was secure, HackensackUMC relied not only on standard administrative and technical safeguards, but also on more sophisticated technical controls such as advanced DLP (Data Loss Prevention), in order to mitigate the risk of unauthorized access to sensitive information such as protected health information.

 

To learn about this project in more detail, visit here.

 

Have you deployed Epic EHR on a standard, x86 architecture or are you considering this approach? Please feel free to share your observations and experiences below. You can follow me on Twitter @CGoughPDX.

 

 

Chris Gough is a lead solutions architect in the Intel Health & Life Sciences Group and a frequent blog contributor.

 

Find him on LinkedIn

Keep up with him on Twitter (@CGoughPDX)

Check out his previous posts

As the healthcare industry transitions from fee-for-service to fee-for-value, and to team-based care models that require a high degree of care coordination (such as the patient-centered medical home, or PCMH), a more holistic, 360-degree view of the patient is needed. Over time, this patient view will be built not only from traditional data types such as claims data and healthcare data (e.g., from the EHR), but also from non-traditional data types such as patient or member sentiment data from social networks. So what new approaches are needed to respond to this changing data landscape?

 

Organizations need to be able to apply analytics to Big Data: data from varied repositories that exists in structured, semi-structured and unstructured form. Solutions that enable this need to be high performance, horizontally scalable, and balanced across compute, network and storage domains (e.g., to mitigate the impact of I/O bottlenecks). High-performance analytics software, with capabilities such as natural language processing, machine learning and rich visualization, also enables these Big Data solutions.
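
A minimal, hypothetical sketch of such a combined view is below: structured claims data joined with a crude sentiment signal derived from unstructured member feedback. The member IDs, comments and keyword list are invented, and a real solution would use proper natural language processing rather than keyword counting:

import pandas as pd

# Structured claims data for two (fictitious) members.
claims = pd.DataFrame({
    "member_id": [101, 102],
    "claims_last_year": [14, 2],
    "chronic_conditions": [3, 0],
})

# Unstructured member feedback, reduced to a crude negative-keyword count.
feedback = pd.DataFrame({
    "member_id": [101, 102],
    "comment": ["frustrated with wait times, billing confusing",
                "great experience, helpful care team"],
})
NEGATIVE = {"frustrated", "confusing", "unhappy"}
feedback["negative_terms"] = feedback["comment"].apply(
    lambda text: sum(word.strip(",.") in NEGATIVE for word in text.lower().split())
)

# One combined, per-member view blending both data types.
patient_view = claims.merge(feedback[["member_id", "negative_terms"]], on="member_id")
print(patient_view)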

 

Innovative payers and providers are pursuing these solutions to improve the user experience for their patients and members, better market products, and improve outreach to encourage healthy lifestyles. Take a look at this paper to learn what Blue Cross Blue Shield of North Carolina and Carolinas HealthCare System are doing in these areas.

 

The paper also describes 5 steps for getting started with Big Data:

 

1. Work with business units to articulate opportunities

2. Get up to speed on technology

3. Develop use cases

4. Identify gaps between current and future state capabilities

5. Develop a test environment

 

Payment reform and care models that foster a patient-centric approach have the potential to transform healthcare.  Analytics solutions that break down traditional data silos to develop a complete view of the patient, enable effective outreach programs, and promote collaboration across the continuum of care will be the technical foundation of this transformation.

 

Are any of you deploying Big Data or advanced analytics solutions in your organizations? Please feel free to share your observations and experiences below.  You can follow me on Twitter @CGoughPDX.

In 2012, Epic added Red Hat Enterprise Linux (RHEL) running on x86 servers to its list of supported platforms for its mission critical Electronic Health Record (EHR) database (previously, Epic only supported database software on UNIX servers).

 

To learn about this solution and key benefits firsthand, I encourage you to register for the upcoming webinar, How TriRivers Health Partners Optimized and Virtualized Its Electronic Records Infrastructure. In this space, however, I will provide an overview of the solution and describe the top three benefits over alternative, RISC-based, database platform architectures.

 

Epic’s solution for Linux on x86 is virtualized, and consequently provides the benefits that HIT organizations have come to expect from virtualized infrastructure. Intel and VMware have collaborated closely over the years to ensure that software runs in virtual machines with near-native performance. Barriers that may have prevented some mission-critical workloads from being virtualized in the past are reduced or eliminated with each passing generation of Xeon and vSphere (for example, VMware recently announced that the host-level configuration maximum for RAM has doubled from 2TB to 4TB with the introduction of vSphere 5.5).

 

With this background, here are the top three benefits of an open, standards-based, server architecture for Epic EHR:

 

Supportability: From 1996 to 2016 (as estimated by IDC), the installed base of x86 servers will have increased from 56 percent of the overall server market to 98 percent. Accordingly, the number of administrators qualified to support and maintain these systems has also increased, making it easier to find qualified staff. Furthermore, the “end-user computing” side of Epic EHR has already moved to x86. Standardizing on one server architecture can simplify the support model by reducing training and headcount requirements.

 

Reliability: Not only is RHEL running on x86 a proven platform for hosting mission-critical applications, but the Epic solution further improves reliability by virtualizing the infrastructure with VMware vSphere. The solution includes a virtualized cluster of x86 servers running the database (and associated capabilities that enable reporting, disaster recovery, etc.). Should one of the hosts fail, advanced vSphere capabilities such as Distributed Resource Scheduling (DRS) and High Availability (HA) will automatically move the affected VMs to another host in the cluster.

 

Flexibility: When there is an ecosystem of vendors/OEMs developing compatible (x86) systems, the end-user (HIT organization) benefits. Competition provides choice and leads to improved quality and reduced cost through economies of scale.

 

Do any of you have experience yet deploying the Epic database tier on Linux/x86? Please feel free to share your observations and experiences below. You can follow me on Twitter @CGoughPDX.

 

The cloud is becoming more prevalent in healthcare, and is proving to be one area that CIOs should not ignore.

 

In this video, Microsoft Director of Product Marketing Mark Weiner talks about cloud strategies for health IT professionals. He offers tips for healthcare CIOs on how to move big data so that it is more accessible for patients and clinicians, and manage data growth effectively.

 

What questions do you have?

In the past year, I’ve blogged about big data and cloud computing. Increasingly, the two are converging in ways that have transformative potential for healthcare and life sciences.

 

From electronic health records (EHR) and PACS (picture archiving and communications system) to genome sequencing machines, healthcare and life sciences (LS) are generating digital data at unprecedented rates. Much of the effort around “big data” is concentrated on deriving value from this information. Using distributed software frameworks such as open source Hadoop*, big data techniques will give us the analytic scale and sophistication needed to transform data into clinical wisdom and innovative treatments.

 

Cloud computing can help healthcare/LS organizations take advantage of big data analytics and accomplish other key objectives. Whether you focus on your own data center, work with a hosting provider, adopt software-as-a-service (SaaS) solutions, or combine multiple approaches, cloud models provide the organizational agility to access scalable computing resources, as you need them. Cloud computing offers well-recognized cost savings, but with all the changes and opportunities facing healthcare and life sciences organizations, the agility benefits can far outweigh them.

 

Intel recently developed two documents that can help you advance your cloud and big data strategies.

 

The New CIO Agenda takes a high-level look at key issues to consider as you move toward cloud-enabled transformation. It also provides quick examples of five leading healthcare/LS organizations that are using cloud computing to create value and enhance agility.

Big Data in the Cloud: Converging Technologies goes deeper into analytics-as-a-service models and identifies practical steps to advance your cloud-based analytics initiatives.

 

I encourage you to download these documents and use them as you evolve your cloud and big data strategies.  I’d also like to offer three specific suggestions that can move you forward and prepare you to take full advantage of cloud and big data opportunities:

 

1. Develop a roadmap. Start identifying what’s critical to keep in secure, on-premises environments and what functions you can move to external infrastructure-as-a-service (IaaS) clouds or consume as SaaS solutions.

2. Modernize your infrastructure. Even if you use SaaS heavily, you still need standards-based virtualized infrastructure to interface with external services and adjust to fast-changing demands. If you’ve already virtualized your servers, start looking at storage virtualization, unified networking, and software-defined networks.

3. Don’t let security concerns keep you out of the cloud.  There’s plenty you can do to keep data and resources secure in the cloud. Use your move into cloud computing to take a comprehensive, holistic approach to privacy and security. Adopt policy-driven, multi-layered security controls, and use hardware-enhanced security technologies to improve security and end-user experience.  As you talk to potential cloud service providers, make sure they are able to meet the requirements derived from your organization’s privacy and security policy.

 

Intel is committed to enabling healthcare and LS organizations to reap the full benefits of cloud and big data analytics. We’re designing the compute, networking, storage and software capabilities to deliver high-performance solutions for large-scale cloud and analytics workloads. We’re collaborating with the Open Data Center Alliance (ODCA), Cloud Security Alliance (CSA), and other industry organizations to create flexible, secure frameworks for cloud computing and big data analytics. And, we’re expanding our software portfolio with solutions such as the Intel® Distribution for Apache Hadoop*, which enables standards-based distributed analytics with robust security and management capabilities.

 

I think some of the most exciting use cases for big data analytics and cloud computing are coming from healthcare/LS. How about you? What are you doing and seeing? How can Intel help you reach your cloud and analytics objectives?

 

• Download The New CIO Agenda brochure.  

• Download the Big Data in the Cloud: Converging Technologies solution brief.

• Visit this web site to see what healthcare and life science users are doing with big data analytics and Intel® technologies.

• Follow me @CGoughPDX  on Twitter.

In my previous blog, I discussed how the 4 V’s of Big Data apply to healthcare. This time around, I would like to focus on a specific class of Big Data solutions: distributed computing solutions that utilize Hadoop. So what is Hadoop, exactly?


Hadoop is essentially a software framework that supports the storage and processing of large data sets in a highly parallelized manner.  Two of the obvious benefits that Hadoop brings to Big Data solutions are scale and flexibility:

 

Scale: You might remember from my last blog that “volume” is one of the key Big Data challenges facing health-IT organizations. Hadoop is typically deployed on a cluster of commodity servers. As computing or storage demand grows, the system is scaled by adding new nodes to the cluster. This is the “scale out” model, as opposed to “scale up” where an existing system is replaced with a new, more powerful system. The “scale out” model is less disruptive (and typically less expensive) for IT organizations than the “scale up” model.

 

Flexibility: Variety of data is another consideration driving interest in Hadoop. While much of healthcare data is structured, resides in a traditional relational database, and conforms to a well-defined schema, there is also a lot of unstructured information such as images, faxes, and dictated or narrative notes. This unstructured information contains significant clinical and analytical value, but many organizations are not making effective use of it today. Hadoop includes HDFS (the Hadoop Distributed File System) and HBase, a non-relational, distributed database, which have no problem storing these differing data types in a schema-less fashion. Furthermore, all of this data is triple-replicated across the cluster, improving the resiliency of solutions built on this infrastructure.
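
To make this tangible, here is a minimal Hadoop Streaming mapper written in Python, shown only as a sketch; the terms of interest are assumptions for the example. Hadoop runs a copy of the script against each split of the notes stored in HDFS, and a matching reducer sums the emitted counts per term:

#!/usr/bin/env python
# mapper.py -- a minimal Hadoop Streaming mapper, shown only as a sketch.
# It emits "term<TAB>1" for each term of interest found in a line of
# free-text notes; a matching reducer sums the counts per term.
import sys

TERMS_OF_INTEREST = {"sepsis", "pneumonia", "readmission"}

for line in sys.stdin:
    text = line.lower()
    for term in TERMS_OF_INTEREST:
        if term in text:
            print(f"{term}\t1")

The same script runs unchanged against a sample file on a laptop and across a cluster of commodity nodes, which is the scale-out property described earlier.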

 

So how are healthcare organizations making use of Hadoop today? Take a look at a new paper which describes in more detail how the healthcare industry can take advantage of Hadoop. Examples are highlighted from three domains: provider, payer and life sciences.

 

Read Intel Distribution for Apache Hadoop Software Helps Cure Big Data Woes

 

You might have gleaned from the title of the link above that Intel is among the growing list of companies convinced that Hadoop is a critical component of the data center, and at Strata a few weeks ago, Intel announced the North American release of the Intel Distribution for Apache Hadoop (IDH). Details can be found here.

 

Do you have any thoughts or experiences to share? How has Hadoop helped your organization? Please add to the discussion below. For information on the role Intel plays in Big Data for healthcare, please visit this site: Big Data and Analytics in Healthcare and Life Sciences. You can also follow me @CGoughPDX on Twitter.

By now, many of you have likely heard about the four V’s of Big Data: Volume, Velocity, Variety and Value. The ideas behind this construct for Big Data were conceived by Gartner over a decade ago. In the coming months, you will find a number of blogs, papers, videos and other resources here that discuss Big Data solutions for healthcare and life sciences in greater detail.

 

These solutions will take advantage of advanced platform capabilities from Intel and ecosystem partners to improve reliability, scalability and security. As an introduction, I wanted to use this space to set the stage for what Big Data means to healthcare, and why these solutions are needed:

 

•    Volume: The amount of healthcare data that needs to be stored, managed, processed and protected is growing at an ever-increasing rate, a situation exacerbated by strict data-retention requirements. Medical imaging is one area where the growing volume of data is especially evident: according to IBM, 30 percent of the data stored on the world’s computers is medical images. In the life sciences, advances in cost-effective genomic sequencing are causing data storage needs to explode. Many traditional solutions have trouble scaling to accommodate this growing volume of data. “Scale-out” solutions, where computing nodes are added to an existing cluster to meet growing demand, have several advantages over traditional “scale-up” solutions, where one big, powerful server is replaced with another bigger, more powerful server.

 

•    Velocity: Many existing analytics and data warehouse solutions are batch in nature, meaning all the data is periodically copied to a central location in a ‘batch’ (for example, every evening). Clinical and administrative end users of this information are, by definition, not making decisions based on the latest information. Use cases such as clinical decision support really only work if end users have a complete view of the patient with the latest information. Solutions that make use of in-memory analytics or column-store databases are typically used to improve the velocity of the data, or “time to insight” (a small sketch of this idea follows the list).

 

•    Variety: Traditional analytics solutions work very well with structured information, for example data in a relational database with a well-formed schema. However, the majority of healthcare data is unstructured, and today much of this unstructured information goes unused (for example, doctors’ free-form text notes describing a patient encounter). Sophisticated natural language processing techniques and infrastructure components such as Hadoop MapReduce are being used to normalize a variety of different data formats, in a sense unlocking the data for clinical and administrative end users.

 

•    Value: Analysis by McKinsey Global Institute has identified a potential $300 billion value for Big Data per year in the healthcare industry in the U.S. alone. The majority of this value would be realized through savings/reduced national healthcare spending. For individual healthcare organizations, Big Data value will be realized by more efficient, more scalable management and processing of a quickly growing volume of data, and by enabling faster, better-informed decisions by clinicians and administrative end users.
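
Returning to the velocity point above, the fragment below shows the kind of interactive, in-memory aggregation that columnar structures make cheap. The encounter data is invented, and pandas merely stands in for whatever in-memory or column-store engine an organization actually uses:

import pandas as pd

# Invented encounter data; illustrative only.
encounters = pd.DataFrame({
    "unit": ["ICU", "ICU", "Med/Surg", "Med/Surg", "ED"],
    "length_of_stay_days": [6, 4, 2, 3, 0.2],
    "readmitted_30d": [1, 0, 0, 1, 0],
})

# Interactive aggregation over columns, no nightly batch copy required.
summary = encounters.groupby("unit").agg(
    avg_los=("length_of_stay_days", "mean"),
    readmit_rate=("readmitted_30d", "mean"),
)
print(summary)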

 

If you would like more information on the role Intel plays in Big Data for healthcare, visit this site: Big Data and Analytics in Healthcare and Life Sciences.

 

What questions do you have about Big Data in healthcare? What challenges is your organization facing in regards to the four V’s? Leave a comment or follow me on Twitter @CGoughPDX.
