
Intel Health & Life Sciences


Can your data platform keep up with rising demands?

 

That’s an important question given the way changes in the healthcare ecosystem are causing the volume of concurrent database requests to soar. As your healthcare enterprise grows and you have more users needing access to vital data, how can you scale your health record databases in an efficient and cost-effective way?

 

InterSystems recently introduced a major new release of Caché and worked with performance engineers from Epic to put it to the test. The test engineers found that InterSystems Caché 2015.1 with Enterprise Cache Protocol (ECP) technology on the Intel® Xeon® processor E7 v2 family achieved nearly 22 million global references per second (GREFs) while maintaining excellent response times.

 

That’s more than triple the load levels they achieved using Caché 2013.1 on the Intel® Xeon® processor E5 family. And it’s excellent news for Epic users who want a robust, affordable solution for scalable, data-intensive computing.

 

Intel, InterSystems, and Epic have created a short whitepaper describing these tests. I hope you’ll check it out. It provides a nice look at a small slice of the work Epic does to ensure reliable, productive experiences for the users of its electronic medical records software. Scalability tests such as these are just one part of Epic’s comprehensive approach to sizing systems, which includes a whole range of additional factors.

 

These test results also show that InterSystems’ work to take advantage of modern multi-core architectures is paying off with significant advances in ultra-high-performance database technology. Gartner identifies InterSystems as a Leader in its Magic Quadrant for Operational Database Management Systems,[1] and Caché 2015.1 should only solidify its position as a leader in SQL/NoSQL data platform computing.


Intel’s roadmap shows that the next generation of the Intel Xeon processor E7 family is just around the corner. I’ll be very interested to see what further performance and scalability improvements the new platform can provide for Epic and Caché. Stay tuned!

 

Read the whitepaper.

 

Join & participate in the Intel Health and Life Sciences Community: https://communities.intel.com/community/itpeernetwork/healthcare

 

Follow us on Twitter: @InterSystems, @IntelHealth, @IntelITCenter

 

Learn more about the technologies

 

Peter Decoulos is a Strategic Relationship Manager at Intel Corporation

 


[1] InterSystems Recognized As a Leader in Gartner Magic Quadrant for Operational DBMS, October 16, 2014. http://www.intersystems.com/our-products/cache/intersystems-recognized-leader-gartner-magic-quadrant-operational-dbms/

HPC User Forum Norfolk, Virginia & Bio IT World, Boston, 2015 - Observations from the desk of Aruna B. Kumar

27 April 2015

By Aruna Kumar, HPC Solutions Architect Life Science, Intel


15,000 to 20,000 variants per exome (33 million bases) vs. 3 million single nucleotide polymorphisms per genome. HPC is a clearly welcome solution to deal with the computational and storage challenges of genomics at the crossroads of clinical deployment.


At the High Performance Computing User Forum held in Norfolk in mid-April, it was clear that the face of HPC is changing. The main theme was bioinformatics – a relative newcomer to the HPC user base. Bioinformatics, including high-throughput sequencing, has introduced computing to entirely new fields that have not used it in the past. Just as in the social sciences, these fields share a thirst for large amounts of data – still largely a search for incidental findings – while seeking architectural optimizations, algorithmic optimizations, and usage-based abstractions all at once. This is a unique challenge for HPC, and one that is stretching HPC systems solutions.


What does this mean for the care of our health?


Health outcomes are increasingly tied to the real-time use of vast amounts of both structured and unstructured data. Sequencing of the genome or a targeted exome is distinguished by its breadth: whereas clinical diagnostics such as blood work for renal failure, diabetes, or anemia are characterized by depth of testing, genomics is characterized by breadth of testing.


As aptly stated by Dr. Leslie G. Biesecker and Dr. Douglas R. Green in a 2014 New England Journal of Medicine paper, “The interrogation of variation in about 20,000 genes simultaneously can be a powerful and effective diagnostic method.”


However, it is amply clear from the work presented by Dr. Barbara Brandom, Director of the Global Rare Diseases Patient Registry Data Repository (GRDR) at NIH, that the common data elements that need to be curated to improve therapeutic development and quality of life for people with rare diseases are a relatively complex blend of structured and unstructured data.


The GRDR common data elements table includes contact information, socio-demographic information, diagnosis, family history, birth and reproductive history, anthropometric information, patient-reported outcomes, medications/devices/health services, clinical research and biospecimens, and communication preferences.


Now to some sizing of the data and compute needs required to scale the problem appropriately from a clinical perspective. Current sequencing sampling is at 30x coverage on Illumina HiSeq X systems. That is roughly 46 thousand files generated in a three-day sequencing run, adding up to 1.3 terabytes (TB) of data. This data is converted into the variant calls referred to by Dr. Green earlier in the article, and the analysis up to the point of generating variant call files accumulates an additional 0.5 TB of data per human genome. For clinicians and physicians to identify stratified subpopulation segments with specific variants, it is often necessary to sequence complex targeted regions at much higher sampling rates and with longer read lengths than those generated by current 30x sampling. This will undoubtedly exacerbate an already significant challenge.


So how do Intel’s solutions fit in?


Intel Genomics Solutions, together with the Intel Cluster Ready program, provide much-needed sizing guidance to help clinicians and their IT data centers deliver personalized medicine efficiently and scale with growing needs.


Broadly, the compute need is to handle the volume of genomics data in near real time to generate alignment mapping files. These mapping files contain the entire sequence, quality, and position information, and they result from a largely single-threaded process of converting FASTQ files into alignments. The alignment files are generated as text and converted to a more compressed binary format known as BAM (binary alignment map). The differences between a reference genome and the aligned sample file (BAM) are what a variant call file (VCF) contains. Variants come in many forms, although the most common is the presence of a different single base, or nucleotide, at a corresponding position; this is known as a single nucleotide polymorphism (SNP). The process of research and diagnostics involves the generation and visualization of BAM files, SNPs, and entire VCF files.
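
To make that flow concrete, here is a minimal sketch of the FASTQ-to-BAM-to-VCF steps driven from Python. The tool choices (bwa, samtools, bcftools), file names, and thread counts are illustrative assumptions rather than part of the solution described in this post; a production pipeline would also add read groups, deduplication, and quality-score recalibration.

```python
# Illustrative sketch of the FASTQ -> BAM -> VCF flow described above.
# Tool names (bwa, samtools, bcftools) and file paths are assumptions for
# illustration only.
import subprocess

reference = "GRCh38.fa"                                  # assumed indexed reference genome
fastq_1, fastq_2 = "sample_R1.fastq.gz", "sample_R2.fastq.gz"

def run(cmd):
    """Run a shell pipeline stage and fail loudly if it errors."""
    subprocess.run(cmd, shell=True, check=True)

# 1. Align reads to the reference; the text SAM output is piped straight into
#    a compressed, sorted BAM so the large text file never lands on disk.
run(f"bwa mem -t 16 {reference} {fastq_1} {fastq_2} | "
    f"samtools sort -@ 8 -o sample.sorted.bam -")
run("samtools index sample.sorted.bam")

# 2. Call variants: the differences between the aligned sample (BAM) and the
#    reference are written out as a variant call format (VCF) file.
run(f"bcftools mpileup -f {reference} sample.sorted.bam | "
    f"bcftools call -mv -Oz -o sample.vcf.gz")
```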


Given the limited penetrance of incidental findings across a large number of diseases, the final step to impacting patient outcomes, which brings in unstructured data and metadata, requires parallel file systems such as Lustre and object storage technologies that can scale out to support personalized medicine use cases.

More details on how Intel Genomics Solutions aid this scale-out to directly impact personalized medicine in a clinical environment will follow in a future blog!

 

Follow the Intel HPC conversation and information on twitter! @cjanes85

Find out Intel’s role in Health and Life Sciences here.

Learn more about Intel in HPC at intel.com/go/hpc

Learn more about Intel’s boards and systems products at http://www.intelserveredge.com/


Recently I’ve travelled to Oxford in the UK, Athens in Greece and Antalya in Turkey for a series of roundtables on the subject of genomics. While there were different audiences across the three events, the themes discussed had a lot in common and I’d like to share some of these with you in this blog.

 

The event in Oxford, GenofutureUK15, was a roundtable hosted by the Life Sciences team here at Intel and brought academics from a range of European research institutions together to discuss the future of genomics. I’m happy to say that the future is looking very bright indeed, as we heard many examples of fantastic research currently being undertaken.

 

Speeding up Sequencing

What really resonated through all of the events, though, was that the technical challenges we’re facing in genomics are not insurmountable. On the contrary, we’re making great progress when it comes to decreasing the time taken to sequence genomes. As just one example, I’d highly recommend looking at this example from our partners at Dell – using Intel® Xeon® processors it has been possible to improve the efficiency and speed of paediatric cancer treatments.

 

In contrast to the technical aspects of genomics, the real challenges seem to be coming from what we call ‘bench to bedside’, i.e. how does the research translate to the patient? Mainstreaming issues around information governance, jurisdiction, intellectual property, data federation and workflow were all identified as key areas which are currently challenging process and progress.

 

From Bench to Bedside

As somebody who spends a portion of my time each week working in a GP surgery, I want to be able to utilise some of the fantastic research outcomes to help deliver better healthcare to my patients. We need to move on from focusing on pockets of research and identify the low-hanging fruit to help us tackle chronic conditions, and we need to do this quickly.

 

Views were put forward around the implications of genomics’ transition from research to clinical use, and much of this was around data storage and governance. There are clear privacy and security issues, but ones for which technology already has many of the solutions.

 

Training of frontline staff to be able to understand and make use of the advances in genomics was a big talking point. It was pleasing to hear that clinicians in Germany would like more time to work with researchers and that this was something being actively addressed. The UK and France are also making strides to ensure that this training becomes embedded in the education of future hospital staff.

 

Microbiomics

Finally, the burgeoning area of microbiomics came to the fore at all three events. You may have spotted quite a lot of coverage in the news around faecal microbiota transplantation to help treat Clostridium difficile. Microbiomics throws up another considerable challenge, as the collective genomes of the human microbiota contain some 8 million protein-coding genes, 360 times as many as in the human genome. That’s a ‘very’ Big Data challenge, but one we are looking forward to meeting head-on at Intel.

 

Leave your thoughts below on where you think the big challenges are around genomics. How is technology helping you to overcome the challenges you face in your research? And what do you need looking to the future to help you perform ground-breaking research?

 

Thanks to the participants, contributors and organisers at Intel’s GenoFutureUK15 in Oxford, UK, the roundtable in Athens, Greece, and the HIMSS Turkey Educational Conference in Antalya, Turkey.

 

The transition toward next-generation, high-throughput genome sequencers is creating new opportunities for researchers and clinicians. Population-wide genome studies and profile-based clinical diagnostics are becoming more common and more cost-effective. At the same time, such high-volume and time-sensitive usage models put more pressure on bioinformatics pipelines to deliver meaningful results faster and more efficiently.

 

Recently, Intel worked closely with Seven Bridges Genomics’ bioinformaticians to design the optimal genomics cluster building block for direct attachment to high-throughput, next-generation sequencers using the Intel Genomics Cluster solution. Though most use cases will involve variant calling against a known genome, more complex analyses can be performed with this system. A single 4-node building block is powerful enough to perform a full transcriptome analysis. As demands grow, additional building blocks can easily be added to a rack to support multiple next-generation sequencers operating simultaneously.

 

Verifying Performance for Whole Genome Analysis

To help customers quantify the potential benefits of the PCSD Genomics Cluster solution, Intel and Seven Bridges Genomics ran a series of performance tests using the Seven Bridges Genomics software platform. Performance for a whole genome pipeline running on the test cluster was compared with the performance of the same software platform running on a 4-node public cloud cluster based on the previous generation Intel Xeon processor E5 v2 family.

 

The subset of the pipeline used for the performance tests includes four distinct computational phases:

 

  • Phase A: Alignment, deduplication, and sorting of the raw data reads
  • Phase B: Local realignment around Indels
  • Phase C: Base quality score recalibration
  • Phase D: Variant calling and variant quality score recalibration.

 

The results of the performance tests were impressive. The Intel Genomic Cluster solution based on the Intel® Xeon processor E5-2695 v3 family completed a whole genome pipeline in just 429 minutes versus 726 minutes for the cloud-based solution powered by the prior-generation Intel® Xeon processor E5 v2 family.
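
For context, those two timings work out to roughly a 1.7x speedup, or close to five hours saved per genome; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the reported timings (429 vs. 726 minutes).
e5_v3_minutes = 429   # Intel Xeon processor E5-2695 v3 test cluster
e5_v2_minutes = 726   # prior-generation E5 v2 cloud cluster

speedup = e5_v2_minutes / e5_v3_minutes              # ~1.69x faster
hours_saved = (e5_v2_minutes - e5_v3_minutes) / 60   # ~4.95 hours per genome
print(f"speedup: {speedup:.2f}x, time saved: {hours_saved:.1f} hours")
```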

 

Based on these results, researchers and clinicians can potentially complete a whole genome analysis almost five hours sooner using the newer system. They can also use this 4-node system as a building block for constructing large, local clusters. With this strategy, they can easily scale performance to enable high utilization of multiple high-volume, next-generation sequencers.

 

For a more in-depth look at these performance tests, we will soon release a detailed abstract describing the workloads and system behavior in each phase of the analysis.

 

What questions do you have?

Security was a major area of focus at HIMSS 2015 in Chicago. From my observations, here are a few of the key takeaways from the many meetings, sessions, exhibits, and discussions in which I participated:

 

Top-of-Mind: Breaches are top-of-mind, especially cybercrime breaches such as those recently reported by Anthem and Premera. No healthcare organization wants to be the next headline, and incur the staggering business impact. Regulatory compliance is still important, but in most cases not currently the top concern.

 

Go Beyond: Regulatory compliance is necessary but not enough to sufficiently mitigate the risk of breaches. To have a fighting chance at avoiding most breaches, and at minimizing the impact of breaches that do occur, healthcare organizations must go well beyond the minimum required for regulatory compliance.

 

Multiple Breaches: Cybercrime breaches are just one kind of breach. There are several others, for example:


  • There are also breaches from loss or theft of mobile devices which, although often less impactful (because they often involve a subset rather than all patient records), do occur far more frequently than the cybercrime breaches that have hit the news headlines recently.

 

  • Insider breach risks are way underappreciated, and saying they are not sufficiently mitigated would be a major understatement. This kind of breach involves a healthcare worker accidentally exposing sensitive patient information to unauthorized access. This occurs in practice if patient data is emailed in the clear, put unencrypted on a USB stick, posted to an insecure cloud, or sent via an unsecured file transfer app.

 

  • Healthcare workers are increasingly empowered with mobile devices (personal, BYOD and corporate), apps, social media, wearables, Internet of Things, etc. These enable amazing new benefits in improving patient care, but they also bring major new risks. Well-intentioned healthcare workers, under time and cost pressure, have more and more rope to do wonderful things for improving care, but also to inadvertently trip up with accidents that can lead to breaches. Annual “scroll to the bottom and click accept” security awareness training is often ineffective, and certainly insufficient.

 

  • To improve the effectiveness of security awareness training, healthcare organizations need to engage healthcare workers on an ongoing basis. Practical strategies I heard discussed at this year’s HIMSS include gamified spear phishing solutions that help organizations simulate spear phishing emails and help healthcare workers recognize and avoid them. Weekly or biweekly emails can be used to help workers understand recent healthcare security events, such as breaches in peer organizations (a “keeping it real” strategy), how they occurred, why they matter to healthcare workers, patients, and the healthcare organization, and how everyone can help.

 

  • Ultimately, any organization seeking to achieve a reasonable security posture and sufficient breach risk mitigation must first successfully instill a culture of “security is everyone’s job”.

 

What questions do you have? What other security takeaways did you get from HIMSS?

The idea of precision medicine is simple: When it comes to medical treatment, one size does not necessarily fit all, so it's important to consider each individual's inherent variability when determining the most appropriate treatment. This approach makes sense, but until recently it has been very difficult to achieve in practice, primarily due to lack of data and insufficient technology. However, in a recent article in the New England Journal of Medicine, Dr. Francis Collins and Dr. Harold Varmus describe President Obama’s new Precision Medicine Initiative, saying they believe the time is right for precision medicine. The way has been paved, the authors say, by several factors:

 

  • The advent of important (and large) biological databases;
  • The rise of powerful methods of generating high-resolution molecular and clinical data from each patient; and
  • The availability of information technology adequate to the task of collecting and analyzing huge amounts of data to gain the insight necessary to formulate effective treatments for each individual's illness.

 

The near-term focus of the Precision Medicine Initiative is cancer, for a variety of good reasons. Cancer is a disease of the genome, and so genomics must play a large role in precision medicine. Cancer genomics will drive precision medicine by characterizing the genetic alterations present in patients' tumor DNA, and researchers have already seen significant success with associating these genomic variations with specific cancers and their treatments. The key to taking full advantage of genomics in precision medicine will be the use of state-of-the-art computing technology and software tools to synthesize, for each patient, genomic sequence data with the huge amount of contextual data (annotation) about genes, diseases, and therapies available, to derive real meaning from the data and produce the best possible outcomes for patients.

 

Big data and its associated techniques and technologies will continue to play an important role in the genomics of cancer and other diseases, as the volume of sequence data continues to rise exponentially along with the relevant annotation. As researchers at pharmaceutical companies, hospitals and contract research organizations make the high information processing demands of precision medicine more and more a part of their workflows, including next generation sequencing workflows, the need for high performance computing scalability will continue to grow. The ubiquity of genomics big data will also mean that very powerful computing technology will have to be made usable by life sciences researchers, who traditionally haven't been responsible for directly using it.

 

Fortunately, researchers requiring fast analytics will benefit from a number of advances in information technology happening at just the right time. The open-source Apache Spark™ project gives researchers an extremely powerful analytics framework right out of the box. Spark builds on Hadoop® to deliver faster time to value to virtually anyone with some basic knowledge of databases and some scripting skills. ADAM, another open-source project, from UC Berkeley's AMPLab, provides a set of data formats, APIs and a genomics processing engine that help researchers take special advantage of Spark for increased throughput. For researchers wanting to take advantage of the representational and analytical power of graphs in a scalable environment, one of Spark's key libraries is GraphX. Graphs make it easy to associate individual gene variants with gene annotation, pathways, diseases, drugs and almost any other information imaginable.
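
As a rough illustration of the kind of analysis this stack enables, the sketch below uses PySpark to roll up annotated variants by gene. The input file name, column names, and filter values are hypothetical; a real pipeline would more likely work through ADAM’s formats and APIs on top of Spark.

```python
# Minimal PySpark sketch of a variant-by-annotation rollup. The input file,
# columns, and consequence labels are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("variant-rollup").getOrCreate()

# Annotated variants exported as TSV: one row per (sample, variant) pair.
variants = spark.read.csv("annotated_variants.tsv", sep="\t", header=True)

# Count distinct samples carrying at least one protein-altering variant per gene.
per_gene = (variants
            .filter(F.col("consequence").isin("missense", "stop_gained", "frameshift"))
            .groupBy("gene")
            .agg(F.countDistinct("sample_id").alias("carriers"))
            .orderBy(F.desc("carriers")))

per_gene.show(20)
```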

 

At the same time, Cray has combined high performance analytics and supercomputing technologies into the Intel-based Cray® Urika-XA™ extreme analytics platform, an open, flexible and cost-effective platform for running Spark. The Urika-XA system comes preintegrated with Cloudera Hadoop and Apache Spark and optimized for the architecture to save time and management burden. The platform uses fast interconnects and an innovative memory-storage hierarchy to provide a compact and powerful solution for the compute-heavy, memory-centric analytics perfect for Hadoop and Spark.

 

Collins and Varmus envision more than 1 million Americans volunteering to participate in the Precision Medicine Initiative. That's an enormous amount of data to be collected, synthesized and analyzed into the deep insights and knowledge required to dramatically improve patient outcomes. But the clock is ticking, and it's good to know that technologies like Apache Spark and Cray's Urika-XA system are there to help.

 

What questions do you have?

 

Ted Slater is a life sciences solutions architect at Cray Inc.

Dr. Peter White is the developer and inventor of the “Churchill” platform, and serves as GenomeNext’s principal genomic scientist and technical advisor.

 

Dr. White is a principal investigator in the Center for Microbial Pathogenesis at The Research Institute at Nationwide Children’s Hospital and an Assistant Professor of Pediatrics at The Ohio State University. He is also Director of Molecular Bioinformatics, serving on the research computing executive governance committee, and Director of the Biomedical Genomics Core, a nationally recognized microarray and next-gen sequencing facility that helps numerous investigators design, perform and analyze genomics research. His research program focuses on molecular bioinformatics and high performance computing solutions for “big data”, including the discovery of disease-associated human genetic variation and understanding the molecular mechanisms of transcriptional regulation in both eukaryotes and prokaryotes.

 

We recently caught up with Dr. White to talk about population scale genomics and the 1000 Genomes Project.

 

Intel: What is population scale genomics?

 

White: Population scale genomics refers to the large-scale comparison of sequenced DNA datasets of a large population sample. While there is no minimum, it generally refers to the comparison of sequenced DNA samples from hundreds, even thousands, of individuals with a disease or from a sampling of populations around the world to learn about genetic diversity within specific populations.

 

The human genome is comprised of approximately 3,000,000,000 DNA base-pairs (nucleotides). The first human genome sequence was completed in 2006, the result of an international effort that took a total of 15 years to complete. Today, with advances in DNA sequencing technology, it is possible to sequence as many as 50 genomes per day, making it possible to study genomics on a population scale.

 

Intel: Why does population scale genomics matter?

 

White: Population scale genomics will enable researchers to understand the genetic origins of disease. Only by studying the genomes of thousands of individuals will we gain insight into the role of genetics in diseases such as cancer, obesity and heart disease. The larger the sample size that can be analyzed accurately, the better researchers can understand the role that genetics plays in a given disease, and from that we will be able to better treat and prevent disease.

 

Intel: What was the first population scale genomic analysis?

 

White: The 1000 Genomes Project is an international research project in which a consortium of over 400 scientists and bioinformaticians set out to establish a detailed catalogue of human genetic variation. This multi-million dollar project was started in 2008, and sequencing of 2,504 individuals was completed in April 2013. The data analysis was completed 18 months later, with the release of the final population variant frequencies in September 2014. The project resulted in the discovery of millions of new genetic variants and successfully produced the first global map of human genetic diversity.

 

Intel: Can analysis of future large population scale genomics studies be automated?

 

White: Yes. The team at GenomeNext and Nationwide Children’s Hospital was challenged to analyze a complete population dataset compiled by the 1000 Genomes Consortium in one week as part of the Intel Heads In the Clouds Challenge on Amazon Web Services (AWS). The 1000 Genomes Project is the largest publicly available dataset of genomic sequences, sampled from 2,504 individuals from 26 populations around the world.

 

All 5,008 samples (2,504 whole genome sequences & 2,504 high depth exome sequences) were analyzed on GenomeNext’s Platform, leveraging its proprietary genomic sequence analysis technology (recently published in Genome Biology) operating on the AWS Cloud powered by Intel processors. The entire automated analysis process was completed in one week, with as many as 1,000 genome samples being completed per day, generating close to 100TB of processed result files. The team found a high degree of correlation with the original analysis performed by the 1000 Genomes Consortium, with additional variants potentially discovered during the analysis performed using GenomeNext’s Platform.

 

Intel: What does GenomeNext’s population scale accomplishment mean?

 

White: GenomeNext believes this is the fastest, most accurate and most reproducible analysis of a dataset of this magnitude. One benefit of this work is that it will enable researchers and clinicians using population scale genomic data to distinguish common genetic variation, as discovered in this analysis, from rare pathogenic disease-causing variants. As population scale genomic studies become routine, GenomeNext provides a solution through which the enormous data burden of such studies can be managed, analysis can be automated, and results can be shared with scientists globally through the cloud. Access to a growing and diverse repository of DNA sequence data, including the ability to integrate and analyze the data, is critical to accelerating the promise of precision medicine.

 

Our ultimate goals are to provide a global genomics platform, automate the bioinformatics workflow from sequencer to annotated results, provide a secure and regulatory compliant platform, dramatically reduce the analysis time and cost, and remove the barriers of population scale genomics.

With apologies and acknowledgments to Dr. James Cimino, whose landmark paper on controlled medical terminologies still sets a challenging bar for vocabulary developers, standards organizations and vendors, I humbly propose a set of new desiderata for analytic systems in health care. These desiderata are, by definition, a list of highly desirable attributes that organizations should consider as a whole as they lay out their health analytics strategy – rather than adopting a piecemeal approach.

 

The problem with today’s business intelligence infrastructure is that it was never conceived of as a true enterprise analytics platform, and it definitely wasn’t architected for the big data needs of today or tomorrow. Many, in fact probably most, health care delivery organizations have allowed their analytic infrastructure to evolve into what a charitable person might describe as controlled anarchy. There has always been some level of demand for executive dashboards, which has led to IT investment in home-grown, centralized, monolithic, relational database-centric enterprise data warehouses (EDWs) with one or more online analytical processing-type systems (such as Crystal Reports, Cognos or BusinessObjects) grafted on top to create the end-user-facing reports.

 

Over time, departmental reporting systems have continued to grow up like weeds; data integration and data quality have become a mini-village that can never keep up with end-user demands. Something has to change. Here are the desiderata that you should consider as you develop your analytic strategy:

 

Define your analytic core platform and standardize. As organizations mature, they begin to standardize on the suite of enterprise applications they will use. This helps to control processes and reduces the complexity and ambiguity associated with having multiple systems of record. As with other enterprise applications such as electronic health record (EHR), you need to define those processes that require high levels of centralized control and those that can be configured locally. For EHR it’s important to have a single architecture for enterprise orders management, rules, results reporting and documentation engines, with support for local adaptability. Similarly with enterprise analytics, it’s important to have a single architecture for data integration, data quality, data storage, enterprise dashboards and report generation – as well as forecasting, predictive modelling, machine learning and optimization.

 

Wrap your EDW with Hadoop. We’re entering an era where it’s easier to store everything than decide which data to throw away. Hadoop is an example of a technology that anticipates and enables this new era of data abundance. Use it as a staging area and ensure that your data quality and data transformation strategy incorporates and leverages Hadoop as a highly cost-effective storage and massively scalable query environment.

 

Assume mobile and web as primary interaction. Although a small number of folks enjoy being glued to their computer, most don’t. Plan for this by making sure that your enterprise analytic tools are web-based and can be used from anywhere on any device that supports a web browser.

 

Develop purpose-specific analytic marts. You don’t need all the data all the time. Pick the data you need for specific use cases and pull it into optimized analytic marts. Refresh the marts automatically based on rules, and apply any remaining transformation, cleansing and data augmentation routines on the way inbound to the mart.

 

Leverage cloud for storage and Analytics as a Service (AaaS). Cloud-based analytic platforms will become more and more pervasive due to the price/performance advantage. There’s a reason that other industries are flocking to cloud-based enterprise storage and computing capacity, and the same dynamics hold true in health care. If your strategy doesn’t include a cloud-based component, you’re going to pay too much and be forced to innovate at a very slow pace.

 

Adopt emerging standards for data integration. Analytic insights are moving away from purely retrospective dashboards and moving to real-time notification and alerting. Getting data to your analytic engine in a timely fashion becomes essential; therefore, look to emerging standards like FHIR, SPARQL and SMART as ways to provide two-way integration of your analytic engine with workflow-based applications.
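
As a simple illustration of what that two-way integration can look like, the sketch below pulls recent observations for one patient from a FHIR server over its standard REST search API. The base URL, patient ID, and result handling are placeholders, and production code would add SMART-on-FHIR authorization, paging through bundle links, and error handling.

```python
# Minimal sketch of pulling data into an analytics engine over FHIR's REST API.
# The endpoint and patient ID below are hypothetical placeholders.
import requests

FHIR_BASE = "https://fhir.example-hospital.org/R4"   # hypothetical FHIR endpoint
patient_id = "12345"

# Ask for recent hemoglobin A1c observations (LOINC 4548-4) for one patient.
resp = requests.get(
    f"{FHIR_BASE}/Observation",
    params={"patient": patient_id, "code": "4548-4", "_sort": "-date", "_count": 50},
    headers={"Accept": "application/fhir+json"},
    timeout=30,
)
resp.raise_for_status()

bundle = resp.json()
for entry in bundle.get("entry", []):
    obs = entry["resource"]
    value = obs.get("valueQuantity", {})
    print(obs.get("effectiveDateTime"), value.get("value"), value.get("unit"))
```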

 

Establish a knowledge management architecture. Over time, your enterprise analytic architecture will become full of rules, reports, simulations and predictive models. These all need to be curated in a managed fashion to allow you to inventory and track the lifecycle of your knowledge assets. Ideally, you should be able to include other knowledge assets (such as order sets, rules and documentation templates), as well as your analytic assets.

 

Support decentralization and democratization. Although you’ll want to control certain aspects of enterprise analytics through some form of Center of Excellence, it will be important for you to provide controlled access by regional and point-of-service teams to innovate at the periphery without having to provide change requests to a centralized team. Centralized models never can scale to meet demand, and local teams need to be given some guardrails within which to operate. Make sure to have this defined and managed tightly.


Create a social layer. Analytics aren’t static reports any more. The expectation from your users is that they can interact, comment and share the insights that they develop and that are provided to them. Folks expect a two-way communication with report and predictive model creators and they don’t want to wait to schedule a meeting to discuss it. Overlay a portal layer that encourages and anticipates a community of learning.

 

Make it easily actionable. If analytics are just static or drill-down reports or static risk scores, users will start to ignore them. Analytic insights should be thought of as decision support; and, the well-learned rules from EHRs apply to analytics too. Provide the insights in the context of my workflow, make it easy to understand what is being communicated, and make it easily actionable – allow users to take recommended actions rather than trying to guess what they might need to do next.

 

Thanks for reading, and please let me know what you think. Do these desiderata resonate with you? Are we missing anything essential? Or is this a reasonable baseline for organizations to get started?

Dr. Graham Hughes is the Chief Medical Officer at SAS and an industry expert in SAS Institute’s Healthcare & Life Sciences Center for Health Analytics and Insights (CHAI). A version of this post was originally published last August on A Shot in the Arm, the SAS Health and Life Sciences Blog.

Even when patient health information is effectively shared within a healthcare network, provider organizations still struggle to share patient data across organizational boundaries to meet the healthcare needs of increasingly mobile patient populations.

 

For instance, consider the healthcare needs of retirees escaping the deep freeze of a Midwestern winter for the warmer climate of Florida. Without full access to unified and comprehensive patient data, healthcare providers new to the patient run the risk of everything from ordering expensive, unnecessary tests to prescribing the wrong medications. In these situations, at minimum, the patient’s quality of care is suboptimal. And in the worst-case scenarios, a lack of interoperability across networks can lead to devastating patient outcomes.

 

System and process

 

To ensure better patient outcomes, healthcare organizations require a level of system and process interoperability that enables sharing of the real-time patient data that leads to informed provider decision-making, decreased expenses for payer organizations – and ultimately enhanced patient-centered care across network and geographic boundaries. Effective interoperability means everyone wins.

 

Information support

 

To keep efficiency, profitability and patient-centered care all moving in the right direction, healthcare organizations need complete visibility into all critical reporting metrics across hospitals, programs and regions. In answer to that need, Intel has partnered with MarkLogic and Tableau to develop an example of the business intelligence dashboard of the future. This interactive dashboard runs on Intel hardware and MarkLogic software. Aptly named, Tableau’s visually rich and shareable display features critical, insightful analytics that tell the inside story behind each patient’s data. This technology empowers clinicians new to the patient with a more holistic view of the patient’s health.

 

Combined strength

 

By combining MarkLogic’s Enterprise NoSQL technology with Intel’s full range of products, Tableau is able to break down information silos, integrate heterogeneous data, and give access to critical information in real-time – providing a centralized application of support for everything from clinical informatics and fraud prevention to medical research and publishing. Tableau powered by MarkLogic and Intel delivers clear advantages to payers, providers and patients.

 

To see Intel, MarkLogic, and Tableau in action, please stop by and visit with us at HIMSS in the Mobile Health Knowledge Zone, Booth 8368. We’ll guide you through an immersive demonstration that illustrates how the ability to integrate disparate data sources will lead you to better outcomes for all stakeholders in the healthcare ecosystem, including:

  • Patient empowerment
  • Provider-patient communication
  • Payer insight into individuals or population health
  • Product development (drug and device manufacturers)

 

What questions do you have about interoperability?

 

Noland Joiner is Chief Technology Officer, Healthcare, at MarkLogic.

As we all know, healthcare is a well regulated and process-driven industry. The current timeline for new research and techniques to be adopted by half of all physicians in the United States is around 17 years. While many of these regulations and policies are created with the best of intentions, they are often designed around criteria that don’t have the patient in mind, but instead serve billing, reimbursement, and organizational efficiency. Rarely do we see them designed around the experience of, and interactions with, the patient.

 

The challenge for technology at the moment, especially for the physician, is how to move beyond the meaningful use criteria that the federal government has adopted.

 

Outdated record rules

 

We are currently working with medical record rules and criteria that are 20 years old, and trying to adapt and apply them to our electronic records. The medical record has become a repository of wasted words and phrases that have little meaning for the physician/patient interaction. Because of the meaningful use criteria and the structure of medical records, when I wade through a record it is very difficult to find the relevant information.

 

As a person involved in quality review, what I find more and more in electronic records is that it is very easy to potentiate mistakes and errors. One part of the whole system that I find hard to justify is having the physician, who is one of the most costly members of the team, take time to act, in effect, as a clerk or scribe and fill out the required records.

 

Disrupts visits

 

The problem we can identify with all of this, at least in the office visit portion, is that it disrupts the visit with the patient. It focuses the conversation on completing the clerical tasks required to meet meaningful use criteria. And to me, there is nothing more oppressive in this interaction than this clerical work, which only gets worse when it is done electronically.

 

So if we look at this situation from the perspective of people (both the patient and the physician), and at how we can use electronic tools, we could rapidly be liberated from the oppression of regulatory interactions. It would be so easy, right now, to capture a patient’s activities and health to create a historical archive. This could be created in some template using video and audio technologies, and language dictation software, that could give the physician much more content about what is going on.

 

I say this after visiting the Center for Innovation team at the Mayo Clinic Scottsdale location, where they are conducting a wearables experiment in which the provider wears Google Glass during an office visit with a patient.

 

The experiment had a scribe in another room observing and recording the interaction through the Glass feed, both video and audio, to capture the visit and create the medical record. As I looked through the note that was put together, it was a good note. It met the requirements for the bureaucrats, but it missed the richness of the visit that I observed, and it missed what the patient needed. It missed the steps and instructions that the physician covered with the patient. There is no place to record this in the current set up.

 

Easy review access

 

Just think if that interaction was available, through a HIPAA compliant portal, for the patient and provider to access. When the patient goes home and, a few days later, asks, “What did my doctor cover during my visit?” they would be able to watch and hear the conversation right there. They might have brochures and literature that was given to them, but imagine if they had access to that video and audio to replay and watch again.

 

It seems to me that we have the technology at hand to make this a viable reality.

 

The biggest challenge here is to convince certain parties, like the Federal Government and Medicare, that there is a better way to do this, and that these are more meaningful ways. Recalling who the decision makers are that designed these processes and regulations, we must work to change the design criteria from that of a compliance perspective, to one where the needs of the patient come first.

 

That’s where I think we have the great opportunities and great challenges to turn this around. If we think for a minute, and decide to do away with all of these useless meaningful use criteria, and instead say, “Let’s go back and think how we can make the experience better for the patient,” and leverage technologies to do just that, we would be much better off.

 

What questions do you have?

 

Dr. Douglas Wood is a practicing cardiologist and the Medical Director for the Mayo Clinic’s Center for Innovation.

I’ve looked at many aspects of Bring Your Own Device in healthcare throughout this series of blogs, from the costs of getting it wrong to the upsides and downsides, and the effects on network and server security when implementing BYOD.

 

I thought it would be useful to distil my thoughts around how healthcare organisations can maximize the benefits of BYOD into 5 best practice tips. This is by no means an exhaustive list but provides a starting point for the no doubt lengthy conversations that need to take place when assessing the suitability of BYOD for an organisation.

 

If you’ve already implemented BYOD in your own healthcare organisation then do register and leave a comment below with your own tips – I know this community will appreciate your expertise.

 

Develop a Bring Your Own Device policy


It sounds like an obvious first step doesn’t it? However, I’d like to stress the importance of getting the policy right from day one. Do your research with clinical staff, understand their technology and process needs, identify their workarounds and ask how you can make their job of patient care easier. Development of a detailed and robust BYOD policy may take much longer than anticipated, and don’t forget that acceptance and inclusion of frontline staff is key to its success. Alongside the nuts and bolts of security it’s useful to explain the benefits to healthcare workers to get their trust, confidence and buy-in from the start.


Mobile Device Management

 

It’s likely that you have the network/server security aspect covered off under existing corporate IT governance. A key safeguard in implementing BYOD is Mobile Device Management (MDM), which should help meet your organisation’s specific security requirements. Some of these requirements may include restrictions on storing/downloading data onto the device, password authentication protocols and anti-virus/encryption software. Healthcare workers must also be given advice on what happens in the event of loss or theft of the mobile device, or when they leave the organisation in respect of remote deletion of data and apps. I encourage you to read our Case Study on Madrid Community Health Department on Managing Mobile for a great insight into how one healthcare organisation is assessing BYOD.


Make it Inclusive


For a healthcare organisation to fully enjoy the benefits of a more mobile and flexible workforce through BYOD they need to ensure that as many workers as possible (actually, I’d say all) can use their personal devices. It can be complex but some simple stipulations in the BYOD policy, such as requiring the user to ensure that they have the latest operating system and app updates installed at all times, can help to mitigate some of the risk. Also I would be conscious of the level of support an IT department can give from both a resource (people) and knowledge of mobile operating systems point of view. Ultimately, the most effective BYOD policies are device agnostic.


Plan for a Security Breach

 

The best BYOD policies plan for the worst, so that if the worst does happen it can be managed efficiently, effectively and have as little impact as possible on the organisation and patients. This requires creation of a Security Incident Response Plan. Planning for a security breach may prioritise fixing the weak link in the security chain, identifying the type and volume of data stolen and reporting the breach to a governmental department. For example, the Information Commissioner’s Office (ICO) in the UK advises that ‘although there is no legal obligation on data controllers to report breaches of security, we believe that serious breaches should be reported to the ICO.’


Continuing Assessment


From a personal perspective we all know how quickly technology is changing and improving our lives. Healthcare is no different and it’s likely that the tablet carried by a nurse today has more computing power than the desktop of just a couple of years ago. With this rapid change comes the need to continually assess a BYOD policy to ensure it meets the advances in hardware and software on a regular basis. The risk landscape is also constantly evolving as new apps are installed, new social media services become available, and healthcare workers innovate new ways of collaborating. Importantly though, I stress that the BYOD policy must also take into account the advances in the working needs and practices of healthcare workers. We’re seeing some fantastic results from improved mobility, security and ability to store and analyse large amounts of data across the healthcare spectrum. We cannot afford for this progress to be hindered by out-of-date policies. The policy is the foundation of the security and privacy practice. A good privacy and security practice enables faster adoption, use, and realisation of the benefits of new technologies.

 

I hope these best practice tips have given you food for thought. We want to keep this conversation about the benefits of a more mobile healthcare workforce going so do follow us on Twitter and share our blogs amongst your personal networks.

 

BYOD in EMEA series: Read Part Three

Join the conversation: Intel Health and Life Sciences Community

Get in touch: Follow us via @intelhealth

David Houlding, MSc, CISSP, CIPP is a Healthcare Privacy and Security lead at Intel and a frequent blog contributor.

Find him on LinkedIn

Keep up with him on Twitter (@davidhoulding)

Check out his previous posts

Today, many healthcare organizations are experimenting with and implementing the art of virtual care. Innovation in technology is finally able to address the need to go beyond brick and mortar and drive “care anywhere” when it is needed. While technology is enabling providers to drive virtual care initiatives that increase quality of care, give patients more access, and improve patient empowerment, therein lies the question: How secure is the ecosystem to which more and more personal health information is being exposed?

 

Current Technology

 

First, let’s look at where we are currently. Healthcare is one of the most exciting industries today, thanks to digital technology and the industry and governments coming together to address some major pain points that existed for many decades. We are finally at a point where many of the “what if we could” ideas that clinicians and patients worldwide had can be realized. For example, many providers are driving initiatives around virtual care, including telehealth, and remote patient monitoring leveraging technology that can reside in patients’ homes.

 

In the future, payers may be able to use HIT and device information to drive big data and provide the optimal plans for patients in different demographics given the geographic region where they live, family history, and life habits. Last, but not least, patients are empowered with tools, devices, and information to proactively manage their own health the way that really makes sense, outside the hospital.

 

Wearables and Mobility

 

Simple forms of home monitoring have existed for years; however, today, there is a big disruption in the market due to new form factors of clinical wearables and connectivity solutions, which are easier to use and have a greater ability to transfer and provide access to patient data. Smartphones and tablets have become an integral part of people’s lives and can serve as a tool for telehealth, as well as a hub for clinical patient information. This makes the implementation of virtual care much easier, allowing patients to have options to cost-effective solutions and allowing them to manage their health more proactively.

 

At the same time, this proliferation of devices and data also increases the risk of attack. Any point where data is collected, used, or stored can be at risk and needs to be secured. If the wearable devices collecting the data are outside the U.S. and the data is uploaded to the cloud inside the U.S., the use of these wearables can represent trans-border data flow, which can be a significant concern, especially for countries with strong data protection laws such as those in the EU. We need to be more responsible about how data is captured, transmitted, and protected. At Intel, we provide security solutions, such as fast encryption, that integrate well into the user experience while reducing cost. We are working with our customers to develop the most effective solutions for data privacy and security.
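
To make the point about protecting data at every point of collection and transmission more concrete, here is a generic sketch of encrypting a wearable reading before upload using the open-source Python cryptography library. This is an illustrative example only, not a description of Intel’s hardware-based security features, and real deployments need proper per-device key provisioning and management.

```python
# Generic illustration of protecting a wearable reading before it leaves the
# device: encrypt at the point of collection so data in transit and at rest in
# the cloud is unreadable without the key. Sketch only; not Intel's specific
# security technology. Keys would normally be provisioned per device and kept
# in secure hardware, not generated inline as shown here.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice: provisioned and protected per device
cipher = Fernet(key)

reading = {"device_id": "hr-monitor-01", "timestamp": "2015-04-27T10:15:00Z",
           "heart_rate_bpm": 72}

token = cipher.encrypt(json.dumps(reading).encode("utf-8"))
# 'token' is what gets uploaded; only holders of the key can recover the reading.
assert json.loads(cipher.decrypt(token).decode("utf-8"))["heart_rate_bpm"] == 72
```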

 

Key Challenges

 

Overall, it is wonderful to see so many healthcare institutions driving virtual care. Care is definitely moving outside the traditional venues to new more natural settings closer to what patients need. However, this also exposes more patient health information to be outside the hospital walls and outside the walls of patients’ homes.

 

As such, at Intel, when we design a solution, we enable security in our core HW technology. And this provides differentiation in how the users experience security. To have a great experience, the end user should not be subjected to data breaches or other security incidents, and solutions need to be smarter about detecting user context and risks, and guiding the user to safer alternatives. Devices need to function reliably and be free of malware.

 

In addition, we are focused on driving consistent security performance across the compute continuum of care.

 

That brings us back to the original question: How secure is the ecosystem? Security will play a key role in ensuring a safe solution that providers, payers, and patients can all rely on. Security would also be key to enabling faster adoption of virtual care. Depending on the types of patient information collected, used, retained, disclosed, or shared, and how to store/dispose it, security can be designed to optimally protect privacy. It is a complex area to address, but given the value of health data, I am hopeful that organizations will start to design their virtual care solutions and ecosystem with security as one of the key pillars.

 

What questions do you have?

 

Kay Eron is General Manager Health IT & Medical Devices at Intel.

Next-generation sequencing (NGS), also known as high-throughput sequencing, is the catch-all term used to describe a number of modern sequencing technologies, including Illumina (Solexa), Roche 454, Ion Torrent (Proton/PGM), and SOLiD. These technologies allow us to sequence DNA and RNA much faster and more cheaply than the previously used Sanger sequencing, and they have revolutionized the study of genomics and molecular biology.

 

The cost of genomic sequencing has also come a long way: from roughly $3 billion to sequence the first human genome, to about $100 million per genome in 2001, to about $1,000 as of January 2014. Compared to Moore’s law, which observes that computing power doubles every two years, the cost of sequencing a genome has been falling five to 10 times annually.

 

The issue now is the computing power needed to analyze this data. Newer sequencers are producing four times the data in half the time. Intel® technologies like Xeon® and Xeon® Phi®, SSDs, 10/40 GbE networking solutions, the Omni-Path fabric interconnect, and Intel Enterprise Edition for Lustre (IEEL), along with partners like Cloudera and Amazon Web Services, are helping to cut the time for secondary analysis from weeks to hours.

 

Genomic information is now catalogued and used for advancing precision medicine. For example, genomic information from TCGA (The Cancer Genome Atlas) has led to developments in and FDA approval of certain cancer treatments. Currently, there are about 34 FDA-approved targeted therapies, like Gleevec, which treats gastrointestinal stromal tumors by blocking tyrosine kinase enzymes. Though approved by the FDA in 2001, its approval was extended to 10 more types of cancer in 2011.

 

Technical Challenges

 

Sequencers are now producing four times more data in 50 percent less time, at about 0.5 TB/device/day. This is a lot of data, and newer modalities like 4-D imaging are now producing 2 TB/device/day. The majority of the software used for informatics and analytics is open source, and the market is very fragmented.

 

Once the data is generated, the burden of storing, managing, sharing, ingesting, and moving it has its own set of challenges.

 

Innovation in algorithms and techniques is outpacing what IT can support, thus requiring flexibility and agility in infrastructures.

 

Collaboration across international boundaries is an absolute necessity and that introduces challenges with security and access rights.

 

Finally, as genomics makes its way into clinics, regulations like HIPAA will kick in.

 

At the clinical level, you have barriers around the conservation and validity of the sample, validity and repeatability of laboratory results, novelty and interpretation of biomarkers, merging genomics data with clinical data, actionability and eventually changing the healthcare delivery paradigm.

 

There are too few clinical specialists and key healthcare professionals, like pharmacists, who are trained in clinical genomics. New clinical pathways and guidelines will have to be created. Systems will need to be put in place to increase transparency and accountability of different stakeholders of genomic data usage. Equality and justice need to be ensured and protection against discrimination needs to be put in place (GINA).

 

Reimbursement methods need to consider flexible pricing for tailored therapeutics responses along with standardization and harmonization (CPT codes).

 

Path Forward

 

Looking ahead, we need to develop a standardized genetic terminology (HL7, G4GH, eMERGE) and make sure EHRs support the ability to browse sequenced data. Current EHRs will need standards around communication, querying, storing, and compressing large volumes of data while interfacing with EHRs’ identifiable patient information.

 


 

Intel is partnering with Intermountain Health to create a new set of Clinical Decision Support (CDS) applications by combining clinical, genomic, and family health history data. The goal is to promote widespread use of CDS that will help clinicians/counselors in assessing risk and assist genetic counselors in ordering genetic tests.

 

The solution will be agnostic to data collection tools, scale to different clinical domains and other healthcare institutions, be standards based where they exist, work across all EHRs, leverage state-of-the-art technologies, and be flexible to incorporate other data sources (e.g., imaging data, personal device data).

 

What questions do you have?

 

Ketan Paranjape is the general manager of Life Sciences at Intel Corporation.


Improving care for patients is a common goal for our healthcare team and partners, so I’m really excited to be able to share the outcome of a collaborative project we’ve been working on with the Spanish Society of Family Medicine and Community (semFYC).

 

Together we have created a tablet featuring an app store exclusively for doctors. Meeting the needs of healthcare professionals with an easy-to-use mobile device combined with medical applications that have the endorsement of a scientifically-recognised body in semFYC is incredibly exciting for all involved and a step-change for the way GPs and physicians access the latest clinical information.

 

Josep Basora, President of semFYC, spoke to me about the tablet and app store created in partnership with Intel: “When I started to drive this project I wanted to facilitate the right information, at the appropriate place and by the authorised time. Mobility is one of the keys that defines the work of the current healthcare professional.”

 

“For a physician, the possibility to use applications that have the endorsement of a scientific society such as semFYC has real significance, as it has the full assurance that the tool used is supported by rigorous governance. This has certainly had a positive effect on both resource optimisation and improvement of patient service.”

 

semFYC brings together more than 17 Societies of Family Medicine and Community in Spain covering a total of 19,500 GPs with a focus on improving the knowledge and skills of its members. The app store, which exclusively features medical applications, automatically updates installed apps with the latest information around procedures and drugs, thus reducing the time GPs require to update their knowledge and consequently increasing the quality of patient care.

 

Take a look at the video above to find out more about the tablet and health app store created by Intel with semFYC.

 

A perfect storm of market conditions is forming that will likely propel consumer health near the top of many enterprise priority lists and justify its estimated 40 percent CAGR in 2015.

Intel has been the driving force behind the global technology revolution for more than 40 years, and we’ve seen the dramatic impact of technology on healthcare. Looking ahead, here are the five drivers that we see fueling growth in consumer health:

Payment Reform

 

One of the most important conditions is payment reform. As the basis for reimbursement shifts away from fee-for-service and toward quality-based outcomes in the U.S., providers will extend the continuum of care far beyond their hospitals to more accurately quantify value after discharge.

Data

 

One of the best ways to optimize care and demonstrate effectiveness is to implement a holistic approach for understanding a person’s status by deriving actionable data about her individually and continuously from multiple sources — including consumer devices.


Consumer Involvement

 

Consumer empowerment is also going to play a large role. It began with the shift from a business model that was traditionally B2B to one that was more B2C as commercial health insurers positioned themselves to personally engage millions of newly eligible customers. Now, consumer health solutions enable all payer organizations — private, public, employer — to promote healthy behaviors and timely preventative care that has been shown to reduce the occurrence of costly acute emergencies. Ultimately, consumers will have the ability to be more active in managing their own care, with the expectation of access to more of their health information anytime.


Baby Boomers

 

A demographic shift is also fueling this growth. Every day, 10,000 baby boomers celebrate their 65th birthday in the U.S., and that trend will continue until at least 2019. Unfortunately, 90 percent of them, with help from their family caregivers in some cases, are managing at least one chronic medical condition (860 million people worldwide). As telehealth becomes more widely adopted (and reimbursed), remote doctor consultations will increasingly rely on consumer health technologies to improve chronic disease management and ease the stress on a limited pool of primary care physicians.

 

Worldwide Approach

 

Many fast-growing emerging global markets, like China and India, are exhibiting strong appetites for consumer health solutions that can add value while supplementing recent government efforts to provide more efficient virtual care to their significant aging and rural populations. As more technology vendors from the region offer innovative products at very competitive price points, access and adoption will continue to climb at a healthy pace, contributing to notable growth of the consumer health market segment regionally and worldwide.

 

Of course, one of the biggest hurdles to overcome is alignment of priorities for all major stakeholders. You need a consumer-centered design, an evaluation of clinical workflow integration, and a way to measure the business impact of the goals.

 

What questions do you have? What other drivers do you see impacting consumer health?

 

Michael Jackson is General Manager, Consumer Health at Intel Corporation.
