There is talk in the medical industry of helping providers practice at the maximum of their licensure. One reason for this is that we don't have enough primary care physicians, and, in part, can address this gap with physician assistants, nurse practitioners, registered nurses, and a myriad of non-traditional team members like pharmacists and health coaches. It so happens that all of these individuals can be more cost-effective than physicians.

 

Medical assistants can do more than escort patients to an exam room and take vital signs. Nurse practitioners have the training and ability to move beyond diagnosing and treating acute illness to engage in chronic disease management. Collaborative practice agreements allow pharmacists to manage complex patients on complicated medication regimens, assisting the healthcare team with their unique expertise in drug effects and interactions. As for doctors, the highest-paid members of the care team, how do we make sure they are doing the things that only doctors can do while engaging their team to help with the rest?

 

Elevating the Patient Role

 

There's one team member who is often left out of this conversation -- the patient. How do we engage patients at the maximum of their ability? Patients are capable of doing a lot more to manage their health if we would just give them the proper training and tools. By the way, patients are free. We don’t have to pay them to take care of themselves.

 

mHealth is the platform on which healthcare will move forward. What role can and should the users of mHealth technologies play? How do we maximize the impact that each user group can have on the health outcomes we are all working towards? How does everyone practice at the maximum of his or her licensure in an mHealth world?

 

It's important to remember the simple goal we are all working towards. We are trying to help people live healthier lives and trying to do it cost effectively. Patients are indispensable in working towards this goal. Patients have access to themselves all day, every day. They are on the front lines of healthcare, and they don’t cost anything.

 

Merging Patients and mHealth

 

In fact, according to an ONC-funded pilot project at Geisinger Health System, patients help to spot errors such as outdated information and omissions such as medications prescribed by another provider. Personal health records can drive these efforts.

 

  • Patients are eager to provide feedback on their medication list – 30 percent of patient feedback forms were completed and in 89 percent of cases, patients requested changes to their medication record.

 

  • Patient feedback is accurate and useful – on average, patients had 10.7 medications listed, with 2.4 requested changes. In 68 percent of cases, the pharmacist made changes to the medication list in the electronic health record based on the patient’s feedback.

 

ONC officials also write that the Open Notes Project, launched in 2010 by Geisinger, the University of Washington's Harborview Medical Center, Beth Israel Deaconess Medical Center and the Robert Wood Johnson Foundation, “found that patients who were given access to their doctors' notes reported they do better in taking their meds.”

 

If patients are going to become effective team members, we need to maximize their potential. mHealth solutions can help remove barriers by providing effective education, the necessary tools for tracking health and the right connectivity with other members of their healthcare team. This would allow the rest of the team to focus on the aspects of care they are uniquely qualified to address.

 

What questions do you have?

 

Lucienne Ide, co-author of this blog post, is CEO of Rimidi.com and Justin Barnes is a Managing Director at Justin Barnes Advisors.

The healthcare industry’s digital transformation calls for shifting the burden of care from the system to the patient. Technology is helping to lead this charge, as evidenced by the growing number of patients who are now able to track their own health information as well as generate data that previously was unavailable to physicians and other care providers. With the 2nd Annual Healthcare Cyber Security Summit this month – and the attack vectors targeting the industry having changed over the past couple years – it’s a good time to revisit the topic.

 

Mobile devices, EMRs, HIEs, cloud computing, telemedicine and other technologies are now common to healthcare settings, incrementally delivering on their promise to stretch resources and lower costs. But along with these new capabilities come new threats to patient data and the organizations responsible for managing it. Such threats are reflected in the rise of HIPAA data breaches from 2012-2013, as well as in the increase of state- and corporate-sponsored cyber attacks targeting medical device makers in 2014. As a recent webinar presented by NaviSite pointed out, the emerging Internet of Things (IoT) also raises the stakes for healthcare organizations, as reflected by Europol’s recent warning about IoT and the FDA’s determination that some 300 medical devices are vulnerable to attack.

 

In April, the FBI issued a sobering notification to healthcare organizations stating that the industry is “…not technically prepared to combat against cyber criminals, basic cyber intrusion tactics, techniques and procedures…” Nor is it ready for some of the more advanced persistent threats facing the industry.

 

It doesn’t help that medical records are considered up to 50 times more valuable on the black market than credit card records.

 

Whether through HIPAA data breaches, malware, phishing emails, sponsored cyber-attacks, or threats surrounding the evolving Internet of Things, the emerging threats in healthcare cannot go unaddressed. Security experts say cyber criminals increasingly are targeting the industry because many healthcare organizations still rely on outdated computer systems lacking the latest security features.

 

With so many mobile and internet-connected devices located in healthcare settings, determining how to secure them should be a top priority. That means developing and implementing strategies that make anti-virus, encryption, file integrity and data management a top priority.
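As one concrete illustration of the file-integrity piece, the sketch below (our own minimal example, not any particular vendor's product; the monitored path is hypothetical) baselines SHA-256 hashes for a set of files and later reports anything missing or modified:

```python
import hashlib
import json
import os

def sha256_of(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def baseline(root, manifest_path):
    """Record a hash for every file under `root` as the trusted baseline."""
    manifest = {}
    for folder, _, files in os.walk(root):
        for name in files:
            path = os.path.join(folder, name)
            manifest[path] = sha256_of(path)
    with open(manifest_path, "w") as out:
        json.dump(manifest, out, indent=2)

def check(manifest_path):
    """Compare current hashes against the baseline and report any drift."""
    with open(manifest_path) as handle:
        manifest = json.load(handle)
    for path, expected in manifest.items():
        if not os.path.exists(path):
            print(f"MISSING:  {path}")
        elif sha256_of(path) != expected:
            print(f"MODIFIED: {path}")

if __name__ == "__main__":
    # Hypothetical directory of device or application configuration files.
    baseline("/opt/clinical-app/config", "integrity-manifest.json")
    check("integrity-manifest.json")
```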

 

Security experts report that, ultimately, data correlation is the key. What healthcare organizations need is a system that enables threat identification, classification and system analysis, backed by a manual review process that offsets the blind spots of automation and provides a high degree of confidence about potential incidents.

 

With this in mind, how is your organization safeguarding against cyber threats? Do you rely on an in-house cybersecurity team, or has your organization partnered with a managed security service provider for this type of service?

Patient data and analytics are vital to the healthcare experience today. To learn more, we recently caught up with Dr. David J. Cook, professor of anesthesiology at Mayo Clinic, who also has an appointment in the engineering section of the Center for the Science of Healthcare Delivery.


Dr. Cook built MC Health Connection, a cloud-based architecture designed to alter care models and improve the patient experience. Using a tablet, patients, family members and physicians can track their progress with recovery following surgery. In the video below, Dr. Cook shares his thoughts on the three elements for changing care models.

 

Intel: How can wearables and big data work together to improve healthcare?

 

Cook: The first element in the evolution of care is in acquiring data from patients in non-intrusive ways that integrate with their daily lifestyles. We need to give patients the opportunity to share insights into their daily health cycles, which would lead to early detection of disease and ultimately improve the quality of their lives.

 

The second element is connecting patient-generated data to a gateway so that it can inform decisions. Data alone is not enough, and the clinical care model is not sufficient unless it is fed useful, actionable patient health data.

 

The third element is connecting that gateway to a healthcare infrastructure that is accessible to both patients and their healthcare providers. These elements are just beginning to work together to create an intelligent healthcare model.

 

Intel: What can you imagine for the future of healthcare?

 

Cook: We need to shift our thinking and be ready to participate in healthcare models that empower patients to contribute and engage in their own healthcare. The future is shifting away from a passive delivery model to one that focuses on real-time patient engagement. This is probably the fundamental philosophical and social transition that’s going to occur in healthcare.

 

The way we engage with the world is shifting how we live our daily lives—whether that’s in how we bank, plan our travel or decide where to eat or what to buy. It’s reasonable for patients to expect that we deliver healthcare models that connect to modern technologies that can greatly improve their health and longevity.

 

 

Intel: How have patient needs changed in the past 100 years?

 

Cook: In the past, there was a belief that illnesses were just something that happened to patients. Therefore, the responsibility for patient wellness fell entirely on someone who typically didn’t give much thought to preventative care. Now, that model is certainly suitable for acute appendicitis, or typhoid fever, or getting run over by a wagon, but that psychosocial model doesn’t work for diabetes. It doesn’t work for hypertension. It doesn’t work for obesity, which is among the ailments affecting the majority of the patients that we see today. That transition is incredibly important.

 

Intel: How is big data changing your approach to patient care?

 

Cook: Technology has changed what I do tremendously. It is radically changing the work experience of physicians, and its impact on my own work is extraordinary. I’m an anesthesiologist and I work in cardiac surgery—we get data on multiple physiologic parameters every second. When you have that much data it begins to add amazing amounts of value.

 

The amount of data that we have now provides a remarkable patient safety net. We can now pull data and identify certain patterns that require immediate physician attention. We didn’t have that in the past. This is a completely transformative way of delivering healthcare.

 

Intel: What keeps you up at night?

 

Cook: What keeps me up at night, more than anything else, is frustration at the slow pace forward. What is needed is so absolutely and clearly evident. Yet there seems to be an effort to reach a large comprehensive platform solution, as opposed to creating a variety of smaller solutions that you can test on a relatively small scale. It feels like every week and every month that goes by there’s this pressing need in the United States and elsewhere for cost-effective healthcare that’s of high quality. The way to that is relatively straightforward, I think.

Home healthcare practitioners need efficient, reliable access to patient information no matter where they go, so they need hardware solutions that meet their unique needs. Accessing critical patient information, managing patient files, multitasking seamlessly and locating a patient’s residence are daily tasks for mobile healthcare professionals. Mobile practitioners don’t have access to the same resources they would if they were working in a hospital, so the tools they use are that much more critical to accomplishing their workload. Fortunately, advances in mobile computing have created opportunities to bridge that gap.

 

An Evolved Tablet For Healthcare Providers

 

As tablets have evolved, they’ve become viable replacements for clunky laptops. Innovation in the mobile device industry has transformed these devices from media consumption platforms and calendar assistants into robust workhorses that run full-fledged operating systems. However, when it comes to meeting the needs of home healthcare providers, not all tablets are created equal.

                 

A recent Prowess Consulting comparison looked at two popular devices with regard to tasks commonly performed by home healthcare workers. The study compared an Apple® iPad Air™ and a Microsoft® Surface™ Pro 3 to determine which device offers a better experience for home healthcare providers, and ultimately, their patients.

 

Multitasking, Done Right

 

One of the biggest advantages of the Surface™ Pro 3 is its ability to let users multitask. For example, a healthcare worker can simultaneously load and display test results, charts, and prescription history via the device’s split-screen capabilities. A user trying to perform the same tasks on the iPad would run into the device’s limitations; there are no split-screen multitasking options on the iPad Air™.

 

The Surface™ Pro 3’s powerful multitasking, combined with its ability to natively run Microsoft Office, gives home healthcare providers the ability to focus more time on patient care and less time on administrative tasks. Better user experience, workflow efficiency, file access speed, and split-screen multitasking all point to the Microsoft® Surface™ Pro 3 as the better platform for home healthcare providers.

 

For a full rundown of the Surface™ Pro 3’s benefits to home healthcare workers, click here.

 

What questions about mobile tablets in healthcare do you have?

The growth of mobile healthcare is sometimes staggering to think about. In just a few short years we’ve seen advancements in everything from devices to EHRs to connectivity. While topics such as security, bring-your-own-device, and cloud are ever-present, the technologies that enable these activities are changing all the time.

 

Mobility is a given as today’s healthcare expands beyond institutions into more home-based and community care settings. Mobile technology can also help busy clinicians improve the quality of care and the efficiency of care delivery.

 

Next week’s mHealth Summit 2014 in Washington, D.C., promises to be an engaging event that will address the next wave of mobile healthcare. I’ve seen the growth of this event and am excited to hear the sessions and see the latest devices at the exhibition.

 

Intel will be on hand in booth #303 showing off a number of mHealth tools, including Dell mobile devices and Microsoft mobile apps. In addition, our experts will be participating in a variety of informative conference sessions, including:

 

Sunday, 12/7


mHealth Summit Privacy & Security Symposium, 1:45 – 2:30 pm

Risky Business: Mitigating mHealth Workarounds with “Usable” Security

Healthcare security incidents and breaches have reached alarming frequency and impact. Better quality and lower cost healthcare depends on minimizing privacy and security risks and incidents. We need patient care *with* security. Intel Privacy & Security Lead David Houlding will participate as a presenter.

 

Monday, 12/8


Luncheon Panel, 12:30 – 2:00 pm

Going mobile is no longer optional. Clinicians realize that to drive improvements in clinical efficiency and patient outcomes, mobility is required to enable care to be delivered anywhere at any time. It takes the right mobile devices, software, security and improved workflows to successfully deploy a mobile health strategy. Windows 8 features the state-of-the-art user experience for touch tablets that clinicians demand, and the manageability and security that IT departments require. Ben Wilson, Director of Mobile Health at Intel Corporation, will lead a panel of the industry's leading healthcare providers in discussing mobile health success stories and why these customers have chosen Windows* 8 as their mobile platform of choice. Speakers: Will Morris, MD, Cleveland Clinic, Bradley Dick, CIO, Resurgens Orthopaedic and Shiv Rao, MD, Cardiologist, University of Pittsburgh Medical Center.

 

Partnerships for the Future of Population Health, 2:30 – 3:30 pm, National Harbor 10-11

This session will address innovative partnerships or projects that are attempting to develop new standards of care or provide insight into diseases/conditions in specific patient populations through novel collaborations for data sharing or analytics. Matt Quinn from Intel and Lona Vincent, Senior Associate Director of Research Partnerships at the Michael J. Fox Foundation will participate.

 

Public mHealth – Insights on Program Development and Implementation, 3:45– 4:45 pm, Room Maryland A

Intel’s Matthew Taylor participates in a session that will examine case studies tackling major public health problems, from childhood obesity, sexually transmitted infections, and child feeding habits to determining the training and technology costs for preparing frontline health workers in mHealth programs.

 

Tuesday, 12/9


Future of Global mHealth, Potomac Ballroom, 9:50 am – 10:15 am

What are the challenges and opportunities for leveraging mobile to deliver healthcare in low-resource environments around the world? Can mobile level the playing field for a more equitable healthcare access and distribution of healthcare resources in the future? In this fireside chat, Lester Russell, senior director for health and life sciences for Intel in EMEA, will discuss key issues shaping the future of global mHealth, such as scalability, market opportunities, policy, key technologies, infrastructure, and the role of public-private partnerships.

 

Wednesday, 12/10


Pharma Roundtable, 11:45 am – 4:00 pm, Potomac 1-2

Intel’s Matt Quinn participates in the Second Annual mHealth Summit Pharmaceutical, Pharmacy and Life Sciences Roundtable, which is dedicated to an open exchange of cross-sector insights for advancing outcomes-driven mobile and connected health strategies and reducing barriers to adoption. The Roundtable seeks to identify opportunities for collaboration and commitment to the development of high-impact mobile and connected health initiatives, which foster patient and caregiver involvement, facilitate informed and shared decision making, and demonstrate improvements in treatment, care and outcomes.

 

We look forward to seeing you at mHealth 2014. What questions about mobile healthcare technology do you have?

 

Based on what we heard at Supercomputing last month, it’s clear that bio IT research is on the fast track and in search of more robust compute power.

 

In the above video, Michael J. Riener, Jr., president of RCH Solutions, talks about dynamic changes coming to the bio IT world in the next 24 months. He says that shrinking budgets in research and development mean that more cloud applications and service models will be implemented. When it comes to big data, next-generation sequencing will heighten the need to analyze data, to determine what data to keep and what to discard, and to decide how best to process it.

 

Watch the clip and let us know what questions you have. What changes do you want to see in bio IT research?

Frustration with electronic health record (EHR) systems notwithstanding, the data aggregation processes that have grown out of healthcare’s adoption of the electronic health record are now spawning analytical capabilities that were unthinkable just 15 years ago. By leveraging big data to track everything from patient recovery rates to hospital finances, healthcare organizations are capturing and storing data sets that are changing the way doctors, caregivers and payers tackle larger scale health issues.

 

It’s not just happening on the clinical side, either, where EHRs are extending real-time patient information to doctors and predictive analytics are helping physicians to better track and understand their patients' medical conditions.

 

In Kentucky, for example, tech investments by the state’s largest provider systems are estimated at over $600 million, a number that doesn’t even reflect investments from two of the biggest local organizations, Baptist Health and University of Kentucky HealthCare. The data collected by these hospitals includes—and far exceeds—the EMR basics mandated under ARRA, according to an article in The Lane Report.

 

While the goal of improving quality of care is, of course, a key driver of such investments, so is the government mandate tying Medicare and Medicaid reimbursement to outcomes. According to a recent report from McKinsey & Company, more than 50 percent of doctors’ offices and almost 75 percent of hospitals nationwide are managing patient information electronically. So, it’s not surprising that big data is catching the attention of healthcare’s management teams.

 

By quantifying and analyzing an endless variety of metrics—including things like R&D, claims, costs, and insights gleaned from patients—the industry is refining its approach to both preventative care and treatment, and saving money in the process. A good example can be found in the analysis of data surrounding regression rates, which some hospitals are now using to stave off premature releases and, by extension, exorbitant penalties.

 

Others, such as Brigham and Women’s Hospital, already are applying algorithms to generate savings beyond readmissions, in areas that include: high-cost patients, triage, decompensation, adverse events, and treatment optimization.

 

While there’s room to debate the extent to which big data is improving patient outcomes—or the scope of savings attributable to big data initiatives given the associated system costs—the trend toward leveraging data for better outcomes and savings will only continue to grow as CIOs advance meaningful implementations of solutions, and major technology companies continue to expand the industry’s basket of options.

 

How is your healthcare organization applying big data to overcome challenges? Have the results proven worthwhile?

 

As a B2B journalist, John Farrell has covered healthcare IT since 1997 and is a sponsored correspondent for Intel Health & Life Sciences.

Read John’s other blog posts

Clinicians are on the front lines when it comes to using healthcare technology. To get a doctor’s perspective on health IT, we caught up with Dr. Sandhya Pruthi, medical director for patient experience, breast diagnostic clinic, at Mayo Clinic Rochester, for her thoughts on telemedicine and the work she has been undertaking with remote patients in Alaska.

 


 

Intel: How are you involved in virtual care?

 

Pruthi: I have a very personal interest in virtual care. I have been providing telemedicine care to women in Anchorage, Alaska, right here from my telemedicine clinic in Rochester, Minnesota. I have referrals from providers in Anchorage who ask me to meet their patients using virtual telemedicine. We call it our virtual breast clinic, and we’ve been offering the service twice a month for the past three years.

 

Intel: What services do you provide through telemedicine?

 

Pruthi: We know that in some remote parts of the country, it’s hard to get access to experts. What I’ve been able to provide remotely is medical counseling for women who are considered high risk for breast cancer. I remotely counsel them on breast cancer prevention and answer questions about genetic testing for breast cancer when there is a very strong family history. The beauty is that I get to see them and they get to see me, rather than just writing out a note to their provider and saying, “Here’s what I would recommend that the patient do.”

 

Intel: How have patients and providers in Alaska responded to telemedicine?

 

Pruthi: We did a survey and asked patients about their experience and whether they felt that they received the care they were expecting when they came to a virtual clinic. The result was 100 percent satisfaction by the patients. We also surveyed the providers and asked if their needs were met through the referral process. The results were that providers said they were very pleased and would recommend the service again to their patients.

 

Intel: Where would you like to see telemedicine go next?

 

Pruthi: The next level that I would love to see is the ability to go to the remote villages in the state of Alaska, where people have an even harder time coming to a medical center. I’d also like to be able to have a pre-visit with patients who may need to come in for treatment so we can better coordinate their care before they arrive.

 

Intel: When it comes to telemedicine, what keeps you up at night?

 

Pruthi: Thinking about how we can improve the patient experience. I really feel that for a patient who is dealing with an illness, the medical experience should wow them. It should be worthwhile to the patient and it should follow them on their entire journey—when they make their appointment, when they meet with their physician, when they have tests done in the lab, when they undergo procedures. Every step plays a role in how they feel when they go home. That’s what we call patient-centered care.

This guest blog is by Sanchit Misra, Research Scientist, Intel Labs, Parallel Computing Lab, who will be presenting a paper by Intel and Georgia Tech this week at SC14.

 

Did you know that the process of winemaking relies on yeast optimizing itself for survival? When we put yeast in a sugar solution, it turns on genes that produce the enzymes that convert sugar molecules to alcohol. The yeast cell makes a living from this process (by gaining energy to multiply) and humans get wine.

 

This process of turning on a gene is called expression. The genes that an organism can express are all encoded in its DNA. In multi-cellular organisms like humans, the DNA of each cell is the same, but cells in different parts of the body express different genes to perform the corresponding functions. A gene also interacts with several other genes during the execution of a biological process. These interactions, modeled mathematically using “gene networks,” are not only essential in developing a holistic understanding of an organism’s biological processes, they are invaluable in formulating hypotheses to further the understanding of numerous interesting biological pathways, thus playing a fundamental role in accelerating the pace and diminishing the costs of new biological discoveries. This is the subject of a paper presented at the SC14 by Intel Labs and Georgia Tech.

 

Owing to the importance of the problem, numerous mathematical modeling techniques have been developed to learn the structure of gene networks. There appears, not surprisingly, to be a correlation between the quality of learned gene networks and the computational burden imposed by the underlying mathematical models. A gene network based on Bayesian networks is of very high quality but requires a lot of computation to construct. To understand Bayesian networks, consider the following example.

 

A patient visits a doctor for diagnosis with symptoms A, B and C. The doctor says there is a high probability that the patient is suffering from ailment X or Y and recommends further tests to zero in on one of them. What the doctor does is an example of probabilistic inference, in which the probability that a variable has a certain value is estimated based on the values of other related variables. Inference that is based on Bayes’ rule of probability is called Bayesian inference. The relationships between variables can be stored in the form of a Bayesian network. Bayesian networks are used in a wide range of fields, including science, engineering, philosophy, medicine, law and finance. In the case of gene networks, the variables are genes, and the corresponding Bayesian network models, for each gene, which other genes are related to it and the probability that it is expressed given the expression values of those related genes.
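To make that concrete, here is a minimal sketch (with invented probabilities, purely for illustration) of a tiny Bayesian network in which one gene's expression depends on two regulator genes; the conditional probability table plays the role described above, and the final step is the Bayesian inference itself:

```python
import itertools

# Toy conditional probability table (CPT): probability that target gene G
# is expressed, given the on/off states of two regulator genes A and B.
# All numbers are invented for illustration.
cpt_g = {
    (True, True): 0.90,
    (True, False): 0.60,
    (False, True): 0.55,
    (False, False): 0.05,
}
p_a = 0.30  # prior probability that regulator A is expressed
p_b = 0.40  # prior probability that regulator B is expressed

def joint(a, b, g):
    """P(A=a, B=b, G=g) under the network A -> G <- B."""
    pa = p_a if a else 1 - p_a
    pb = p_b if b else 1 - p_b
    pg = cpt_g[(a, b)] if g else 1 - cpt_g[(a, b)]
    return pa * pb * pg

# Marginal probability that G is expressed: sum over all regulator states.
p_g = sum(joint(a, b, True) for a, b in itertools.product([True, False], repeat=2))

# Bayesian inference: having observed that G is expressed, update our belief
# about regulator A, much as the doctor updates a diagnosis from symptoms.
p_a_given_g = sum(joint(True, b, True) for b in (True, False)) / p_g

print(f"P(G expressed)                = {p_g:.3f}")
print(f"P(A expressed | G expressed)  = {p_a_given_g:.3f}")
```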

 

Through a collaboration between Intel Labs’ Parallel Computing Lab and researchers at Georgia Tech and IIT Bombay, we now have the first-ever genome-scale approach for constructing gene networks using Bayesian network structure learning. We have demonstrated this capability by constructing the whole-genome network of the plant Arabidopsis thaliana from over 168.5 million gene expression values by computing a mathematical function 7.3 trillion times with different inputs. For this, we collected a total of 11,760 Arabidopsis gene expression datasets (from the NASC, AtGenExpress and GEO public repositories). A problem of this scale would have consumed about six months using the previous state-of-the-art solution. We can now solve the same problem in less than 3 minutes!

 

To achieve this, we not only scaled the problem to a much bigger machine (1.5 million cores of the Tianhe-2 supercomputer, with 28 PFLOP/s peak performance), but also applied algorithm-level innovations, including the avoidance of redundant computation, a novel parallel work decomposition technique and dynamic task distribution. We also made implementation optimizations to extract maximum performance out of the underlying machine.
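The actual implementation described in the paper runs at supercomputer scale; purely as a small-scale illustration of the dynamic task distribution idea, a worker pool can pull scoring tasks from a shared queue so that fast workers never sit idle (the score_family function below is a made-up stand-in for the real per-gene scoring function):

```python
from multiprocessing import Pool
import math

def score_family(task):
    """Made-up stand-in for the per-(gene, candidate parent set) score that
    structure learning evaluates trillions of times at genome scale."""
    gene, parent_set = task
    return gene, parent_set, -math.log1p(len(parent_set) + gene % 7)

def build_tasks(num_genes):
    """Enumerate a toy task list: each gene paired with single-parent candidates."""
    for gene in range(num_genes):
        for parent in range(num_genes):
            if parent != gene:
                yield gene, (parent,)

if __name__ == "__main__":
    tasks = list(build_tasks(num_genes=50))
    best = {}
    # imap_unordered hands out tasks dynamically, so fast workers keep pulling
    # new work instead of idling behind slow ones -- a small-scale analogue of
    # dynamic task distribution across the nodes of a large cluster.
    with Pool(processes=4) as pool:
        for gene, parents, score in pool.imap_unordered(score_family, tasks, chunksize=64):
            if gene not in best or score > best[gene][1]:
                best[gene] = (parents, score)
    print(f"kept the best-scoring parent set for {len(best)} genes")
```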

 


 


 

  • (Top) Root Development subnetwork; (Bottom) Cold Stress subnetwork

 

Using our software, we generated gene regulatory networks for several datasets - subsets of the Arabidopsis dataset - and validated them using existing knowledge from the TAIR (The Arabidopsis Information Resource) database. As a demonstration of their validity, and of how genome-scale networks can be used to aid biological research, we conducted the following experiment. We picked the genes that are known to be involved in root development and cold stress and randomly picked a subset of those genes (red nodes in the above figures). We took the whole-genome network generated by our software for Arabidopsis and extracted subnetworks that contain our randomly picked subset of genes and all the other genes that are connected to them. The extracted subnetworks contain a rich presence of other genes known to be in the respective pathways (green nodes) and closely associated pathways (blue nodes), serving as a validation test. The nodes shown in yellow are genes with no known function. Their presence in the root development subnetwork indicates they might function in the same pathway. The biologists at Georgia Tech are performing experiments to see if the genes corresponding to yellow nodes are indeed involved in root development. Similar experiments are being conducted for several other biological processes.

 

Arabidopsis is a model plant for which NSF launched a 10-year initiative in 2000 to find the functions of all of its genes, yet the functions of 40 percent of its genes are still not known. This method can help accelerate the discovery of the functions of the remaining genes. Moreover, it can easily be scaled to other species, including human beings. Understanding how genes function and interact with each other in a broad variety of organisms can pave the way for new medicines and treatments. We can also compare gene networks across organisms to enhance our understanding of the similarities and differences between them, ultimately aiding in a deeper understanding of evolution.

 

What questions do you have?

 

With SC14 kicking off today, it’s timely to look at how high performance computing (HPC) is impacting today’s valuable life sciences research. In the above podcast, Dr. Rudy Tanzi, the Joseph P. and Rose F. Kennedy Professor of Neurology at Harvard Medical School and the Director, Genetics and Aging Research Unit at the MassGeneral Institute for Neurodegenerative Disease, talks about his pioneering research in Alzheimer’s disease and how HPC is critical to the path forward.

 

Listen to the conversation and hear how Dr. Tanzi says HPC still has a ways to go to provide the compute power that life sciences researchers need. What do you think?

 

What questions about HPC do you have? 

 

If you’re at SC14, remember to come by the Intel booth (#1315) for life sciences presentations in the Intel Community Hub and Intel Theater. See the schedules here.

More than eight months have now passed since Illumina announced the long-expected arrival of the $1,000 genome with the launch of the HiSeq X Ten sequencing instrument, which has also been heralded as opening a new era in high-throughput sequencing focused on a new wave of population-level genomic studies.

 

The magic $1,000 genome

In order to keep costs down to the “magic” $1,000 level, a full HiSeq X Ten installation must plow through some 18,000 full human genomes per year, which means completing a full run every 32 minutes. At such a high volume, the next very important question arises:

 

What does it take to keep up with such a high throughput on the data analysis side?

 

According to Illumina’s “HiSeq X Ten Lab Setup and Site Prep Guide (15050093 E)”, the data-analysis requirement is a compute cluster with 134 compute nodes (16 CPU cores @ 2.0 GHz, 128 GB of memory, 6 x 1 terabyte (TB) hard drives), based on an analysis pipeline consisting of the tools BWA+GATK.

 

At QIAGEN Bioinformatics we decided to take on the challenge of benchmarking this, using a workflow of tools (Trim, QC for sequencing reads, Read Mapping to Reference, Indels and Structural Variants, Local Re-alignment, Low Frequency Variant Detection, QC for Read Mapping) on CLC Genomics Server (http://www.clcbio.com/products/clc-genomics-server/) running on a compute cluster with the Intel® Enterprise Edition for Lustre* filesystem, InfiniBand, Intel® Xeon® Processor E5-2697 v3 @ 2.60GHz, 14 CPU cores, 64GB of memory, and Intel® SSD DC S3500 Series 800GB.

 

We based our tests on a publicly available HiSeq X Ten dataset, and we have reached the conclusion that, with these specifications, we can keep pace with the instrument using a compute cluster of only 61 compute nodes.
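A quick back-of-the-envelope check (our own illustrative arithmetic, not a vendor specification) shows what keeping pace implies per genome and how the two cluster sizes compare:

```python
GENOMES_PER_YEAR = 18_000
MINUTES_PER_YEAR = 365 * 24 * 60                    # 525,600 minutes

# Average analysis budget per genome if the pipeline is to keep pace
# with a fully loaded HiSeq X Ten installation.
minutes_per_genome = MINUTES_PER_YEAR / GENOMES_PER_YEAR
print(f"~{minutes_per_genome:.0f} minutes per genome")      # ~29 minutes

# Relative cluster size: Illumina's guide specifies 134 nodes; our
# benchmark kept pace with 61 nodes of the configuration listed above.
illumina_nodes, benchmark_nodes = 134, 61
reduction = 1 - benchmark_nodes / illumina_nodes
print(f"~{reduction:.0%} fewer compute nodes")              # roughly 54% fewer
```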

 

Lower cost of ownership

Given our much lower compute node needs, these results can have a significant positive impact on the total cost of ownership of the compute infrastructure for a HiSeq X Ten customer, which includes hardware, cooling, space, power, and systems maintenance, to name a few of the variable costs.

 

What questions do you have?

 

Mikael Flensborg is Director, Global Partner Relations at CLC bio, a Qiagen Company.

As SC14 approaches, we have invited industry experts to share their views on high performance computing and life sciences. Below is a guest post from Eldon M. Walker, Ph.D., Director, Research Computing at Cleveland Clinic's Lerner Research Institute. During SC14, Eldon will be sharing his thoughts on implementing a high performance computing cluster at the Intel booth (#1315) on Tuesday, Nov. 18, at 10:15 a.m. in the Intel Theater.


When data analyses grind to a halt due to insufficient processing capacity, scientists cannot be competitive. When we hit that wall at the Cleveland Clinic Lerner Research Institute, my team began considering the components of a solution, the cornerstone of which was a high performance computing (HPC) deployment.

 

In the past 20 years, the Cleveland Clinic Lerner Research Institute has progressed from a model of wet lab biomedical research that produced modest amounts of data to a scientific data acquisition and analysis environment that puts profound demands on information technology resources. This manifests as the need for the availability of two infrastructure components designed specifically to serve biomedical researchers operating on large amounts of unstructured data:

 

  1. A storage architecture capable of holding the data in a robust way
  2. Sufficient processing horsepower to enable the data analyses required by investigators

 

Deployment of these resources assumes the availability of:

 

  1. A data center capable of housing power- and cooling-hungry hardware
  2. Network resources capable of moving large amounts of data quickly

 

These components were available at the Cleveland Clinic in the form of a modern, Tier 3 data center and ubiquitous 10 Gb/sec and 1 Gb/sec network service.

 

The storage problem was brought under control by way of a 1.2 petabyte grid storage system in the data center that replicated to a second 1.2 petabyte system in the Lerner Research Institute server room facility. The ability to store and protect the data was the required first step in maintaining the fundamental capital (data) of our research enterprise.

 

It was equally clear to us that the type of analyses required to turn the data into scientific results had overrun the capacity of even high end desktop workstations and single unit servers of up to four processors. Analyses simply could not be run or would run too slowly to be practical. We had an immediate unmet need in several data processing scenarios:

 

  1. DNA Sequence analysis
    1. Whole genome sequence
      1. DNA methylation
    2. ChIP-seq data
      1. Protein – DNA interactions
    3. RNA-seq data
      1. Alternative RNA processing studies
  2. Finite Element Analysis
    1. Biomedical engineering modeling of the knee, ankle and shoulder
  3. Natural Language Processing
    1. Analysis of free text electronic health record notes

 

There was absolutely no question that an HPC cluster was the proper way to provide the necessary horsepower that would allow our investigators to be competitive in producing publishable, actionable scientific results. While a few processing needs could be met using offsite systems where we had collaborative arrangements, an internal resource was appropriate for several reasons:

 

  1. Some data analyses operated on huge datasets that were impractical to transport between locations.
  2. Some data must stay inside the security perimeter.
  3. Development of techniques and pipelines would depend on the help of outside systems administrators and change control processes that we found cumbersome; the sheer flexibility of an internal resource built with responsive industry partners was very compelling based on considerable experience attempting to leverage outside resources.
  4. Given that we had the data center, network and system administration resources, and given the modest price-point, commodity nature of much of the HPC hardware (as revealed by our due diligence process), the economics of obtaining an HPC cluster were practical.

 

Given the realities we faced and after a period of consultation with vendors, we embarked on a system design in collaboration with Dell and Intel. The definitive proof of concept derived from the initial roll out of our HPC solution is that we can run analyses that were impractical or impossible previously.

 

What questions do you have? Are you at the point of considering an internal HPC cluster?

As SC14 approaches next week, we have invited industry experts to share their views on high performance computing and life sciences. Below is a guest post from Charles Shiflett, senior software engineer at Aspera. Charles will be sharing his thoughts on ultra high speed WAN transfers during SC14 at the Intel booth (#1315) on Tuesday, Nov. 18 at 12:30 p.m. in the Intel Theater and on Wednesday at 2 p.m. in the Intel Community Hub.

 

Research is running into a problem where the amount of data being generated is growing faster than what can be analyzed and stored using traditional tools and architectures. This has led to an explosion of technologies and tools for processing data that take advantage of the parallelism inherent in compute clusters and/or cloud environments. What hasn't improved at a commensurate rate is storage and network throughput.

 

The reason storage and network throughput hasn't improved is, in part, a sense of complacency. In our traditional computing model, people think of the internet (or network) as slow, disks as somewhat faster, and memory as screaming fast. This is completely wrong in an HPC environment, but it is the world we are stuck in when we go through traditional interfaces, which were designed with this paradigm in mind. Achieving breakthrough performance requires a new approach.

 

Using commodity Intel® hardware we were able to develop a novel solution (termed next-generation Aspera FASP) which bypasses traditional kernel layers in storage and I/O. While this solution is still under development, we have already been able to show over 90 percent utilization of a transfer solution using two 40 Gbit/s network cards for a total of 80 Gbit/s. This equates to disk-to-disk performance of about 8 GB/s, or the ability to transfer 1 TB of data in about 2 minutes. At Supercomputing 14, Aspera will be showcasing next-generation FASP in operation over a WAN environment, where we will transfer 250 GB of data in about 20 seconds from the show floor to Chicago and back.
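As a rough illustration (ignoring protocol and storage overhead, so the numbers differ slightly from the measured disk-to-disk figures above), the headline times follow directly from the link rate:

```python
# Two 40 Gbit/s network cards at ~90 percent utilization.
link_gbit_s = 2 * 40
utilization = 0.90
goodput_gb_s = link_gbit_s * utilization / 8        # gigabytes per second, ~9 GB/s

def transfer_seconds(gigabytes):
    """Idealized wall-clock time to move `gigabytes` at the sustained goodput."""
    return gigabytes / goodput_gb_s

print(f"goodput  ~{goodput_gb_s:.1f} GB/s")                    # measured disk-to-disk is ~8 GB/s
print(f"1 TB     ~{transfer_seconds(1000) / 60:.1f} minutes")  # on the order of 2 minutes
print(f"250 GB   ~{transfer_seconds(250):.0f} seconds")        # tens of seconds
```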

 

As the developer of this solution, what excites me most is the benefit our customers will get from a high-speed, block-based transfer solution that not only solves WAN transfer needs but does it in a way that is secure (every packet is encrypted with AES-128), runs on commodity Intel hardware, and is equally applicable in both LAN and WAN environments. Our future plans are to provide this solution in a way that integrates with high performance compute packages (such as Spark or Hadoop) and high performance storage (think Lustre or GPFS), while continuing to build upon the Intel and IBM Aspera technologies that have made this solution possible.

 

What questions do you have?

As SC14 approaches, we have invited industry experts to share their views on high performance computing and life sciences. Below is a guest post from Ari E. Berman, Ph.D., Director of Government Services and Principal Investigator at BioTeam, Inc. Ari will be sharing his thoughts on high performance infrastructure and high speed data transfer during SC14 at the Intel booth (#1315) on Wednesday, Nov. 19, at 2 p.m. in the Intel Community Hub and at 3 p.m. in the Intel Theater.


There is a ton of hype these days about Big Data, both about what the term actually means and about what it takes to reach the point of discovery in all that data.

 

The biggest issue right now is the computational infrastructure needed to get to that mythical Big Data discovery place everyone talks about. Personally, I hate the term Big Data. The term “big” is very subjective and in the eye of the beholder. It might mean 3 PB (petabytes) of data to one person, or 10 GB (gigabytes) to someone else.

 

From my perspective, the thing that everyone is really talking about with Big Data is the ability to take the sum total of data that’s out there for any particular subject, pool it together, and perform a meta-analysis on it to more accurately create a model that can lead to some cool discovery that could change the way we understand some topic. Those meta-analyses are truly difficult and, when you’re talking about petascale data, require serious amounts of computational infrastructure that is tuned and optimized (also known as converged) for your data workflows. Without properly converged infrastructure, most people will spend all of their time just figuring out how to store and process the data, without ever reaching any conclusions.

 

Which brings us to life sciences. Until recently, life sciences and biomedical research could really be done using Excel and simple computational algorithms. Laboratory instrumentation really didn’t create that much data at a time, and it could be managed with simple, desktop-class computers and everyday computational methods. Sure, the occasional group was able to create enough data to require some mathematical modeling or advanced statistical analysis or even some HPC, and molecular simulations have always required a lot of computational power. But, in the last decade or so, the pace of advancement of laboratory equipment has left a large swath of biomedical research scientists overwhelmed by the amount of data being produced.

 

The decreased cost and increased speed of laboratory equipment, such as next-generation sequencers (NGS) and high-throughput high-resolution imaging systems, has forced researchers to become very computationally savvy very quickly. It now takes rather sophisticated HPC resources, parallel storage systems, and ultra-high speed networks to process the analytics workflows in life sciences. And, to complicate matters, these newer laboratory techniques are paving the way towards the realization of personalized medicine, which carries the same computational burden combined with the tight and highly subjective federal restrictions surrounding the privacy of personal health information (PHI).  Overcoming these challenges has been difficult, but very innovative organizations have begun to do just that.

 

I thought it might be useful to very briefly discuss the three major trends we see having a positive effect on life sciences research:

 

1. Science DMZs: There is a rather new movement towards the implementation of specialized research-only networks that prioritize fast and efficient data flow while still maintaining security, also known as the Science DMZ model (http://fasterdata.es.net). These implementations are making it easier for scientists to get around tight enterprise networking restrictions without blowing the security policies of their organizations, so that scientists can move their data effectively without upsetting their compliance officers.


2. Hybrid Compute/Storage Models: There is a huge push to move towards cloud-based infrastructure, but organizations are realizing that too much persistent cloud infrastructure can be more costly in the long term than local compute. The answer is the implementation of small local compute infrastructures to handle the really hard problems and the persistent services, hybridized with public cloud infrastructures that are orchestrated to be automatically brought up when needed, and torn down when not needed; all managed by a software layer that sits in front of the backend systems. This model looks promising as the most cost-effective and flexible method that balances local hardware life-cycle issues with support personnel, as well as the dynamic needs of scientists.


3. Commodity HPC/Storage: The biggest trend in life sciences research is the push towards the use of low-cost, commodity, white box infrastructures for research needs. Life sciences has not reached the sophistication level that requires true capability supercomputing (for the most part), thus, well-engineered capacity systems built from white-box vendors provide very effective computational and storage platforms for scientists to use for their research. This approach carries a higher support burden for the organization because many of the systems don’t come pre-built or supported overall, and thus require in-house expertise that can be hard to find and expensive to retain. But, the cost balance of the support vs. the lifecycle management is worth it to most organizations.

 

Biomedical scientific research is the latest in the string of scientific disciplines that require very creative solutions to their data generation problems. We are at the stage now where most researchers spend a lot of their time just trying to figure out what to do with their data in the first place, rather than getting answers. However, I feel that the field is at an inflection point where discovery will start pouring out as the availability of very powerful commodity systems and reference architectures come to bear on the market. The key for life sciences HPC is the balance between effectiveness and affordability due to a significant lack of funding in the space right now, which is likely to get worse before it gets better. But, scientists are resourceful and persistent; they will usually find a way to discover because they are driven to improve the quality of life for humankind and to make personalized medicine a reality in the 21st century.

 

What questions about HPC do you have?

What better place to talk life sciences big data than the Big Easy? As temperatures are cooling down this month, things are heating up in New Orleans where Intel is hosting talks on life sciences and HPC next week at SC14. It’s all happening in the Intel Community Hub, Booth #1315, so swing on by and hear about these topics from industry thought leaders:

 

Think big: delve deeper into the world’s biggest bioinformatics platform. Join us for a talk on the CLC bio enterprise platform, and learn how it integrates desktop interfaces with high performance cluster resources. We’ll also discuss hardware and explore the scalability requirements needed to keep pace with the Illumina HiSeq X Ten sequencer platform, and with a production cluster environment based on the Intel® Xeon® processor E5-2600 v3. When: Nov. 18, 3-4 p.m.

 

Special Guests:

Lasse Lorenzen, Head of Platform & Infrastructure, Qiagen Bioinformatics;

Shawn Prince, Field Application Scientist, Qiagen Bioinformatics;

Mikael Flensborg, Director Global Partner Relations, Qiagen Bioinformatics

 

Find out how HPC is pumping new life into the Living Heart Project. Simulating diseased states and personalizing medical treatments require significant computing power. Join us for the latest updates on the Living Heart Project, and learn how creating realistic multiphysics models of human hearts can lead to groundbreaking approaches to both preventing and treating cardiovascular disease. When: Nov. 19, 1-2 p.m.

 

Special Guest: Karl D’Souza, Business Development, SIMULIA Asia-Pacific

 

Get in sync with scientific research data sharing and interoperability. In 1989, the quest for global scientific collaboration helped lead to the birth of what we now call the Internet. In this talk, Aspera and BioTeam will discuss where we are today with new advances in global scientific data collaboration. Join them for an open discussion exploring the newest offerings for high-speed data transfer across scientific research environments. When: Nov. 19, 2-3 p.m.

 

Special Guests:

Ari E. Berman, PhD, Director of Government Services and Principal Investigator, BioTeam;

Aaron Gardner, Senior Scientific Consultant, BioTeam;

Charles Shiflett, Software Engineer, Aspera

 

Put cancer research into warp speed with new informatics technology. Take a peek under the hood of the world’s first comprehensive, user-friendly, and customizable cancer-focused informatics solution. The team from Qiagen Bioinformatics will lead a discussion on CLC Cancer Research Workbench, a new offering for the CLC Bio Cancer Genomics Research Platform. When: Nov. 19, 3-4 p.m.

 

Special Guests:

Shawn Prince, Field Application Scientist, Qiagen Bioinformatics;

Mikael Flensborg, Director Global Partner Relations, Qiagen Bioinformatics

 

You can see more Intel activities planned for SC14 here.

 

What are you looking forward to seeing at SC14 next week?
