
Intel Health & Life Sciences


The “Internet of Things” (IoT) has exciting near-term prospects in healthcare.  But what does that mean, and how can we most efficiently realize its potential?


Healthcare IoT can take many forms.  Here, we’re referring to sensors deployed on or inside a human body that send their data readings to the cloud, which then communicates processed data to clinicians for action.


It sounds straightforward, especially if you’re a technologist, because most of the words in the previous sentence are technology words: “sensor,” “data,” “cloud,” “communicate,” and “process.”


But notice that other word: “action.”  It’s the last word because it’s the system’s entire reason for being.  If you’re designing your IoT system, and you don’t have a clear idea of what the actions are, how well they work, and, crucially, how the data are tied to the actions, then pause.


What’s Being Tried?


Let’s take an example: the recently published BEAT-HF study of heart failure patients.  All patients got their usual care, but half were randomly selected to additionally get coaching telephone calls plus an IoT solution that acquired daily blood pressure, weight, and oxygen saturation – exactly the parameters cardiologists follow in their heart failure patients.


Unfortunately, the trial showed no benefit of the IoT solution.  Compared to the control group, the IoT patients died just as often, and they came into the hospital just as often.  This is not the first trial to show such a failure, and it is fortunate that BEAT-HF did not actively harm the subjects by wasting physician time and distracting clinicians from interventions that could actually benefit patients.


A Better Mouse-Trap


But now let’s look at a different system, also aimed at heart failure patients.  Here, a small Bluetooth-enabled pressure sensor is placed into the pulmonary artery via catheter.  (Pressure in the pulmonary artery is a key indicator of heart failure.)  Once a day the patient lies quietly in bed, near a Bluetooth receiver, and the sensor’s measurements of pulmonary artery pressure are sent to the cloud, and then to the cardiologist’s office.
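The loop described above (sensor to cloud to clinician action) can be sketched in a few lines of Python. The field names, payload shape, and 35 mmHg review threshold below are hypothetical illustrations of the pattern, not the actual device’s protocol.

```python
# Hypothetical sketch of the daily sensor-to-clinician loop. Field names and
# the alert threshold are illustrative assumptions, not a real device protocol.

def process_reading(reading, alert_threshold_mmhg=35):
    """Decide whether a daily pulmonary artery pressure reading
    should be flagged for clinician action."""
    pressure = reading["pa_mean_mmhg"]
    return {
        "patient_id": reading["patient_id"],
        "pa_mean_mmhg": pressure,
        # The "action" step: flag readings a cardiologist should review.
        "needs_review": pressure >= alert_threshold_mmhg,
    }

daily_reading = {"patient_id": "pt-001", "pa_mean_mmhg": 38}
result = process_reading(daily_reading)
print(result["needs_review"])  # elevated pressure is flagged for review
```

The point of the sketch is the last field: the system exists to produce a clinician-facing action, not merely to move data.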


In a randomized study of 550 patients, the patients who received the pressure sensor had their medications changed by the cardiologist 250% more times than the control group.  That is not a typo: 250%, a remarkable change in the “action” step. But did all that extra “action” help? Yes!  Patients with the pressure system experienced 43% fewer deaths, and 57% fewer heart failure hospital admissions.  The word “spectacular” underestimates this accomplishment, especially given the statistic that, among fee-for-service Medicare enrollees, heart failure is responsible for 39% of all deaths, and for 42% of all hospital admissions.




If you are designing an IoT system for healthcare, what lessons can you draw?


  1. Sensor choice matters.  A lot. Try to obtain data from the core of the disease process, not peripheral or indirect indicators.
  2. Merely increasing the data collection frequency, as BEAT-HF tried, may not be beneficial. “Big data” is not a panacea.  Data quantity may not make up for only marginal improvements in data quality.
  3. Patient choice matters.  BEAT-HF failed in its general population of heart failure patients, but might have succeeded with certain subgroups of patients.  For example, patients having both heart failure and depression might disproportionately benefit from the Hawthorne effect (increased attention) that telemonitoring can provide.
  4. Test your system with a randomized trial.  It is increasingly clear that other study designs are unreliable when evaluating tele-health systems.


Although technology terms may dominate the definition of a healthcare IoT system, the single clinical word, “action,” dominates its success.

Ransomware has reached headlines lately, with several healthcare organizations globally falling victim, as seen in As Ransomware Crisis Explodes, Hollywood Hospital Coughs Up $17,000 In Bitcoin. Breaches are top of mind in healthcare when it comes to security and privacy, and among the many types of breaches, ransomware has been the highest priority across most healthcare organizations I have worked with over the last six months.


Compliance with regulations, laws and standards is important, but increasingly organizations realize they need to go well beyond basic regulatory compliance to effectively mitigate the risk of breaches, and they are motivated up to the board level by the strong desire not to be the next breach or ransomware victim and headline.


While most security concerns to date have revolved around breaches of confidentiality, or unauthorized access to patient information, ransomware is not a breach of confidentiality, but rather of availability. In security speak, “availability” is timely and reliable access to patient information. Ransomware prevents access to patient information by encrypting this information and withholding the decryption key until a ransom is paid. Exacerbating this, paying a ransom is no guarantee of provision of the decryption key.


As we have seen, this can compromise mission-critical services to the point where hospitals need to turn patients away. Healthcare is particularly vulnerable to this type of breach because healthcare organizations generally lag other verticals in security and have a very low tolerance for disruption. I suspect this problem is a lot worse than most people realize because many ransomware infections go unreported: many countries lack breach notification rules, or those rules cover compromises to confidentiality but not to availability, as in the case of ransomware.


A real danger in securing against this type of breach is the tendency to gravitate to one particular safeguard, such as backup and restore, which, while important, is just one of many things you can do to secure yourself against ransomware. In this blog, I explore several different safeguards you should consider as part of a holistic, multi-layered, defense-in-depth approach to securing against ransomware. None of these alone is a panacea; together they represent a very effective security posture against ransomware.


  1. Policy: ransomware often starts with employee actions and mistakes. Examples include clicking malicious links in emails or websites, opening email attachments, plugging in malware infected removable storage devices such as USB keys and so forth. Policy governs employee actions. Is your policy accurate, complete and up to date, especially as it pertains to employee actions that can lead to ransomware infections?
  2. Audit and Compliance: policy is a critical foundation of your security practice. To ensure employees are following it you need audit and compliance, in particular to ensure employee compliance with policy in the areas that could lead to ransomware infection.
  3. Risk Assessment: risk assessment is a key tool to identify risks to confidentiality, integrity and availability of patient information, including for risks such as ransomware. You can prioritize risks by impact and probability of occurrence, triage the top risks and address them through application of safeguards. The business impact of ransomware goes well beyond the ransom that may be paid since it can disrupt your mission critical business systems and processes and effectively halt your business.
  4. Anti-malware: having a good anti-malware solution installed on all endpoints, updated and effective is key in detection and remediation, for example quarantine, of malware including ransomware. You will not catch all ransomware this way, but many, especially older variants, will be caught.
  5. User Awareness Training: most ransomware infections start with employee actions. Training can help employees detect and avoid actions that could lead to infections. Again, not a perfect safeguard, but important in your overall anti-ransomware defense. Spear phishing training is particularly important to include in your overall training program.
  6. Email Gateway: email is a key ransomware infection vector, with spear phishing emails containing malicious links coaxing employees to click them, in which case a drive-by-download and infection of ransomware can result. Your email gateway can oversee emails and detect and block many of these.
  7. Web Gateway: web browsing (and clicking) is another key infection vector, with employees visiting websites and inadvertently clicking on malicious links that cause ransomware infections, again by drive-by-downloads. A good web gateway can detect many such websites, and help block these types of infections.
  8. Vulnerability Management and Patching: vulnerable devices and software create openings for malware and ransomware infections. A good vulnerability management program can identify vulnerabilities, for example in old, unpatched, or misconfigured software, and proactively remediate such vulnerabilities to block ransomware.
  9. Security Incident Response Plan: in the event of an infection such as ransomware, how your organization responds is key to faster resolution and minimizing business impact. Having a good, tested plan that employees can execute quickly and efficiently, with good coordination, is key to enabling this. This plan should include PR and communications for breach notification if needed.
  10. Backup and Restore: currently the “safeguard du jour” for ransomware, backup and restore is critical. Have it, use it (everywhere you have data), test it (test restore regularly), and make sure it is versioned, and some versions air-gapped with offline backup archives. Ransomware may get into your backups too, depending on when it occurs in your backup cycle, and how quickly you detect it and stop it, but if you have versioning and / or an air-gapped backup then you will have a workable backup version to restore. Keep in mind this is not a panacea though, since rolling back to a previous backup version effectively undoes updates since then, and missing patient information updates can translate into direct risks to patient safety and business impact. This is why backup and restore is necessary but not sufficient. It is far preferable to avoid ransomware in the first place.
  11. Device Control: this is the ability to enforce policy regarding removable storage. For example if an employee plugs in a ransomware infected removable storage device such as a USB key, this safeguard can enforce policy preventing ransomware jumping from the device to your IT network.
  12. Penetration Testing and Vulnerability Scanning: as seen in FBI raises alarm over ransomware targeting U.S. businesses, ransomware can enter your network through vulnerable or unpatched software, especially software facing the external Internet. Proactive penetration testing of such external-facing applications and interfaces, to identify and remediate such vulnerabilities, is key to mitigating the risk of this type of ransomware infection.
  13. Endpoint DLP: Data Loss Prevention software running on endpoint devices can enforce policy and help prevent user actions that can lead to malware infections such as ransomware.
  14. Network Segmentation: segmenting your network can help quarantine or localize any malware infections to prevent propagation across your network. This can limit the extent of infection, lessening business impact, and enabling faster resolution.
  15. Network IPS: a network Intrusion Prevention System can monitor network traffic to detect and prevent malicious activity, such as that which could lead to a ransomware infection.
  16. Whitelisting: useful on endpoint devices, whitelisting limits which applications can execute to a small list of approved applications. If ransomware were to get onto a machine with whitelisting, it would be benign on that machine: since it is not on the approved list of applications, it is blocked from executing and therefore unable to encrypt any patient information. This type of safeguard can be particularly useful on medical devices that don’t get patched or updated frequently.
  17. Network DLP: this type of DLP runs on a network and can enforce policy, including detection and prevention of network interactions and traffic that could lead to ransomware infection.
  18. Digital Forensics: in the event of an infection, digital forensics can help identify the type of ransomware, the extent of infection, and how it occurred, which are key to reducing business impact, and preventing future infections.
  19. SIEM: Security Information and Event Management can help provide realtime analysis of security alerts from across your applications and network, enabling faster detection and remediation of ransomware.
  20. Threat Intelligence Exchange: this can enable realtime exchange of threat information between safeguards in your network, and a global threat intelligence backbone from your security provider(s), helping orchestrate defense against ransomware. This is a critical part of the “immune response” of your organization to ransomware, which will help stop it and kill it as fast as possible.
  21. Business Continuity and Disaster Recovery: as we have seen some recent high profile ransomware infections have essentially shutdown the information technology systems of healthcare organizations, crippling mission critical business processes to the point where they had to send patients elsewhere. Having a good BC / DR capability with mirroring of data and hot standby can be helpful in keeping mission critical systems going while remediation is occurring. The effectiveness of this safeguard against ransomware depends on ransomware not propagating to your hot standby system, as can be prevented by various safeguards discussed previously.
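To make safeguard 3 above concrete, prioritizing risks by impact and probability can be sketched as a simple scoring exercise. The example risks and the 1-to-5 scales below are assumptions for illustration only; real risk assessments use richer methodologies.

```python
# Illustrative sketch of safeguard 3: rank risks by impact x probability.
# The example risks and 1-5 scoring scales are invented for illustration.

def prioritize(risks):
    """Return risks sorted by descending score (impact * probability)."""
    return sorted(risks, key=lambda r: r["impact"] * r["probability"], reverse=True)

risks = [
    {"name": "ransomware on endpoints", "impact": 5, "probability": 4},
    {"name": "lost unencrypted laptop", "impact": 4, "probability": 3},
    {"name": "insider snooping", "impact": 3, "probability": 2},
]

# Triage the top risks first, then work down the list applying safeguards.
for r in prioritize(risks):
    print(r["name"], r["impact"] * r["probability"])
```

The output order is the triage order: the highest-scoring risks get safeguards applied first.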


No organization wants to be “at the back of the herd” or “low hanging fruit” for attacks such as ransomware. It has been difficult in the past for healthcare organizations to measure or benchmark their breach security against the rest of the healthcare industry. It is one thing having a gap in your safeguards if everyone else has that gap. However, if you have a gap and most others don’t then you could be relatively vulnerable.


Intel Health and Life Sciences and several industry partners are currently conducting complimentary, confidential breach security assessments for provider, payer, pharma and life sciences organizations globally. Through this one-hour engagement, healthcare organizations are able to benchmark their breach security across 42 safeguard capabilities and 8 different types of breaches, including ransomware, against the rest of the industry to see what percentile they are in for readiness, and what gaps and opportunities for improvement they may have. To find out more, contact BreachSecurity@Intel.com


Following Bio-IT World, we’re asking some of the world’s top researchers how next generation sequencing (NGS) benefits them and their work. Today, we catch up with Mayo Clinic expert David I. Smith, Ph.D., who says NGS allows him to ask and answer questions in a surprisingly short amount of time. The real value, he points out, is that sequencing gives researchers the ability to look at trillions of molecules to see what is happening to populations and move research discoveries, particularly in cancer, forward.


Watch the above video to learn more and discover how cancer treatments will be dramatically different five years from now, and what keeps Smith up at night when it comes to NGS.

Realizing the potential in big data is a challenge we’re enthusiastically tackling head on here at Intel and a recently announced strategic partnership with the Alan Turing Institute (ATI) in the UK is just one example of where working with key partners can help us drive scientific and technological discoveries.


We want to help turn the rapidly increasing volume of data into meaningful insights which will help solve global challenges across a number of areas, including health and life sciences. The ATI’s vision is an exciting proposition: to be a national institute which supports the UK in becoming a world leader in data science, through:


  • Research into the fundamentals of algorithms for data science;
  • Training the next generation of researchers;
  • Addressing ways in which scientific advances can be taken into practice;
  • Collaborating with a range of public and private organizations.


If you want the deep dive on the ATI’s forward-looking vision, I’d highly recommend reading Institute Director Andrew Blake’s Alan Turing Institute Roadmap for Science and Innovation.


Alan Turing is a name that is familiar to many of you, I’m sure. As he is widely seen as the founder of modern computer science, we are delighted that new algorithms developed by the ATI will feed into the design of future generations of Intel® microprocessors. Intel will provide the ATI with world-class High Performance Computing solutions including Intel® Xeon®-based workstations, Intel Software tools and access to an Intel data center cluster based on Intel® Xeon® and Intel® Xeon Phi™.


People and Technology

But great technology is just one part of the story of Intel’s strategic partnership with the ATI, so I’m excited to tell you that we’re supporting the development of the next generation of data scientists too. Alongside hiring a number of talented individuals to work at the ATI, we will be supporting the PhD and Research Fellow programme, which will help fulfil one of the core aims of the Institute: helping to bridge the skills gap and place the UK in a strong global position in this sector.


Solving the Big Data Challenges in Healthcare

Analysis of big data has the potential to solve some of the biggest challenges in healthcare which will help us deliver better patient care, including All-in-One-Day personalized medicine, unlocking the value of electronic medical records through natural language processing and making sense of the ever-increasing data produced by wearables and sensors. It’s an exciting time and we’re eager to see where this fantastic strategic partnership between Intel and the Alan Turing Institute takes us in the coming years. I look forward to keeping you updated in future blogs.


Intel is driving toward a day when cancer patients routinely have their tumor DNA sequenced and receive precision treatment plans based on their unique biomolecular profile—all within 24 hours. We call this vision All in One Day, and we believe that with the right blend of industry-wide commitment, innovation, and collaboration, we can deliver on that vision in 2020.


All in One Day isn’t an endpoint, though. I believe it’s also a step toward a world in which life science researchers use ultra-sophisticated 3-D models to simulate the workings of the human body and predict health outcomes. As Dr. Jason Paragas, director of innovation at Lawrence Livermore National Lab, likes to say, “We’d never ask an engineer to build a bridge or design an airplane without modeling how it’s going to perform in the real world. But doctors do the equivalent every day.”


If we can empower researchers with advanced biomedical models and simulations, we stand to transform the practice of medicine. Building on the genomics revolution, we may be able to take much more guesswork out of medicine and dramatically expand the universe of available diagnostics, treatments, and preventive approaches.


It’s going to take massive increases in computing performance to support these breakthroughs. In the United States, the President’s National Strategic Computing Initiative (NSCI) aims to advance the technologies needed for computers that are 100 times more powerful than today’s most capable supercomputers. Other nations are moving forward with similar initiatives.


I recently worked with two of my HPC colleagues to develop a whitepaper that explores precision medicine and discusses Intel’s role in enabling it.


We talk about the central role of Intel® Scalable System Framework and its ability to support the convergence of HPC modeling/simulation, health analytics, machine learning, and visualization that precision medicine will require.


We touch on key technology innovations as well as collaborations with life science leaders to create open source platforms, tools, applications, and algorithms for precision medicine.


And we note that the advances provided by these extreme-scale computers will help us address critical challenges like climate change and renewable energy sources as well as enabling progress toward predictive biology and precision medicine.


I hope you’ll read the whitepaper and share your thoughts. What opportunities do you see for life sciences computing to transform biomedicine?  What roadblocks are in the way?


Ransomware, it’s a word I’m seeing with increasing frequency amongst security experts. And it’s one I’m keen to let others know about within healthcare because the dangers are already having a major impact on organisations in health and life sciences. A couple of months ago it was reported that a hospital in Germany suffered a security breach which led to all Electronic Medical Records being locked in what at first appeared to be a ransomware attack, with the hospital confirming that the malicious virus had been sent from an unknown source. Fortunately, in this case, the hospital added that no patient information had been accessed but they had not yet calculated the cost to the organisation in regaining access to the data.


Ransomware Could Cripple The Ability to Deliver Care

When you consider that a personal health record can be 10x to 20x more valuable to a criminal than an individual’s credit card information, you begin to understand the scale and importance of mitigating a wide range of security breaches for healthcare organisations. Breach types like ransomware go beyond unauthorized access to sensitive patient information: they compromise the ability of healthcare providers to access that information, crippling their ability to deliver care. No organisation is immune from breaches.


Security Workshop for Nordic Regions

That’s why I’m excited to welcome security experts from Intel, including David Houlding, Intel’s Healthcare Privacy and Security Lead, to Sweden at the end of May 2016 for a workshop to help healthcare organisations gain a better understanding of their breach security maturity, and benchmark their priorities across 8 breach types, including ransomware, as well as 42 breach security capabilities, against the rest of the health and life sciences industry. The event is invite-only, but if you are interested in finding out more on behalf of your healthcare organisation and potentially attending, please do get in touch today.


At the workshop, David will be talking through and helping organisations get the most out of the Security Maturity Model developed by Intel and a consortium of industry partners. It’s a fantastic resource and, no matter which country you are based in, I would recommend attending to help you and your organisation identify where your breach priorities or security capabilities fall short of the industry and established best practices, which will enable you to make more informed decisions about where and how to invest future security spending.


The Cost Of Under-Investment In Security

There is, of course, a cost to not investing in security too. In Sweden, I have seen an example of the cost to a healthcare organisation which suffered a ransomware attack. An infected file was opened from a webmail application while a doctor was connected to the hospital network. The malware began encrypting local files and those stored on the network, which included patient data from connected health centres outside of the hospital. The attackers also left a .txt file containing a ransom note.


Fortunately, the IT support team noticed the attack within 90 minutes and were able to stop backups of the infected data and close down unauthorized access to the network. After many hours of work to rectify the breach, network access was restored some 22 hours after the initial attack. I estimate that the cost in IT resource time alone was somewhere in the region of 20,000 Swedish kronor, which equates to approximately $2,500 or €2,200. The cost in time lost by clinicians having to use workarounds, and the potential loss had personal data got into the wrong hands, would be multiples of this figure.


Learnings From Healthcare Security Breaches

I’m always keen to understand what lessons can be learned from security breaches such as that explained above, because only then can we start to win the battle against these cyberattacks and keep patient data safe and secure. Intel’s Security Maturity Model is a huge step forward in helping healthcare organisations better understand where they are today and where they need to go in order to mitigate the risks of a breach. This is why I’m delighted that our workshop at the end of May will bring together healthcare organisations and Intel security experts here in Sweden to share their knowledge.


- Contact the author: Johan Liden

- Security Workshop, Sweden, May 31st – June 1st: Register your interest

- Intel Health and Life Sciences: Security and Privacy


Consumer health was one of the big trends that came out of HIMSS 2016. Patients using wearable technology and smartphone apps to collect and send data to physicians is making a dramatic impact on how healthcare research is performed.


One area where this model is already moving forward is in Parkinson’s disease research. Patients battling this disease usually see their physicians every six to 12 months. By utilizing technology, patients can regularly collect data on their movements, send the information to the cloud for analysis, and be better prepared for their next appointment. This process provides more value for each interaction with the doctor and from what we see, the patients are excited to be able to contribute data and help researchers combat this disease.


In the above video clip, Chen Admati, advanced analytics manager at Intel, explains how consumer health platforms such as wearable technology are helping in Parkinson’s disease and shows how Intel is working to develop new algorithms to analyze important information. The hope is to take the value from this research model and translate it to other disease platforms to combat some of the most prevalent health challenges facing us today.


Watch the video and let us know what questions you have about wearables and consumer health.

Healthcare providers today rely on an array of technologies to help manage key workflows, from maintaining electronic medical record (EMR) systems and performing clinical procedures to coordinating consultations and prescribing follow-up care. Yet in many cases, poor integration among technologies and outdated devices can waste time and hamper efforts to efficiently deliver high-quality care.


Refreshing older technologies with new solutions can improve caregiver collaboration, increase efficiency, and address security requirements. For example, caregivers need ways to securely share patient information among colleagues. They also need smooth, fast handoffs from one device or system to another—remembering multiple login passwords and transferring information among systems can be inefficient and time-consuming.


New Whitepaper: Workspace Transformation for Healthcare Providers


In today’s healthcare environment, enhancing communication and collaboration among clinical workers across the continuum of care is critical for producing optimal patient and financial outcomes.


Prescribing the Right Solution

Intel® mobile solutions are designed to help healthcare providers address workflow challenges simply and effectively, so they can refocus their time and resources on their patients. The latest generations of Intel® Core™ vPro™ and Core M vPro processors include technologies designed to easily integrate within the mobile healthcare environment, streamline workflows, and bolster security.


For instance, newer generations of devices can be equipped with WiGig technology, which can be combined with a number of commercially available wireless docking solutions. Those wireless docking stations enable healthcare providers to transition seamlessly between different clinical workstations without the hassle of connecting multiple wires. Clinical teams looking for improved ways to facilitate team-based care can also leverage the Intel® Unite™ platform, which provides a robust set of features to enable collaboration along with multiple layers of security through a simple and secure interface.


Further security measures in Intel® Identity Protection Technology (Intel® IPT) are built into the Intel Core vPro processor architecture. They provide role-based security that prevents unauthorized users from accessing healthcare systems, even if they have a stolen username and passcode.


Other solutions that depend on Intel® technologies include apps for capturing EMR data and using biometrics to access applications. All of these technologies are designed to improve clinical workflows and the patient experience.


The right mobile solutions and platform capabilities can simplify a wide range of communication and collaboration tasks. As a result, caregivers can stay focused on patients.

To learn more about how Intel® solutions are helping healthcare organizations achieve these goals, read Workspace Transformation for Healthcare Providers.


What questions do you have?

Bio-IT World is a great occasion to take stock and see what’s on the horizon. In a plenary keynote session on April 5, I spoke about three areas where we’re making progress toward achieving All in One Day precision medicine.


All in One Day is both a vision and a challenge. The vision is that if you’re diagnosed with cancer or another genetically influenced disease, your clinical team will sequence your DNA and provide you with a precision treatment plan based on your biomolecular profile—all within 24 hours. To do that, they’ll scour massive databases, examining the known available treatments to find the ones that are most effective for people who most closely line up with your unique biology, age, lifestyle, and other factors. So you receive the treatment that’s likely to be most successful with the fewest side effects. The upshot: less anxiety and uncertainty, less trial-and-error treatment, and the likelihood of better outcomes.
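The database-scouring step described above amounts to ranking candidate treatments by how closely their prior responders match the patient. This toy sketch, with invented profile attributes and a deliberately naive similarity measure, illustrates the idea only; a real system would use far richer clinical and genomic models.

```python
# Toy sketch of the "scour massive databases" step: rank treatments by how
# closely their prior responders match the patient's profile. The feature
# encoding and cohort data are invented for illustration.

def similarity(a, b):
    """Fraction of profile attributes two patients share."""
    shared = sum(1 for k in a if a[k] == b.get(k))
    return shared / len(a)

def rank_treatments(patient, cohorts):
    """Score each treatment by the average similarity of its responders
    to this patient, best match first."""
    scores = {}
    for treatment, responders in cohorts.items():
        scores[treatment] = sum(similarity(patient, r) for r in responders) / len(responders)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

patient = {"variant": "BRAF_V600E", "age_band": "50s", "smoker": False}
cohorts = {
    "treatment_A": [{"variant": "BRAF_V600E", "age_band": "50s", "smoker": False}],
    "treatment_B": [{"variant": "KRAS_G12D", "age_band": "70s", "smoker": True}],
}
ranking = rank_treatments(patient, cohorts)
print(ranking[0][0])  # the best-matching treatment comes first
```

The hard part in practice is not the ranking itself but assembling and comparing those patient cohorts at scale, which is what the computational work described below targets.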


With enough of the right kinds of innovation and focus, Intel thinks the goal is achievable by 2020. We’re working hard to make the vision a reality, and to make it practical enough for community oncologists to use as part of their clinical workflows.


Tools for Making the Most of Genomics Data


What kinds of innovation am I talking about? One crucial area is the development of open source tools for analyzing and managing genomics data.


Genomic analysis and precision medicine are massive big data applications. Increasingly, the limiting factor isn’t sequencing a genome, but assembling, analyzing, comparing, studying and storing it along with clinical and other data. At Bio-IT World, Intel and the Broad Institute of MIT and Harvard announced that we are advancing fundamental capabilities so large genomic workflows can run at cloud scale, as well as co-developing new open source tools to simplify the execution of large genomic workflows such as the Broad’s Genome Analysis Toolkit (GATK).


The Broad Institute released Cromwell, an integrated workflow execution engine designed to give organizations greater control by launching genomic pipelines on private or public clouds in a portable and reproducible manner. Broad and Intel also announced GenomicsDB, a novel way to store vast amounts of patient variant data and to process it with unprecedented speed and scalability. Broad is teaming up with Intel, Cloudera, and four leading cloud service providers to enable cloud-based access to GATK software. (Read more about optimized open source solutions on Intel® platforms.)


Collaborative Networks to Accelerate Breakthroughs


Solving massive challenges calls for deep collaborations across diverse institutions. For precision medicine, these collaborations must balance open data sharing with institutional control and rigorous protection of patient privacy.


The Collaborative Cancer Cloud, established last year by Intel and Oregon Health & Science University (OHSU), provides a robust foundation for such collaborations by enabling medical institutions to securely share insights from their private patient genomic data. The Cancer Cloud’s unique, federated approach to data sharing allows for rapid advances while overcoming many concerns about sharing sensitive datasets. At Bio-IT World, we welcomed the Dana-Farber Cancer Institute and the Ontario Institute for Cancer Research as recent additions to the Cancer Cloud.


Platform Innovation for Diverse Genomics Workloads


As powerful as today’s supercomputers are, All in One Day will require significant increases in computational capacity, performance, and throughput. Intel is driving progress on multiple fronts to help institutions manage, analyze, share, and store the expanding world of bio data. We’ve created Intel® Scalable System Framework (Intel® SSF) as a next-generation approach to developing high-performance computing (HPC) systems that are balanced, efficient, and reliable. We recently launched the Intel® Xeon® processor E5-2600 v4 product family, the first processor within Intel® SSF. Together with Intel® Xeon Phi™ processors, Intel® Omni-Path Architecture, Intel® Enterprise Edition for Lustre* Solutions, revolutionary Intel® Optane™ memory/storage technology, and other critical elements of Intel® SSF, we’re dramatically advancing the capabilities needed for precision medicine.


What will All in One Day mean for your organization? What questions do you have? What do you need to do to get ready? Tell me in the comments.


Dig deeper:

Stay in touch:                         

  • @IntelHealth, @portlandketan

How long before we see a real and dramatic change in the way health and care services are delivered in England on a large scale? It’s a question you can be forgiven for asking – and for concluding that we’re still a long way from an answer – but the recent announcements by NHS England around the Healthy New Towns (HNT) programme had me thinking about how bricks and mortar could be the catalyst for change that health and care services need.


Healthcare at the Heart of New Developments

The HNT programme will facilitate joined-up thinking from clinicians, designers and technology experts who will essentially start with a blank slate as house-builders create new developments. From designing infrastructure that makes healthy activities such as walking and cycling safer (and thus more attractive) to sharing technology and information across public services such as healthcare and social care, the programme aims to deliver better healthcare in a more efficient and economically sound way.


I think we’d all agree that a new approach to the provision of healthcare is needed in England and across the UK. Budgets are under pressure, we have an increasingly elderly population and chronic diseases such as diabetes and obesity are swallowing up huge resources. So what can new models of health & care services look like in a Healthy New Town and what advantages might it bring?


Utilizing Technology

NHS England’s Five Year Forward View clearly states that technology will play an important role in enabling change. Three key areas where I see technology bringing significant improvements for a Healthy New Town are:


  • Improved communication across the health and social care ecosystem – moving patient records to an electronic system ensures that patient information is always up-to-date and available anytime and anywhere, whether on a desktop computer, on a hospital ward or on a 2 in 1 device in the hands of a community nurse. The data can be easily and securely shared too, amongst authorized parties such as social care teams, thus helping to deliver a seamless patient experience through primary, secondary and social care. Often, these electronic medical records are made up of unstructured case notes which may contain hidden value for clinicians. For example, North East London NHS Foundation Trust and Santana Big Data Analytics are working together on a project to extract value from unstructured case notes using data analytics for the benefit of health and social care teams. Read this whitepaper [PDF] for more insight on that project.

  • Making new homes more accessible and connected – there are some obvious and practical considerations around accessibility for those with mobility issues which should be easy to plan into a new-build property. I’m also keen to see how the concept of smart homes and the internet of things can be incorporated into new building developments and how such technologies could be used within new health & care models.

  • Accessing healthcare in new ways – millennials already access many aspects of their daily lives through a connected mobile device, whether that be banking services, social media or checking a utility bill; healthcare will be no different. With faster internet connections and 5G mobile networks coming soon, I expect the ways in which future generations access healthcare to change too, e.g. a face-to-face consultation with a GP may no longer be the first option for patients.


Those are just three examples but there are certainly more and I’d love to hear how you see this Healthy New Towns programme playing out and the benefits it can bring (leave a comment @IntelHealth on Twitter or contact me via LinkedIn). We need to take a more holistic approach to health and care to make a real difference, so the design of this type of new community is a step in the right direction.


Precision medicine is gaining traction worldwide. Countries like China, the UK and Saudi Arabia are all committing to enabling precision medicine to improve the health of their people. In the US, I have been honored to learn from, and serve on, the NIH advisory group for the President’s Precision Medicine Initiative (PMI). Recently, Intel made corporate commitments to help accelerate the PMI effort.  We’ve launched an industry challenge called “All in One Day” to make an individual’s precision treatment possible, easy, and affordable within 24 hours from genome sequence to customized care plan.


As I and my team travel around the world to drive this initiative, we are hearing a common refrain around the need for robust and secure ways to share data so we can accelerate the scientific breakthroughs and insights for precision medicine.  It is increasingly clear that secure data sharing—at a scale far beyond what today’s efforts have achieved—is a fundamental barrier we must overcome to scale precision medicine for all. Vice President Biden’s “cancer moonshot” effort, for example, is focusing on this crucial data sharing challenge.


To that end, we announced our work with OHSU on the Collaborative Cancer Cloud in August. Earlier today, Intel and OHSU were pleased to announce the expansion of the Collaborative Cancer Cloud to include Dana-Farber Cancer Institute and Ontario Institute for Cancer Research. I am excited to welcome them as fellow pioneers in collaborating on this personalized medicine platform.

Cancer research, and the institutions conducting it, benefits greatly when datasets are as large as possible. By participating in the Collaborative Cancer Cloud, institutions increase their chances of making new discoveries and finding potentially life-saving insights through collaborative analytics across the patient datasets they have collectively assembled.


The Collaborative Cancer Cloud is unique because it uses a federated approach, meaning the institutions don’t need to upload their data in a centralized location in order to share or run analytics on larger datasets. This approach overcomes many of the concerns around collaborating on sensitive datasets while having access to unprecedented volumes of data. This allows for secure, aggregated computation across distributed sites without loss of local control of the data, ensuring an institution’s ability to maintain proper custody of its datasets and protecting patient privacy and any institutional intellectual property that may result.
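A toy sketch can make the federated idea concrete. The field name and statistic below are hypothetical illustrations, not the Collaborative Cancer Cloud’s actual protocol; the point is that each site shares only aggregates, never patient-level records:

```python
def local_summary(records):
    """Each site computes aggregate statistics on its own data;
    raw patient records never leave the institution."""
    values = [r["tumor_mutation_count"] for r in records]  # hypothetical field
    return {"n": len(values), "total": sum(values)}

def federated_mean(site_summaries):
    """The coordinator combines site summaries, never raw data."""
    n = sum(s["n"] for s in site_summaries)
    total = sum(s["total"] for s in site_summaries)
    return total / n if n else 0.0

site_a = [{"tumor_mutation_count": 40}, {"tumor_mutation_count": 60}]
site_b = [{"tumor_mutation_count": 100}]
summaries = [local_summary(site_a), local_summary(site_b)]
print(federated_mean(summaries))  # pooled mean (200/3) without pooling records
```

A real deployment layers secure enclaves, authentication, and privacy-preserving aggregation on top of this basic pattern, but the data-stays-local principle is the same.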


As more institutions join precision medicine platforms like the Collaborative Cancer Cloud, they will break trail on many important elements of collaborating in a federated environment. The Collaborative Cancer Cloud is designed to allow researchers to determine how and when their data will be used. For example, while the Collaborative Cancer Cloud provides a standard set of tools, it is the institutions who determine which tools they will use and which tools can be used on their data. This type of personalized medicine platform is designed to evolve and adapt to meet the needs of the institutions using it, rather than forcing those institutions to conform to the tools.


With the announcement today of OICR and DFCI helping Intel and OHSU to prove out and scale out these tools, it feels like All in One Day is one step closer. But we have many miles to go to drive the kind of security, the kind of scale, the kind of collaborative data sharing that will be needed to accelerate the research, and thus the clinical options, for not only people with cancer but a wide range of diseases. We look forward to bringing on more collaborators, more data, and more tool-makers in the near future.


Learn more about Intel Life Sciences at www.intel.com/healthcare/lifesciences

If you were living in England in 2007, you probably remember the tragic death of Baby P.

Little Peter Connelly, age 17 months, died that year after sustaining more than 50 injuries over eight months at the hands of his caregivers. Despite numerous encounters with the healthcare and social care systems, Peter fell through the cracks. He died before anyone recognized the pattern of his injuries and intervened successfully to save him.


Peter’s death epitomizes the question that plagues case workers and clinicians around the world: How can I prevent the next Baby P? With heavy caseloads and many organizations involved, how can conscientious clinicians and caseworkers - whether they work with children, the frail elderly, victims of domestic violence, or other vulnerable individuals - assess each client’s life and health, identify clients who are at greatest risk, and get the right resources to them at the right time?


To accomplish this, clinicians and case workers need a comprehensive picture of the client’s encounters with diverse agencies. Oftentimes, valuable information is housed in clinical case notes and incompatible record-keeping silos, leaving care providers with only a partial view of the client’s health situation.


Now, there’s technology that can help. The North East London National Health Service (NHS) Foundation Trust (NELFT) recently worked with Intel and Santana Big Data Analytics Ltd. (Santana BDA) on a proof-of-concept project demonstrating a practical, affordable tool for extracting relevant information from large volumes of clinical case notes.


The Santana solution uses sophisticated big data analytics techniques to search through text-based clinical notes from diverse sources, such as those made by GPs, psychiatrists, community nurses, school nurses, and others. As it searches, it extracts crucial information and then presents it in a quick, easy-to-review format to authorized care professionals. Using these results, care professionals may be better able to:

  • Get value from written notes that are too voluminous for practical, timely review by humans
  • Gain a more complete understanding of the patient’s health and circumstances
  • Identify risks and prioritize caseloads to help ensure critical needs are met
  • Respond proactively rather than reactively
  • Make better use of consultation time and conduct more focused, relevant dialogue with patients
  • Improve resource utilization through earlier intervention and potentially avoiding hospital admission

As a nurse and a former locality commissioner, I recognize just how important these technology innovations could be in helping us prevent another Baby P. I invite you to read this recent paper from Intel, NELFT and Santana BDA, which outlines our collective work to reduce risk and improve care.


I was saddened, like many here at Intel, to hear of the passing of former Intel Chairman and CEO Andrew Grove. Many kind words have been spoken about Andy in the past few days but I wanted to add my own tribute here, and talk a little more of his philanthropic work specifically around healthcare and education.


Andy had deeply personal reasons to donate tens of millions of dollars to translational cancer research and research into neurodegenerative diseases, following diagnoses of cancer in 1995 and Parkinson’s disease in 2000.


He took on the battle with these diseases by immersing himself in the detail of each condition, the potential treatments, and how he, as a patient, could achieve a better outcome. That search often started with Andy looking at how existing data could be put to better use, and how it could be collated and analysed more effectively in the future – not just for himself but for future generations.


As an advisor to the Michael J. Fox Foundation for Parkinson’s Research, Andy pushed for more, and faster, research into the disease. Donating over $20m to Parkinson’s research and bequeathing $44m more to the Foundation was just one aspect of his generosity; his drive to turn medical research into something practical for patients was his real priority.


Better use of data to make informed decisions was a message Andy carried with him wherever he went, be it to government, to medical researchers, or to colleagues here at Intel. I highly recommend reading this Forbes article on Andy, which outlines in great detail his multi-faceted generosity in the field of healthcare.


But it wasn’t just healthcare that motivated Andy to give up his time and personal wealth; education was a real focus for him too. Specifically, Andy was hugely passionate about the value of vocational training, funding scholarships initially to schools and then to community colleges. He was vociferous and generous in equal measure in making the case that vocational education offers a real pathway to a thriving career.


Andy’s recent passing really brings into focus the importance of the work we are doing here at Intel today. We’re pushing ahead at a rapid rate to achieve All-in-One-Day diagnosis and personalized treatment for cancer patients and our partnership with the Michael J. Fox Foundation for Parkinson’s Research is working hard to harness the power of big data to measure Parkinson's disease symptoms and progression.


His memory will live on and his passion for healthcare, education and technology will drive us all forward to make these big advances, more quickly.

by Charlotte Rasmussen


As discussed in an earlier blog post, QIAGEN has been working together with Intel to bring infrastructure together with genome analysis tools to enable massively scalable whole genome analysis at lower cost. Now, there’s a new white paper detailing the reference architecture and other technical information for our joint solution.


Designed to help NGS scientists keep their sequencing pipelines running smoothly even at capacity — all while saving money and producing better results — our solution provides whole genome analysis for as little as $22 per genome. It meets the computational and analysis demands of Illumina’s HiSeq X Ten, and Intel’s 32-node offering can save researchers up to $1.3 million in total cost of ownership compared to the 85-node cluster recommended by the vendor for a BWA+GATK variant calling pipeline.


Here’s a quick look at what makes our solution different:


  • Built-in analysis tools: The system uses the Biomedical Genomics Server solution.


  • Scalability: Designed to scale on-demand for computing, networking, and storage, the cluster allows labs to manage capacity easily and cost-effectively.


  • Proven accuracy: While efficiency and cost-effectiveness are important factors for NGS data analysis, the solution’s accuracy in both variant calling and interpretation is among the best available.


  • User friendly: The solution masks the complexity of cluster computing with the easy-to-use Biomedical Genomics Workbench.


  • Fast connection to data: We used a high-speed interconnect system based on Intel True Scale Fabric to link the compute nodes and centralized storage, providing up to 40 Gbps of bandwidth per port.


  • Parallel storage: The solution incorporates Intel Enterprise Edition for Lustre, the world’s leading parallel storage system, to keep all the nodes, cores, and threads operating at high efficiency.


For more details, check out the full white paper.


Our tests showed that the 32-node system could process and analyze 48 genomes in 24 hours, on average — enough capacity to handle all the data produced by a HiSeq X Ten. We also tested the system with exome data and successfully analyzed approximately 1,440 human exomes every 24 hours.


Together with Intel we’ll be presenting this joint solution at the upcoming Bio-IT World conference in a presentation addressing the growing demand for population-scale genomics.


Bio-IT World 2016


April 5-7 we’ll be in Boston at the Annual Bio-IT World Conference & Expo, where we’ll demonstrate how our companies have partnered to design a reference architecture that addresses these challenges in a cost-effective manner.


You can visit us in booth #229 and you’re of course very welcome to join our presentation on Wednesday, April 6:


Title: A Reference Architecture for High-Volume Whole Genome Data Analysis

Date and time: April 6 at 3:30 PM

Location: Vendor Theater

Speakers: Mikael Flensborg, Director of High Volume Sequencing Solutions, QIAGEN and Michael McManus, Senior Health & Life Sciences Solution Architect, Intel


If you’d like to learn more but are not able to attend the conference, please feel free to email us.


We're looking forward to seeing you in Boston!


More information about Bio-IT World


Charlotte Rasmussen is a scientific correspondent at QIAGEN, where she summarizes and communicates scientific information and customer stories.

The elderly account for two million unplanned admissions (68% of total admissions) per annum in the UK and the number is growing.  In some areas of the country each 65+ year old spends 4 days per annum as an unplanned admission in a hospital bed.  Care of the elderly, in this regard, costs the NHS £8.3bn per annum. This is a small amount in comparison with the social care costs and the wider personal and economic costs which encompass items like loss of economic productivity due to carer commitments.


Most of the cost arises from issues only emerging when patients present in an acute care setting. The issues associated with overcrowded geriatric wards, lack of capacity in social care beds, problems reintroducing patients back into their own home settings, higher than optimal length of stay and exacerbation of co-morbidities are all well known. Many times patients present with falls and subsequent breaks – this is what the structured record in the Electronic Medical Record (EMR) sees.


Managing avoidable admissions

Almost always, however, the fall is not the underlying cause of the presentation, and the admission is therefore avoidable. The information that allows clinicians, service designers and payers to address this issue does not lie, fully, in the structured EMR data – it lies in case notes. Santana Big Data Analytics is a company geared to unlocking the value of these notes, and our project with North East London NHS Foundation Trust (NELFT) is an example of our transformative technology in action.


NELFT provides integrated community and mental health services for a diverse population of almost 1.5 million people living in the London Boroughs of Waltham Forest, Redbridge, Barking & Dagenham and Havering. Additionally, the Trust manages community health services in south-west Essex. NELFT had an annual budget of more than £325 million in 2013/2014, employs around 5,500 staff, and is a recognised research leader and innovator, partnering with diverse academic and private-sector leaders to explore new approaches to improving the quality of its services.


High Quality, Succinct Case Notes are Key to Success

As a Community Provider, part of NELFT’s role is to provide services that prevent admissions and allow people to live longer, healthier lives by consuming services away from hospitals. These include services designed to be preventative, rehabilitative and quality-of-life preserving. Clinicians in NELFT recognize that high quality, succinct case notes are key to the way they operate their services.


Savings are made and benefits are gained by changing clinical and operational practice. Although there are large volumes of data to be processed using Big Data tools and techniques, this is really a small data problem: what information can be delivered in a consumable format to clinicians to inform care?


Trigger Alerts, Identify Unmet Needs and Prioritize Care

Clinical requirements for information are often expressed in terms of need for integrated care records that go beyond the coded EMR / commissioned care pathway data set and present data in a more timely way to integrated and yet virtual teams. They need to focus quickly on what is important without having the time to “train importance” into their data.


In NELFT there are a number of EMR systems, which contain data that needs to be seen in the context of social care data from the local Council and other care providers, including the independent sector and primary care. NELFT have an award-winning business intelligence platform which is well used by many staff; however, it relies on structured and often coded (thus latent) EMR data. Clinicians like the idea of a single source of clinical and operational truth in a web-based, mobile / private cloud environment, but need it to do more. They need it to produce alerts, identify unmet need, prioritize care and, most importantly, give a complete overview of everything the organisation knows about the patient.

Extracting Structured Information from Unstructured Text

Enter Santana BDA. Santana Big Data Analytics brings together proven expertise in the fields of business and clinical intelligence, data analytics, big data and natural language processing (NLP). They worked in conjunction with the NELFT Performance Team to address the above issues using NLP, a technique for automatically extracting structured information from unstructured text. It provides a way of generating large amounts of coded clinical data without additional data entry requirements. The potential uses of this data are enormous - case summaries, monitoring performance, critiquing clinical decisions, screening risk, etc. Natural Language Processing also provides a way of readily combining information from different electronic records systems.
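As a simplified illustration of the idea (not Santana’s actual engine), here is a Python sketch that turns free-text case notes into structured flags using a fixed, hypothetical keyword list; a production system would use trained clinical NLP classifiers instead:

```python
import re

# Hypothetical risk terms for illustration only; a real clinical NLP
# system would use trained classifiers, not a hand-written keyword list.
RISK_PATTERNS = {
    "fall": re.compile(r"\b(?:fell|fall(?:en)?)\b", re.IGNORECASE),
    "bruising": re.compile(r"\bbruis\w*\b", re.IGNORECASE),
    "missed_appointment": re.compile(r"\bdid not attend\b|\bDNA\b", re.IGNORECASE),
}

def extract_flags(note_text):
    """Return a sorted list of structured risk flags found in a free-text note."""
    return sorted(name for name, pattern in RISK_PATTERNS.items()
                  if pattern.search(note_text))

note = "Patient fell at home last week; extensive bruising noted. Did not attend follow-up."
print(extract_flags(note))  # ['bruising', 'fall', 'missed_appointment']
```

Even this toy version shows the payoff: once flags are structured, they can be counted, trended and combined across record systems, which is exactly what coded EMR data allows and free text alone does not.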


Intel Implementation of Cloudera Technology Stack

Intel have been instrumental in this project. Having worked previously with the team that founded Santana BDA in Leeds, Intel were well positioned to provide a wide range of inputs, from user-experience design, big data processing and technology optimization support to clinical input and help with hardware provision for the Santana BDA team. Having understood the collective needs of the NELFT and Santana BDA staff, Intel worked to implement the Cloudera technology stack which is driving the project.


Ultimately this resulted in the design of an NLP appliance where the technology is optimized for the fast processing of Case Notes and the derivation of clinical meaning through the design and implementation of sets of data classifiers. The classifiers are created using machine learning and recognized, high quality clinical research.


The Santana Big Data Analytics engine is architected to run in a secure cloud or server cluster running on premise or externally. The initial implementation of Santana NLP engine used SQL Server technology to process the data. This worked well at NELFT for processing batches of 100,000 patient records. To create a solution that can process larger volumes of historical data, the Santana team are working with Cloudera to utilise the power of Apache Hadoop.


They have implemented the NLP engine as a scalable appliance running on Cloudera Distribution for Apache Hadoop (CDH) Enterprise. Both implementations run on scalable infrastructure powered by Intel® Xeon® processors.


A Flexible, Affordable and Scalable Platform for Analyzing Unstructured Data

Apache Hadoop is an open-source software framework that allows massive data sets to be distributed and processed across clusters of computers. Hadoop offers a flexible, affordable, and scalable platform for analyzing unstructured data, as well as for other analytics scenarios where the velocity, volume, and variety of data make them impractical for traditional databases. Cloudera CDH provides enterprise capabilities for Apache Hadoop processing along with system management capabilities that make it well-suited to deployment in healthcare and other enterprise environments.
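The MapReduce model underlying Hadoop can be sketched in a few lines of Python: a map phase emits key-value pairs from each note, and a reduce phase groups and sums them. Hadoop distributes exactly this pattern across a cluster (the sample notes below are invented for illustration):

```python
from collections import defaultdict

def map_phase(note):
    """Map: emit (term, 1) for each word, as a Hadoop mapper would."""
    for word in note.lower().split():
        yield (word.strip(".,;"), 1)

def reduce_phase(pairs):
    """Shuffle + reduce: group pairs by key and sum their counts."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

notes = ["patient fell at home", "fell again; bruising noted"]
pairs = [kv for note in notes for kv in map_phase(note)]
print(reduce_phase(pairs)["fell"])  # → 2
```

Because the map step is independent per record and the reduce step only needs grouped keys, the same program scales from two notes on a laptop to millions of case notes on a CDH cluster.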


Reducing the Complexities of Existing Infrastructure

The close collaboration among NELFT, Santana, and Intel, coupled with Santana’s methods and tools, meant that the appliance could be installed quickly in the NELFT infrastructure. Within a few weeks the team had gathered the data and the Santana engine on the Cloudera and Intel appliance churned through records in seconds that would have previously taken months of labour to read and analyse manually.


Modern Big Data technologies, if architected well, can reduce many of the complexities of existing infrastructures. Combined with NLP, they can provide forward-looking information to a clinical or operational organisation, augmenting the Business Intelligence (BI) systems in use today, which rely on rearward-looking dashboards.

