
Intel Health & Life Sciences

51 Posts authored by: JULIE MALLOY


Based on what we heard at Supercomputing last month, it’s clear that bio IT research is on the fast track and in search of more robust compute power.


In the above video, Michael J. Riener, Jr., president of RCH Solutions, talks about dynamic changes coming to the bio IT world in the next 24 months. He says that shrinking budgets in research and development mean that more cloud applications and service models will be implemented. When it comes to big data, next generation sequencing will heighten the need to analyze data, determine what to keep and what to discard, and decide how to process it.


Watch the clip and let us know what questions you have. What changes do you want to see in bio IT research?


With SC14 kicking off today, it’s timely to look at how high performance computing (HPC) is impacting today’s valuable life sciences research. In the above podcast, Dr. Rudy Tanzi, the Joseph P. and Rose F. Kennedy Professor of Neurology at Harvard Medical School and the Director, Genetics and Aging Research Unit at the MassGeneral Institute for Neurodegenerative Disease, talks about his pioneering research in Alzheimer’s disease and how HPC is critical to the path forward.


Listen to the conversation and hear how Dr. Tanzi says HPC still has a ways to go to provide the compute power that life sciences researchers need. What do you think?


What questions about HPC do you have? 


If you’re at SC14, remember to come by the Intel booth (#1315) for life sciences presentations in the Intel Community Hub and Intel Theater. See the schedules here.

What better place to talk life sciences big data than the Big Easy? As temperatures are cooling down this month, things are heating up in New Orleans where Intel is hosting talks on life sciences and HPC next week at SC14. It’s all happening in the Intel Community Hub, Booth #1315, so swing on by and hear about these topics from industry thought leaders:


Think big: delve deeper into the world’s biggest bioinformatics platform. Join us for a talk on the CLC bio enterprise platform, and learn how it integrates desktop interfaces with high performance cluster resources. We’ll also discuss hardware and explore the scalability requirements needed to keep pace with the Illumina HiSeq X Ten sequencing platform, and with a production cluster environment based on the Intel® Xeon® processor E5-2600 v3. When: Nov. 18, 3-4 p.m.


Special Guests:

Lasse Lorenzen, Head of Platform & Infrastructure, Qiagen Bioinformatics;

Shawn Prince, Field Application Scientist, Qiagen Bioinformatics;

Mikael Flensborg, Director Global Partner Relations, Qiagen Bioinformatics


Find out how HPC is pumping new life into the Living Heart Project. Simulating diseased states and personalizing medical treatments require significant computing power. Join us for the latest updates on the Living Heart Project, and learn how creating realistic multiphysics models of human hearts can lead to groundbreaking approaches to both preventing and treating cardiovascular disease. When: Nov. 19, 1-2 p.m.


Special Guest: Karl D’Souza, Business Development, SIMULIA Asia-Pacific


Get in sync with scientific research data sharing and interoperability. In 1989, the quest for global scientific collaboration helped give birth to the World Wide Web. In this talk, Aspera and BioTeam will discuss where we are today with new advances in global scientific data collaboration. Join them for an open discussion exploring the newest offerings for high-speed data transfer across scientific research environments. When: Nov. 19, 2-3 p.m.


Special Guests:

Ari E. Berman, PhD, Director of Government Services and Principal Investigator, BioTeam;

Aaron Gardner, Senior Scientific Consultant, BioTeam;

Charles Shiflett, Software Engineer, Aspera


Put cancer research into warp speed with new informatics technology. Take a peek under the hood of the world’s first comprehensive, user-friendly, and customizable cancer-focused informatics solution. The team from Qiagen Bioinformatics will lead a discussion on CLC Cancer Research Workbench, a new offering for the CLC Bio Cancer Genomics Research Platform. When: Nov. 19, 3-4 p.m.


Special Guests:

Shawn Prince, Field Application Scientist, Qiagen Bioinformatics;

Mikael Flensborg, Director Global Partner Relations, Qiagen Bioinformatics


You can see more Intel activities planned for SC14 here.


What are you looking forward to seeing at SC14 next week?


The promise of personalized medicine relies heavily on high performance computing (HPC). Speed and power influence the genome sequencing process and, ultimately, patient treatment plans.


With the SC14 Conference coming up next month, we caught up with Carlos Sosa, high performance computing architect at Cray, Inc., to hear his thoughts on the state of HPC. In the above video clip, he says that personalized medicine is on the way but that HPC technology needs to be more robust to answer questions quickly for patients and doctors.


He cites a University of Chicago workflow that used parallel machines to sequence genomes and performed 47 years of research in just 51 hours as an example of moving toward personalized medicine capability.
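For a rough sense of what that compression in time implies, here is a back-of-the-envelope sketch. It uses only the figures quoted in the clip; the arithmetic is our own illustration, not part of the workflow itself.

```python
# Back-of-the-envelope arithmetic for the University of Chicago example above.
# The 47-year and 51-hour figures come from the clip; the calculation is illustrative only.
serial_hours = 47 * 365.25 * 24   # ~47 years of sequential compute, expressed in hours
wall_clock_hours = 51             # elapsed time reported for the parallel workflow

speedup = serial_hours / wall_clock_hours
print(f"Implied effective speedup: ~{speedup:,.0f}x")   # roughly 8,000x
```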


Watch the clip and let us know what questions you have about HPC and personalized medicine. What are you seeing?


In the above video, Cycle Computing CEO Jason Stowe talks about the strong disconnect that exists between research and clinical analysis. He says the current challenge in bio IT is to analyze data, make sense of it, and do actionable science against it.


He shares an example of a 156,000-core workload run in eight regions of the globe that produced 2.3 million hours of computational chemistry research (264 years’ worth) in just 18 hours. He says this capability will transform both access patterns and the kinds of research that pharmaceutical, life sciences, and healthcare companies are able to tackle when it comes to analyzing genomes.
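If you want to sanity-check those numbers yourself, the short sketch below runs the arithmetic on the figures quoted in the clip. It is our own illustration, not Cycle Computing’s code.

```python
# Quick sanity check of the Cycle Computing figures quoted above (our own arithmetic).
cores = 156_000                  # cores used across eight regions
wall_clock_hours = 18            # elapsed time for the run
reported_core_hours = 2_300_000  # computational chemistry hours reported

available_core_hours = cores * wall_clock_hours          # ~2.8 million core-hours of capacity
years_equivalent = reported_core_hours / (365.25 * 24)   # ~262 years, close to the "264 years" cited

print(f"Capacity: {available_core_hours:,} core-hours")
print(f"Reported work: {reported_core_hours:,} hours ≈ {years_equivalent:.0f} years of serial computing")
```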


Watch the clip and let us know what you think. What questions about research and clinical analysis do you have?

Below is the second in a series of guest blogs from Dr. Peter J. Shaw, chief medical officer at QPharma Inc. Watch for additional posts from Dr. Shaw in the coming months.


With all the recent advances in tablet technology, the way pharmaceutical sales professionals interact with health care providers (HCPs), and in particular doctors, has changed. Most pharmaceutical companies now provide their sales teams with touch-screen tablets as their main platform for information delivery, and the day of paper sales aids, clinical reprints, and marketing materials is rapidly fading. Doctors have less time to see sales professionals during their working day, and many institutions are placing increasing restrictions on access to doctors. The pharmaceutical industry therefore has to be ever more inventive and flexible in the way it approaches doctors and conveys the information they need to keep up to date on pharmaceutical, biotech, and medical device advances.


  • How has this impacted the life of the sales professional?
  • How have pharmaceutical companies adapted to the changes?
  • To what extent has the use of mobile devices been adopted?
  • What impact has this had on the quality of the interaction with HCPs?
  • What are alternatives to the face-to-face doctor visit?
  • How have doctors received the new way of detailing using mobile technology?
  • What do doctors like/dislike about being detailed with a mobile device?
  • What does the future look like?
  • Are there any disadvantages to relying solely on mobile technology?


To answer some of these questions, and hopefully to generate a lively discussion on the future of mobile technology in the pharmaceutical sales world, I would like to share some facts and figures from recent research we conducted on the proficiency of sales reps using mobile devices in their interactions with HCPs, and the impact this has had on clinical and prescribing behaviors.


  • Tracking the use of mobile devices over the last three years makes clear that sales professionals use them to widely varying degrees.
  • Where sales reps have only the mobile device, they use it in just 7 to 35 percent of interactions with HCPs.
  • The use of mobile devices increases with the duration of the interaction with HCPs, in that the device is used in almost all calls lasting over 15-20 minutes.
  • Many reps do not use mobile devices in calls under 5 minutes. Often this is due to the non-interactive nature of the content, or the awkwardness of navigating through multiple required screens before arriving at information relevant to that particular HCP.
  • We have data to show that where the mobile device is highly interactive and the sales rep is able to use it to open every call, the call with the doctor runs, on average, 5-7 minutes longer than if the device is not used.
  • In cases where doctors will take virtual sales calls, these calls are greatly enhanced if there is a two-way visual component. Any device used in virtual sales calls must have two-way video capability, as the HCP will expect to see something to back up the verbal content of the sales call.
  • Most doctors feel that the use of mobile technology in face-to-face calls enhances the interaction with sales reps provided it is used as a means to visually back up the verbal communication in an efficient and direct manner.
  • Screen size is the main complaint we hear from HCPs. Most say that where the rep is presenting to more than one HCP, the screen needs to be bigger than the 10” screens on most of the devices reps currently use.


Mobile devices are clearly here to stay. HCPs use them in their day-to-day clinical practice and now accept that sales professionals will also use them. When the mobile device is expected to be the sole means of information delivery, more work needs to go into designing the content and making it possible for the sales professional to navigate to the information that is relevant to that particular HCP. All aspects of the sales call need to be on one device: information delivery, signature capture and validation for sample requests, and the ability to email clinical reprints immediately to the HCP are just the start.


In part 2, we will look at how sales reps are using mobile devices effectively and the lessons to be learned from three years of data tracking the use of these devices and the increasing acceptance of virtual sales calls.


What questions do you have?


Dr. Peter J. Shaw is chief medical officer at QPharma Inc. He has 25 years of experience in clinical medicine in a variety of specialties, 20 years’ experience in product launches and pharmaceutical sales training and assessment, and 10 years’ experience in post-graduate education.

For genomic researchers, speed and cost drive their day-to-day data generation activities. When given a choice, most researchers will choose longer wait times for results if it means they can process more samples.


In the above video, Tim Fennell, director of bioinformatics for the genomics platform at The Broad Institute, talks about the organization’s genomic data generation and data processing pipelines, plus how that data is provided to researchers in the community. He says that the value in genomic research is how quickly and how inexpensively research analysis can be executed.


What do you think?  Does speed or cost win in your data analysis projects?


When it comes to bio IT, data is the key that drives progress. In the above video, Pek Lum, vice president of solutions and chief data scientist at Ayasdi, and Mikael Flensborg, director of global partner relations at CLC Bio, talk about how to make big data into small data so that it’s accessible to physicians and can be leveraged to tackle complex issues like cancer.
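Ayasdi’s own approach centers on topological data analysis, so treat the snippet below purely as a generic illustration of what “making big data small” can mean: it collapses a wide, synthetic gene-expression matrix into a few summary dimensions a physician could actually review. It is not Ayasdi’s or CLC Bio’s code.

```python
# Illustrative only: reduce a wide "big data" matrix to a handful of summary dimensions.
# Plain PCA on synthetic data -- not the method Ayasdi or CLC Bio actually uses.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
expression = rng.normal(size=(500, 10_000))   # 500 samples x 10,000 genes (synthetic)

pca = PCA(n_components=3)
summary = pca.fit_transform(expression)       # 500 samples x 3 summary dimensions

print(summary.shape)                          # (500, 3)
print(pca.explained_variance_ratio_)          # fraction of variance each dimension retains
```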


Watch the clip and let us know what questions you have about big data in bio IT.


Big Health

Posted by JULIE MALLOY Aug 18, 2014

Below is a guest post from Kyle H. Ambert, PhD, Intel Graph Analytics Operation.


Trends come and go, in the analytics world. First, everything is supercomputing, then everything is distributed computing. SQL. NoSQL. Hadoop. Hadoop! HADOOP! And then, Spark makes its way onto the scene, changing everything yet again. Navigating this alphabet soup of analytical spare parts is enough to make even the most devoted of data scientists wish they had listened to their respective mothers and become physicians.


As a graduate of Oregon Health & Science University School of Medicine, I lived at the forefront of where big data technology meets healthcare, while researching the biomedical and clinical applications of artificial intelligence, or “big health,” as I liked to refer to it. Like most data scientists, while there, I found myself spending a great deal of my time writing code simply to acquire data sets, format them in a sensible way, and remove uninformative or misleading information they may contain. This, most would agree, is what's referred to as "the essential pre-processing steps of data analysis," or, "the boring stuff," in technical parlance. The "development and application of analytical algorithms," or, "the reason I got into this business in the first place," was often relegated to an unfortunately modest fraction of my day.


An Experienced Programmer reading this is likely to observe, "well, the obvious solution to your problem is to write a software library abstracting out the repeated steps in your so-called boring stuff." Astute as ever, Experienced Programmer, but what of the ever-increasing population of domain experts who need to gain insights from their own data, but don’t write code? What of the physician who wants to examine the relative rates of diabetes diagnoses in their practice over time? Will you be the one to look the population geneticist writing a meta-analysis in the eye and say, "I'm sorry, but if you want to do a large-scale text-mining study of the publications in your field, you're going to have to learn to program on the streets"? I couldn't do it.


That's why, when I was given the opportunity to join Intel's Graph Analytics Operation to guide the development of the Intel Analytics Toolkit (IAT) for end users in biomedicine, I jumped at the chance. Graph analytics enables users to analyze data using methods that take into account the relationships inherent in their data. With IAT, we enable biomedical researchers and physicians to use this technology to gain insight from networks of biomedical information. Because the toolkit was developed with scalability in mind, it shields the user from the laborious steps of working with big data in a distributed environment, presenting an intuitive user interface to a suite of powerful analytical tools.
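To get a feel for what “taking relationships into account” means in practice, here is a tiny illustrative graph built with the open-source NetworkX library. It is not the IAT API, and the patients, diagnoses, and medications are made up.

```python
# Illustrative only: a toy biomedical graph built with NetworkX, not the IAT API.
# Nodes are patients, diagnoses, and medications; edges are relationships between them.
import networkx as nx

g = nx.Graph()
g.add_edge("patient:001", "dx:type-2-diabetes", relation="diagnosed_with")
g.add_edge("patient:001", "rx:metformin", relation="prescribed")
g.add_edge("patient:002", "dx:type-2-diabetes", relation="diagnosed_with")
g.add_edge("rx:metformin", "dx:type-2-diabetes", relation="indicated_for")

# A relationship-aware question: which other patients share a diagnosis with patient:001?
shared = {
    other
    for dx in g.neighbors("patient:001") if dx.startswith("dx:")
    for other in g.neighbors(dx) if other.startswith("patient:") and other != "patient:001"
}
print(shared)   # {'patient:002'}
```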


The real power here, for the aspiring data scientist, is that all the tools needed for importing, cleaning, storing, and analyzing data are in the same place—no more writing code to connect an XML parser to a database; no more figuring out how to write analyses that efficiently scale to big data, or that are happy to work in a distributed environment—we’ve taken care of that for you. This, we’ve found, drastically decreases the time spent on the monotonous steps of data analysis, letting analysts focus on understanding their results—the reason they got into this business in the first place.


This month, we began a limited trial of the IAT, and we're partnering with university hospitals, private medical research organizations, and health insurance companies to better understand the needs of the biomedical and clinical communities, in terms of scalable data analysis. What we're already learning is that there is a huge need in the medical community for large-scale graph analytics, particularly when it comes to developing an integrated representation of heterogeneous data types—such as are found in electronic health records, or are used to inform Clinical Decision Support systems.


What questions do you have? To learn more about the IAT, watch this video, or see intel.com/graph. And, of course, if you have a biomedical data analysis problem you'd like to work on with us, or if you’d like to join the limited trial, leave a comment below.

Healthcare technology covers a wide continuum—from clinician/patient interaction using tablets and smartphones to research scientists analyzing genomic data to discover new personalized medicine strategies. All of these healthcare-related activities rely on computing power and are more connected than ever.


That’s why our community is expanding its coverage to include more content focused on the extensive range of healthcare activities that integrate technology for the betterment of patient care. Today, we are announcing the Intel Health & Life Sciences Community as the new umbrella name for this destination that will explore all things related to health IT and bio IT technology.


What does this mean for you? In addition to the healthcare IT device blogs and videos that have been the main focus the past few years, you’ll also see and hear about how genomic research, big data, wearables, and high performance computing impact clinical interactions and individual treatment plans. It’s vital that healthcare CIOs, administrators, clinicians, and researchers understand and connect the dots on these important topics, so our role is to help facilitate these conversations and provide educational stories that can help you in your jobs and daily workflows.


Issues that impact both health IT and bio IT—like security, cloud, mobility, and interoperability—will continue to be dedicated topics that we’ll explore regularly with both Intel experts and your peers. In addition, the community will present blogs and videos from the bio IT ecosystem that focus on foundational industry topics such as compute power, sequencing, and personalized medicine.


Because of this new focus, our Twitter handle is changing to @IntelHealth to reflect the broader scope of content. Follow the handle for the latest links to new content and to interact with others in the community.


It’s a great time to be in the healthcare technology arena. We’re excited about this expansion of healthcare technology coverage and welcome your feedback and contributions. If you have suggested topics you’d like to see covered, or would like to contribute a guest blog, please let me know. Leave a comment below or send me a note here in the IT Center.


Also, be sure to sign up to receive email newsletters and communications from the Intel Health & Life Sciences team and be part of the conversation.


What questions do you have?


Julie Malloy is the Life Sciences Industry Manager at Intel Corporation. See her other posts here.


Genomic testing is becoming mainstream for patients. Companies like 23andMe are bringing affordable testing options to the market and allowing patients to learn more about their genetic makeup, which can lead to better care and treatment.


If you ask researchers about this trend, they’ll most likely say that the real value of more people participating in genomic testing is the opportunity to test their theories through analysis of de-identified genomic data from specific patient sets.
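“De-identified” is worth unpacking. As a hypothetical sketch of what a first step can look like (real pipelines, such as those following HIPAA Safe Harbor or expert determination, do far more than this), here is one way direct identifiers might be stripped and the patient ID replaced with a one-way pseudonym before data ever reaches researchers.

```python
# Hypothetical sketch of a first de-identification step: drop direct identifiers and
# replace the patient ID with a salted one-way hash. Real de-identification pipelines
# (e.g., HIPAA Safe Harbor or expert determination) involve considerably more than this.
import hashlib

SALT = "keep-this-secret-and-separate"   # assumption: managed outside the research data set
DIRECT_IDENTIFIERS = {"name", "email", "street_address", "phone"}

def deidentify(record: dict) -> dict:
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    raw_id = cleaned.pop("patient_id")
    cleaned["pseudonym"] = hashlib.sha256((SALT + raw_id).encode()).hexdigest()[:16]
    return cleaned

record = {"patient_id": "12345", "name": "Jane Doe", "email": "jane@example.com",
          "year_of_birth": 1972, "variant_count": 4_321_000}
print(deidentify(record))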


In the above video, Steve Schwartz, vice president of business development and strategy at 23andMe, talks about the company’s approach to bringing genetic testing to the masses, and how data can be useful in research that ultimately can improve patient care.


Watch the clip and let me know what questions you have. What value do you see in the availability of more genomic data?

Below is a guest blog from Ketan Paranjape, director of personalized medicine at Intel.


There has been a lot of news recently about cloud deployments in life sciences and genomics. With the push toward taking genomics mainstream through clinical deployments, cloud computing may not be something you think about right off the bat. With all the privacy and security rules, like the EU's General Data Protection Regulation or the U.S. Health Insurance Portability and Accountability Act (HIPAA), you are naturally concerned and want to stay local and on-premises.


There is therefore a need for turnkey "appliances" that can operate independently, with or without the cloud. Last year, BioTeam and the Galaxy Project formed a strategic alliance and introduced the SlipStream Appliance: Galaxy Edition -- a high-performance, server-class device pre-loaded with a fully operational Galaxy analysis platform. By using SlipStream Galaxy, the average lab can save up to one month of deployment time, with start-up cost savings (typically charge-backs to the IT department) that are easily more than $20,000. The SlipStream Appliance is architected to deliver power, expandability, and affordability.
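For labs that do stand up a Galaxy instance, whether on a SlipStream appliance or elsewhere, analyses can also be driven programmatically through Galaxy’s standard API. The sketch below uses the open-source BioBlend client with a placeholder URL, API key, and file path; it is a generic illustration, not SlipStream-specific code.

```python
# Illustrative only: scripting a Galaxy server with the open-source BioBlend client.
# The URL, API key, and file path below are placeholders, not a real SlipStream endpoint.
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="http://slipstream.example.lab", key="YOUR_API_KEY")

history = gi.histories.create_history(name="ngs-run-example")            # new analysis history
upload = gi.tools.upload_file("reads/sample_R1.fastq.gz", history["id"]) # stage a FASTQ file

print(history["id"], upload["outputs"][0]["id"])
```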


Today, we are announcing a strategic partnership between BioTeam, Intel, and SGI to roll out a new version of the SlipStream Appliance. The system will contain two 10-core Intel® Xeon® E5-2600 v2 (Ivy Bridge) processors (20 cores total), 512 GB to 1 TB of ECC RAM, two 120 GB data center SSDs plus eight 4 TB 6 Gbps SAS enterprise HDDs, and dual-port 10GbE as standard. Details on the software and support categories for this appliance can be found here.


With the increasing throughput of data generation instruments, the dynamic landscape of computational tools, and the variability in analysis processes, it is challenging for scientists to work within the confines of a static infrastructure. At the Galaxy Community Conference this week in Baltimore, BioTeam will discuss some of these challenges and the technical advances they have been working on to build a more flexible Galaxy Appliance to support the changing compute and analysis needs of the scientific researcher. James Reaney, Senior Director at SGI, will also be giving more details on the SlipStream Appliance.


From a recent article I read by Joe Stanganelli, "Cloud Security FUD Drives Genomics Industry towards Cloud-in-a-Box": "Of course, the choice between cloud computing and on-premises processing is not mutually exclusive. Cloud security is a worry, but so are the scalability and cost of on-premises devices. Local processing consoles that can work independently of the cloud or be cloud-enabled offer the best (and worst?) of both worlds."


Regardless, any decision about whether to go to the cloud or the "anti-cloud" (or both) must involve serious cost-benefit analysis.


What questions do you have about cloud computing in life sciences?

Below is a guest blog from Ketan Paranjape, director of personalized medicine at Intel. He is speaking tomorrow, May 22, at the Big Data in BioMedicine Conference at Stanford University starting at 2 p.m. You can watch the live stream of the conference here and get an inside look at what’s going on with big data and bio IT. Follow the #bigdatamed hashtag on Twitter during the three-day conference.

Over the weekend I was perusing the latest edition of New York magazine and read this article about a cancer doctor who lost his wife to cancer. After I put my tablet down, I hugged my wife, grabbed the “honey-do list” from her hand, and got to work. Whether I was comparing the right mix of fertilizers for my lawn at the neighborhood home improvement store or driving my son to his tennis practice, all I could think about was: what could we have done to prevent what happened, and how can we also help the 9 million or so cancer patients worldwide?


Here at Intel, we are trying our best to answer that very question. We have laid out a vision for delivering “Personalized Medicine at the touch of a button…Everywhere…Everyday…and for Everyone…” I suppose you need an app for everything these days.


But you might wonder: how could a hardware company that makes “chips” for computers make a difference? The answer, from our vantage point, is Big Data Analytics. If we can create an ecosystem of hardware, software, and services players in healthcare and life sciences that generate, manage, share, and analyze data across the multiple silos of payers, government, providers, clinics, pharma, clinical trials, genomics, and wellness data, we should be able to help the key decision makers with the right amount of well-curated information.


So, starting with what we do best, we optimized software programs for analyzing data. We were able to cut the run time of Broad’s Genome Analysis Toolkit (GATK) from three days to one day and speed up Ayasdi’s Cure software by 400 percent. These reductions in run times and improvements in performance increase the number of tests you can do and help the researcher or clinician get to an answer more quickly.


We then built appliances – highly optimized hardware and software solutions that target specific genomics or clinical problems. The appliance with Dell reduces RNA-seq run times from seven days to three hours, and the Genalice appliance can map 42 whole genomes in 18 hours. When we start running out of machines or clusters in our own backyard, we go to the cloud. Working with Novartis, Amazon Web Services (AWS), Cycle Computing, and MolSoft, we were able to provision a fully secured cluster of 30,000 CPUs and complete the screening of 3.2 million compounds in about nine hours, compared to 4-14 days on existing resources.


We have also embraced the power of both Hadoop and in-memory analytics. Working with NextBio (now part of Illumina), we were able to get a 90 percent gain in throughput and 6x data compression, enabling researchers to discover biomarkers and drug targets by correlating genomic data sets. Working with SAP and Charité, we built a ‘real-time’ cancer analysis platform on SAP HANA that analyzed patient data in seconds compared to two days. This data set had 3.2 million data points per patient and up to 20 TB of data per patient.


These are just a subset of the projects we have completed with our broad range of ecosystem partners. For more details, please visit www.intel.com/healthcare/bigdata or drop me a line at Ketan.paranjape@intel.com. I’m also speaking tomorrow, May 22, at the Big Data in BioMedicine Conference at Stanford University starting at 2 p.m. You can watch the live stream of the conference here.


By the way, regarding the fertilizer: we decided on a good nitrogen mix (2 lbs of nitrogen per 1,000 sq. ft.), and my son moved from level 1 to level 1.5 in his class.


What questions do you have?


Wearable health IT devices have been a hot topic for the past year or so. To find out more about the promise of patient-generated health data and what CIOs need to be thinking about, Intel Health & Life Sciences General Manager Eric Dishman sat down with Dr. Andrew Litt from Dell, Dr. Bill Crounse from Microsoft, and Dr. Graham Hughes of SAS to discuss the advent of new wearable health IT devices and the potential impacts on patient care.


The above video is the third clip in a series from this conversation. See the other clips on making health IT data actionable and the benefits of health IT analytics.


What questions do you have about health IT wearable devices?


Making big data actionable is one of the key components of healthcare analytics. It’s on the minds of many CMIOs who are planning for the explosion of data coming into their infrastructures.


In the above second clip from our special roundtable discussion, Intel Health & Life Sciences General Manager Eric Dishman talks with Dr. Andrew Litt from Dell, Dr. Bill Crounse from Microsoft, and Dr. Graham Hughes of SAS about genomic data and how high performance computing and personalized medicine are making big data actionable. See the first clip from the conversation here.


Watch the above conversation and let us know how you are making big data actionable. What are the first steps that you took?


Look for additional clips from this discussion in the coming weeks as the panel addresses healthcare costs and wearable technology.
