
Intel Health & Life Sciences


The transition toward next-generation, high-throughput genome sequencers is creating new opportunities for researchers and clinicians. Population-wide genome studies and profile-based clinical diagnostics are becoming more common and more cost-effective. At the same time, such high-volume and time-sensitive usage models put more pressure on bioinformatics pipelines to deliver meaningful results faster and more efficiently.

 

Recently, Intel worked closely with Seven Bridges Genomics’ bioinformaticians to design the optimal genomics cluster building block for direct attachment to high-throughput, next-generation sequencers using the Intel Genomics Cluster solution. Though most use cases will involve variant calling against a known genome, more complex analyses can be performed with this system. A single 4-node building block is powerful enough to perform a full transcriptome analysis. As demands grow, additional building blocks can easily be added to a rack to support multiple next-generation sequencers operating simultaneously.

 

Verifying Performance for Whole Genome Analysis

To help customers quantify the potential benefits of the PCSD Genomics Cluster solution, Intel and Seven Bridges Genomics ran a series of performance tests using the Seven Bridges Genomics software platform. Performance for a whole genome pipeline running on the test cluster was compared with the performance of the same software platform running on a 4-node public cloud cluster based on the previous generation Intel Xeon processor E5 v2 family.

 

The subset of the pipeline used for the performance tests includes four distinct computational phases, sketched in the example after this list:

 

  • Phase A: Alignment, deduplication, and sorting of the raw data reads
  • Phase B: Local realignment around indels
  • Phase C: Base quality score recalibration
  • Phase D: Variant calling and variant quality score recalibration
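
To make those four phases concrete, here is a minimal Python sketch of a representative open-source toolchain (BWA, samtools, Picard, and GATK 3-era walkers). It is not the Seven Bridges Genomics pipeline; the tool choices, arguments, thread counts, and file names are illustrative assumptions only.

# Minimal sketch of a four-phase whole-genome pipeline using a representative
# open-source toolchain. NOT the Seven Bridges implementation; all tool
# invocations, arguments, and file names are illustrative assumptions.
import subprocess

def run(cmd):
    """Run one pipeline step and fail fast if it errors."""
    print(">>", cmd)
    subprocess.run(cmd, shell=True, check=True)

ref, r1, r2 = "ref.fa", "sample_R1.fastq.gz", "sample_R2.fastq.gz"
gatk = "java -jar GenomeAnalysisTK.jar"  # GATK 3.x-style invocation (assumed)

# Phase A: alignment, deduplication, and sorting of the raw reads
run(f"bwa mem -t 16 {ref} {r1} {r2} | samtools sort -o sorted.bam -")
run("java -jar picard.jar MarkDuplicates I=sorted.bam O=dedup.bam M=dup_metrics.txt")
run("samtools index dedup.bam")

# Phase B: local realignment around indels
run(f"{gatk} -T RealignerTargetCreator -R {ref} -I dedup.bam -o targets.intervals")
run(f"{gatk} -T IndelRealigner -R {ref} -I dedup.bam -targetIntervals targets.intervals -o realigned.bam")

# Phase C: base quality score recalibration
run(f"{gatk} -T BaseRecalibrator -R {ref} -I realigned.bam -knownSites known_sites.vcf -o recal.table")
run(f"{gatk} -T PrintReads -R {ref} -I realigned.bam -BQSR recal.table -o recal.bam")

# Phase D: variant calling and variant quality score recalibration
run(f"{gatk} -T HaplotypeCaller -R {ref} -I recal.bam -o raw_variants.vcf")
run(f"{gatk} -T VariantRecalibrator -R {ref} -input raw_variants.vcf -an QD -an FS -an MQ -mode SNP "
    "-resource:hapmap,known=false,training=true,truth=true,prior=15.0 hapmap.vcf "
    "-recalFile snp.recal -tranchesFile snp.tranches")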

 

The results of the performance tests were impressive. The Intel Genomics Cluster solution based on the Intel® Xeon® processor E5-2695 v3 family completed a whole genome pipeline in just 429 minutes versus 726 minutes for the cloud-based solution powered by the prior-generation Intel® Xeon® processor E5 v2 family.
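
A quick back-of-the-envelope check of those reported times (the figures are the ones quoted above; only the arithmetic is new):

# Sanity check of the reported whole-genome pipeline run times.
e5_v3_minutes = 429   # Intel Xeon E5-2695 v3 cluster (reported)
e5_v2_minutes = 726   # prior-generation E5 v2 cloud cluster (reported)

saved_minutes = e5_v2_minutes - e5_v3_minutes
speedup = e5_v2_minutes / e5_v3_minutes

print(f"Time saved: {saved_minutes} min (~{saved_minutes / 60:.1f} hours)")  # 297 min, ~5.0 hours
print(f"Speedup: {speedup:.2f}x")                                            # ~1.69x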

 

Based on these results, researchers and clinicians can potentially complete a whole genome analysis almost five hours sooner using the newer system. They can also use this 4-node system as a building block for constructing large, local clusters. With this strategy, they can easily scale performance to enable high utilization of multiple high-volume, next-generation sequencers.

 

For a more in-depth look at these performance tests, we will soon release a detailed abstract describing the workloads and system behavior in each phase of the analysis.

 

What questions do you have?

Dr. Peter White is the developer and inventor of the “Churchill” platform, and serves as GenomeNext’s principal genomic scientist and technical advisor.

 

Dr. White is a principal investigator in the Center for Microbial Pathogenesis at The Research Institute at Nationwide Children’s Hospital and an Assistant Professor of Pediatrics at The Ohio State University. He is also Director of Molecular Bioinformatics, serving on the research computing executive governance committee, and Director of the Biomedical Genomics Core, a nationally recognized microarray and next-gen sequencing facility that helps numerous investigators design, perform, and analyze genomics research. His research program focuses on molecular bioinformatics and high performance computing solutions for “big data,” including discovery of disease-associated human genetic variation and understanding the molecular mechanisms of transcriptional regulation in both eukaryotes and prokaryotes.

 

We recently caught up with Dr. White to talk about population scale genomics and the 1000 Genomes Project.

 

Intel: What is population scale genomics?

 

White: Population scale genomics refers to the large-scale comparison of sequenced DNA datasets of a large population sample. While there is no minimum, it generally refers to the comparison of sequenced DNA samples from hundreds, even thousands, of individuals with a disease or from a sampling of populations around the world to learn about genetic diversity within specific populations.

 

The human genome comprises approximately 3 billion DNA base pairs (nucleotides). The first human genome sequence was completed in 2006, the result of an international effort that took a total of 15 years. Today, with advances in DNA sequencing technology, it is possible to sequence as many as 50 genomes per day, making it possible to study genomics on a population scale.

 

Intel: Why does population scale genomics matter?

 

White: Population scale genomics will enable researchers to understand the genetic origins of disease. Only by studying the genomes of thousands of individuals will we gain insight into the role of genetics in diseases such as cancer, obesity, and heart disease. The larger the sample size that can be analyzed accurately, the better researchers can understand the role that genetics plays in a given disease, and from that we will be able to better treat and prevent disease.

 

Intel: What was the first population scale genomic analysis?

 

White: The 1000 Genomes Project is an international research project in which a consortium of over 400 scientists and bioinformaticians set out to establish a detailed catalogue of human genetic variation. This multi-million dollar project was started in 2008, and sequencing of 2,504 individuals was completed in April 2013. The data analysis of the project was completed 18 months later, with the release of the final population variant frequencies in September 2014. The project resulted in the discovery of millions of new genetic variants and successfully produced the first global map of human genetic diversity.

 

Intel: Can analysis of future large population scale genomics studies be automated?

 

White: Yes. The team at GenomeNext and Nationwide Children’s Hospital was challenged to analyze a complete population dataset compiled by the 1000 Genomes Consortium in one week as part of the Intel Heads In the Clouds Challenge on Amazon Web Services (AWS). The 1000 Genomes Project is the largest publicly available dataset of genomic sequences, sampled from 2,504 individuals from 26 populations around the world.

 

All 5,008 samples (2,504 whole genome sequences and 2,504 high-depth exome sequences) were analyzed on GenomeNext’s Platform, leveraging its proprietary genomic sequence analysis technology (recently published in Genome Biology) operating on the AWS Cloud powered by Intel processors. The entire automated analysis process was completed in one week, with as many as 1,000 genome samples being completed per day, generating close to 100TB of processed result files. The team found a high degree of correlation with the original analysis performed by the 1000 Genomes Consortium, with additional variants potentially discovered during the analysis performed using GenomeNext’s Platform.

 

Intel: What does GenomeNext’s population scale accomplishment mean?

 

White: GenomeNext believes this is the fastest, most accurate, and most reproducible analysis of a dataset of this magnitude. One benefit of this work is that it will enable researchers and clinicians using population scale genomic data to distinguish the common genetic variation discovered in this analysis from rare, pathogenic, disease-causing variants. As population scale genomic studies become routine, GenomeNext provides a solution through which the enormous data burden of such studies can be managed, analysis can be automated, and results can be shared with scientists globally through the cloud. Access to a growing and diverse repository of DNA sequence data, including the ability to integrate and analyze the data, is critical to accelerating the promise of precision medicine.

 

Our ultimate goals are to provide a global genomics platform, automate the bioinformatics workflow from sequencer to annotated results, provide a secure and regulatory compliant platform, dramatically reduce the analysis time and cost, and remove the barriers of population scale genomics.

The saying that “life sciences is like a puzzle” has never been more true than it is today. The life sciences are in the midst of a dramatic transformation as technology redefines what is possible for human health and healthcare. That’s why the upcoming Bio-IT World event in Boston, April 21-23, holds so much promise for moving the conversation forward and sharing knowledge that truly helps people.

 

As the show approaches, we’re excited to roll out a new resource for you that offers an optimized compendium of codes with benchmarks and replication recipes. When used on Intel®-based computing platforms, and in concert with other Intel® software tools and products, such as Intel® Solid-State Drives (Intel® SSDs), the optimized code can help you decipher data and accelerate the path to discovery.

 

Industry leaders and authors of key genomic codes have supported this new resource to ensure that genome processing runs as fast as possible on Intel®-based systems and clusters. The result has been significantly faster key genomic programs and new hardware and system solutions that bring genome sequencing and processing down from days to minutes.

 

Download codes

On the new resource page, you can currently download the following codes to run on Intel® Xeon® processors:

 

  • BWA
  • MPI-HMMER
  • BLASTn/BLASTp
  • GATK

 

If you’re looking for new tools to help handle growing molecular dynamics packages, which can span from hundreds to millions of particles, take advantage of these codes that are compatible with both Intel® Xeon® processors and Intel® Xeon® Phi™ coprocessors and allow you to “reuse” rather than “recode”:

 

  • AMBER 14
  • GROMACS 5.0 RC1
  • NAMD
  • LAMMPS
  • Quantum ESPRESSO
  • NWChem


Solve the cube

Finally, because life sciences is like a puzzle, look for a little fun and games at Bio-IT World that will test your puzzle solving skills and benefit charity.

 

If you’ll be at the show, be sure to grab a customized, genomic-themed Rubik’s Cube at the keynote session on Thursday, April 23, and join the fun trying to solve the puzzle after the speeches at our location on the show floor. Just by participating you will be eligible to win great prizes like a tablet, a Basis watch, or SMS headphones. Here’s a little Rubik’s Cube insight if you need help.

 

Plus, we’re giving away up to $10,000 to the Translational Genomics Research Institute (TGEN) in a tweet campaign that you can support. Watch for more details.

 

What questions do you have? We’re looking forward to seeing you at Bio-IT World next month.

 

Based on what we heard at Supercomputing last month, it’s clear that bio IT research is on the fast track and in search of more robust compute power.

 

In the above video, Michael J. Riener, Jr., president of RCH Solutions, talks about dynamic changes coming to the bio IT world in the next 24 months. He says that shrinking budgets in research and development mean that more cloud applications and service models will be implemented. When it comes to big data, next generation sequencing will heighten the need to analyze data, determine what data to keep and what to discard, and decide how to process it.

 

Watch the clip and let us know what questions you have. What changes do you want to see in bio IT research?

 

With SC14 kicking off today, it’s timely to look at how high performance computing (HPC) is impacting today’s valuable life sciences research. In the above podcast, Dr. Rudy Tanzi, the Joseph P. and Rose F. Kennedy Professor of Neurology at Harvard Medical School and the Director, Genetics and Aging Research Unit at the MassGeneral Institute for Neurodegenerative Disease, talks about his pioneering research in Alzheimer’s disease and how HPC is critical to the path forward.

 

Listen to the conversation and hear how Dr. Tanzi says HPC still has a ways to go to provide the compute power that life sciences researchers need. What do you think?

 

What questions about HPC do you have? 

 

If you’re at SC14, remember to come by the Intel booth (#1315) for life sciences presentations in the Intel Community Hub and Intel Theater. See the schedules here.

What better place to talk life sciences big data than the Big Easy? As temperatures are cooling down this month, things are heating up in New Orleans where Intel is hosting talks on life sciences and HPC next week at SC14. It’s all happening in the Intel Community Hub, Booth #1315, so swing on by and hear about these topics from industry thought leaders:

 

Think big: delve deeper into the world’s biggest bioinformatics platform. Join us for a talk on the CLC bio enterprise platform, and learn how it integrates desktop interfaces with high performance cluster resources. We’ll also discuss hardware and explore the scalability requirements needed to keep pace with the Illumina HiSeq X-10 sequencer platform, and with a production cluster environment based on Intel® Xeon® processor E5-2600 v3. When: Nov. 18, 3-4 p.m.

 

Special Guests:

Lasse Lorenzen, Head of Platform & Infrastructure, Qiagen Bioinformatics;

Shawn Prince, Field Application Scientist, Qiagen Bioinformatics;

Mikael Flensborg, Director Global Partner Relations, Qiagen Bioinformatics

 

Find out how HPC is pumping new life into the Living Heart Project. Simulating diseased states, and personalizing medical treatments, requires significant computing power. Join us for the latest updates on the Living Heart Project, and learn how creating realistic multiphysics models of human hearts can lead to groundbreaking approaches to both preventing and treating cardiovascular disease. When: Nov. 19, 1-2 p.m.

 

Special Guest: Karl D’Souza, Business Development, SIMULIA Asia-Pacific

 

Get in sync with scientific research data sharing and interoperability. In 1989, the quest for global scientific collaboration helped lead to the birth of what we now call the Internet. In this talk, Aspera and BioTeam will discuss where we are today with new advances in global scientific data collaboration. Join them for an open discussion exploring the newest offerings for high-speed data transfer across scientific research environments. When: Nov. 19, 2-3 p.m.

 

Special Guests:

Ari E. Berman, PhD, Director of Government Services and Principal Investigator, BioTeam;

Aaron Gardner, Senior Scientific Consultant, BioTeam;

Charles Shiflett, Software Engineer, Aspera

 

Put cancer research into warp speed with new informatics technology. Take a peek under the hood of the world’s first comprehensive, user-friendly, and customizable cancer-focused informatics solution. The team from Qiagen Bioinformatics will lead a discussion on CLC Cancer Research Workbench, a new offering for the CLC Bio Cancer Genomics Research Platform. When: Nov. 19, 3-4 p.m.

 

Special Guests:

Shawn Prince, Field Application Scientist, Qiagen Bioinformatics;

Mikael Flensborg, Director Global Partner Relations, Qiagen Bioinformatics

 

You can see more Intel activities planned for SC14 here.

 

What are you looking forward to seeing at SC14 next week?

 

The promise of personalized medicine relies heavily on high performance computing (HPC). Speed and power influence the genome sequence process and ultimately patient treatment plans.

 

With the SC14 Conference coming up next month, we caught up with Carlos Sosa, high performance computing architect at Cray, Inc., to hear his thoughts on the state of HPC. In the above video clip, he says that personalized medicine is on the way but that HPC technology needs to be more robust to answer questions quickly for patients and doctors.

 

He cites a University of Chicago workflow that used parallel machines to sequence genomes and performed 47 years of research in just 51 hours as an example of moving toward personalized medicine capability.

 

Watch the clip and let us know what questions you have about HPC and personalized medicine. What are you seeing?

 

In the above video, Cycle Computing CEO Jason Stowe talks about the strong disconnect that exists between research and clinical analysis. He says the current challenge in bio IT is to analyze data, make sense of it, and do actionable science against it.

 

He shares an example of a 156,000-core workload run in eight regions of the globe that produced 2.3 million hours of computational chemistry research (264 years’ worth) in just 18 hours. He says this capability will transform both access patterns and the kinds of research that pharmaceutical, life sciences, and healthcare companies are able to tackle when it comes to analyzing genomes.
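
A quick check of the arithmetic behind that claim, using only the figures quoted above:

# Sanity check of the reported scale of the Cycle Computing run.
core_count    = 156_000      # peak cores across eight regions (reported)
wall_hours    = 18           # wall-clock duration of the run (reported)
compute_hours = 2_300_000    # computational chemistry hours delivered (reported)

print(f"Equivalent single-machine time: ~{compute_hours / (24 * 365):.0f} years")  # ~263 years, in line with the "264 years" figure
print(f"Core-hours available in the window: {core_count * wall_hours:,}")          # 2,808,000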

 

Watch the clip and let us know what you think. What questions about research and clinical analysis do you have?

Below is the second in a series of guest blogs from Dr. Peter J. Shaw, chief medical officer at QPharma Inc. Watch for additional posts from Dr. Shaw in the coming months.

 

With all the recent advances in tablet technology, the way pharmaceutical sales professionals interact with health care providers (HCPs), and in particular doctors, has changed. Most pharmaceutical companies now provide their sales teams with touch screen tablets as their main platform for information delivery. The day of paper sales aids, clinical reprints, and marketing materials is rapidly fading. Doctors have less time to see sales professionals during their working day, and many institutions are placing increasing restrictions on access to doctors. Therefore, the pharmaceutical industry has to be ever more inventive and flexible in the way it approaches doctors and conveys the information needed to keep them up to date on pharmaceutical, biotech, and medical device advances.

 

  • How has this impacted the life of the sales professional?
  • How have pharmaceutical companies adapted to the changes?
  • To what extent has the use of mobile devices been adopted?
  • What impact has this had on the quality of the interaction with HCPs?
  • What are alternatives to the face-to-face doctor visit?
  • How have doctors received the new way of detailing using mobile technology?
  • What do doctors like/dislike about being detailed with a mobile device?
  • What does the future look like?
  • Are there any disadvantages to relying solely on mobile technology?

 

To answer some of these questions, and hopefully to generate a lively discussion on the future of mobile technology in the pharmaceutical sales world, I would like to share some facts and figures from recent research we conducted on the proficiency of sales reps using mobile devices in their interactions with HCPs, and the impact this has had on clinical and prescribing behaviors.

 

  • In tracking the use of mobile devices for the last three years, it is clear that there is variable use of mobile devices by sales professionals.
  • Where sales reps have only the mobile device, they use it in just 7 to 35 percent of interactions with HCPs.
  • The use of mobile devices increases with the duration of the interaction with HCPs, in that the device is used in almost all calls lasting over 15-20 minutes.
  • Many reps do not use mobile devices in calls under 5 minutes. Often this is due to the non-interactive nature of the content, or the awkwardness of navigating through required multiple screens before arriving at information relevant to that particular HCP.
  • We have data to show that where the mobile device is very interactive and the sales rep is able to use it to open every call, the call will be on average 5-7 minutes longer with the doctor than if it is not used.
  • In cases where doctors will take virtual sales calls, these calls are greatly enhanced if there is a two-way visual component. Any device used in virtual sales calls must have two-way video capability, as the HCP will expect to see something to back up the verbal content of the sales call.
  • Most doctors feel that the use of mobile technology in face-to-face calls enhances the interaction with sales reps provided it is used as a means to visually back up the verbal communication in an efficient and direct manner.
  • Screen size is the main complaint we hear from HCPs. Most say that where the rep is presenting to more than one HCP the screen needs to be bigger than the 10” that is on most of the devices currently used by reps.

 

The mobile device is clearly here to stay. HCPs use them in their day-to-day clinical practice and now accept that sales professionals will also use them. When the mobile device is expected to be used as the sole means of information delivery, more work needs to go into designing the content and making it possible for the sales professional to navigate to the information that is relevant to that particular HCP. All aspects of the sales call need to be on the one device: information delivery, signature capture and validation for sample requests, and the ability to email clinical reprints immediately to the HCP are just the start.

 

In part 2, we will look at how sales reps are using mobile devices effectively and the lessons to be learned from three years of data tracking the use of these devices and the increasing acceptance of virtual sales calls.

 

What questions do you have?

 

Dr. Peter J. Shaw is chief medical officer at QPharma Inc. He has 25 years of experience in clinical medicine in a variety of specialties, 20 years’ experience in product launches and pharmaceutical sales training and assessment, and 10 years’ experience in post-graduate education.


For genomic researchers, speed and cost drive their day-to-day data generation activities. When given a choice, most researchers will accept longer wait times for results if it means they can process more samples.

 

In the above video, Tim Fennell, director of bioinformatics for the genomics platform at The Broad Institute, talks about the organization’s genomic data generation and data processing pipelines, plus how that data is provided to researchers in the community. He says that the value in genomic research is how quickly and how inexpensively research analysis can be executed.

 

What do you think?  Does speed or cost win in your data analysis projects?

 

When it comes to bio IT, data is the key that drives progress. In the above video, Pek Lum, vice president of solutions and chief data scientist at Ayasdi, and Mikael Flensborg, director of global partner relations at CLC Bio, talk about how to make big data into small data so that it’s accessible to physicians and can be leveraged to tackle complex issues like cancer.

 

Watch the clip and let us know what questions you have about big data in bio IT.


Big Health

Posted by JULIE MALLOY Aug 18, 2014

Below is a guest post from Kyle H. Ambert, PhD, Intel Graph Analytics Operation.

 

Trends come and go, in the analytics world. First, everything is supercomputing, then everything is distributed computing. SQL. NoSQL. Hadoop. Hadoop! HADOOP! And then, Spark makes its way onto the scene, changing everything yet again. Navigating this alphabet soup of analytical spare parts is enough to make even the most devoted of data scientists wish they had listened to their respective mothers and become physicians.

 

As a graduate of Oregon Health & Science University School of Medicine, I lived at the forefront of where big data technology meets healthcare, while researching the biomedical and clinical applications of artificial intelligence, or “big health,” as I liked to refer to it. Like most data scientists, I found myself spending a great deal of my time there writing code simply to acquire data sets, format them in a sensible way, and remove uninformative or misleading information they may contain. This, most would agree, is what's referred to as "the essential pre-processing steps of data analysis," or, "the boring stuff," in technical parlance. The "development and application of analytical algorithms," or, "the reason I got into this business in the first place," was often relegated to an unfortunately modest fraction of my day.

 

An Experienced Programmer reading this is likely to observe, "well, the obvious solution to your problem is to write a software library abstracting out the repeated steps in your so-called boring stuff." Astute as ever, Experienced Programmer, but what of the ever-increasing population of domain experts who need to gain insights from their own data, but don’t write code? What of the physician who wants to examine the relative rates of diabetes diagnoses in their practice over time? Will you be the one to look the population geneticist writing a meta-analysis in the eye and say, "I'm sorry, but if you want to do a large-scale text-mining study of the publications in your field, you're going to have to learn to program on the streets"? I couldn't do it.

 

That's why, when I was given the opportunity to join Intel's Graph Analytics Operation to guide the development of the Intel Analytics Toolkit (IAT) for end users in biomedicine, I jumped at the chance. Graph analytics enables users to analyze data using methods that take into account the relationships inherent in their data. With IAT, we enable biomedical researchers and physicians to use this technology to gain insight from networks of biomedical information. Developed with scalability in mind, we've protected the user from the laborious steps of working with big data in a distributed environment, creating an intuitive user interface to a suite of powerful analytical tools.

 

The real power here, for the aspiring data scientist, is that all the tools needed for importing, cleaning, storing, and analyzing data are all in the same place—no more writing code to connect an xml parser to a database; no more figuring out how to write analyses that efficiently scale to big data, or that are happy to work in a distributed environment—we’ve taken care of that for you. This, we’ve found, drastically decreases the time spent in the monotonous steps of data analysis, letting analysts focus on understanding their results—the reason they got into their business in the first place.
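
The IAT’s own API isn’t reproduced here, so as a stand-in this sketch uses the open-source networkx library to show the kind of relationship-centric question graph analytics makes easy to ask; the patient/condition records are invented for illustration.

# Relationship-centric analysis of the kind graph analytics enables, sketched
# with the open-source networkx library. This is NOT the Intel Analytics
# Toolkit API; the toy patient/condition data below is invented.
import networkx as nx

G = nx.Graph()

# Bipartite graph: patient nodes linked to the conditions in their (toy) records.
records = {
    "patient_1": ["type2_diabetes", "hypertension"],
    "patient_2": ["type2_diabetes", "obesity"],
    "patient_3": ["hypertension", "heart_disease"],
}
for patient, conditions in records.items():
    for condition in conditions:
        G.add_edge(patient, condition)

# Which conditions co-occur with type 2 diabetes through shared patients?
diabetic_patients = set(G.neighbors("type2_diabetes"))
co_occurring = {
    cond
    for p in diabetic_patients
    for cond in G.neighbors(p)
    if cond != "type2_diabetes" and not cond.startswith("patient")
}
print("Conditions co-occurring with type 2 diabetes:", co_occurring)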

 

This month, we began a limited trial of the IAT, and we're partnering with university hospitals, private medical research organizations, and health insurance companies to better understand the needs of the biomedical and clinical communities, in terms of scalable data analysis. What we're already learning is that there is a huge need in the medical community for large-scale graph analytics, particularly when it comes to developing an integrated representation of heterogeneous data types—such as are found in electronic health records, or are used to inform Clinical Decision Support systems.

 

What questions do you have? To learn more about the IAT, watch this video, or see intel.com/graph. And, of course, if you have a biomedical data analysis problem you'd like to work on with us, or if you’d like to join the limited trial, leave a comment below.

Healthcare technology covers a wide continuum—from clinician/patient interaction using tablets and smartphones to research scientists analyzing genomic data to discover new personalized medicine strategies. All of these healthcare-related activities rely on computing power and are more connected than ever.

 

That’s why our community is expanding its coverage to include more content focused on the extensive range of healthcare activities that integrate technology for the betterment of patient care. Today, we are announcing the Intel Health & Life Sciences Community as the new umbrella name for this destination that will explore all things related to health IT and bio IT technology.

 

What does this mean for you? In addition to the healthcare IT device blogs and videos that have been the main focus the past few years, you’ll also see and hear about how genomic research, big data, wearables, and high performance computing impact clinical interactions and individual treatment plans. It’s vital that healthcare CIOs, administrators, clinicians, and researchers understand and connect the dots on these important topics, so our role is to help facilitate these conversations and provide educational stories that can help you in your jobs and daily workflows.

 

Issues that impact both health IT and bio IT—like security, cloud, mobility, interoperability—will continue to be dedicated topics that we’ll explore regularly with both Intel experts and your peers. In addition, the community will present blogs and videos from the bio IT ecosystem that focus on foundational industry topics such as compute power, sequencing, and personalized medicine.

 

Because of this new focus, our Twitter handle is changing to @IntelHealth to reflect the broader scope of content. Follow the handle for the latest links to new content and to interact with others in the community.

 

It’s a great time to be in the healthcare technology arena. We’re excited about this expansion of healthcare technology coverage and welcome your feedback and contributions. If you have suggested topics you’d like to see covered, or would like to contribute a guest blog, please let me know. Leave a comment below or send me a note here in the IT Center.

 

Also, be sure to sign up to receive email newsletters and communications from the Intel Health & Life Sciences team and be part of the conversation.

 

What questions do you have?

 

Julie Malloy is the Life Sciences Industry Manager at Intel Corporation. See her other posts here.

 

Genomic testing is becoming mainstream for patients. Companies like 23andMe are bringing affordable testing options to the market and allowing patients to learn more about their genetic makeups. This leads to better care and treatments.

 

If you ask researchers about this trend, they’ll most likely say that the real value of more people participating in genomic testing is that it provides an opportunity to test their theories through analysis of de-identified genomic data of certain patient sets.

 

In the above video, Steve Schwartz, vice president of business development and strategy at 23andMe, talks about the company’s approach to bringing genetic testing to the masses, and how data can be useful in research that ultimately can improve patient care.

 

Watch the clip and let me know what questions you have. What value do you see in the availability of more genomic data?

Below is a guest blog from Ketan Paranjape, director of personalized medicine at Intel.

 

There has been a lot of news recently about cloud deployments in life sciences and genomics. With the push toward taking genomics mainstream through clinical deployments, cloud computing may not be something you think about right off the bat. With all the privacy and security rules, like the EU's General Data Protection Regulation or the U.S. Health Insurance Portability and Accountability Act (HIPAA), you are naturally concerned and want to stay local and on-premises.

 

There is therefore a need for turnkey "appliances" that can operate independently, with or without the cloud. Last year, BioTeam and the Galaxy Project formed a strategic alliance and introduced the SlipStream Appliance: Galaxy Edition -- a high-performance, server-class device pre-loaded with a fully operational Galaxy analysis platform. By using SlipStream Galaxy, the average lab can save up to one month of deployment time, with start-up cost savings (typically charge-backs to the IT department) that easily exceed $20,000. The SlipStream Appliance is architected to deliver power, expandability, and affordability.
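
Because the appliance is pre-loaded with a standard Galaxy server, it should be scriptable through Galaxy's REST API. Below is a minimal sketch using BioBlend, the community Python client for that API; the URL, API key, and file name are placeholders, not values from the SlipStream documentation.

# Minimal sketch of driving a Galaxy server through its REST API with BioBlend.
# The URL, API key, and file name are placeholders for illustration only.
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="http://slipstream.example.org", key="YOUR_GALAXY_API_KEY")

# Create a working history and upload a FASTQ file into it.
history = gi.histories.create_history(name="NGS run")
upload = gi.tools.upload_file("sample_R1.fastq.gz", history["id"])

# List the workflows installed on the server; how to invoke one depends on
# the BioBlend version, so that step is left out of this sketch.
for wf in gi.workflows.get_workflows():
    print(wf["id"], wf["name"])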

 

Today, we are announcing a strategic partnership between BioTeam, Intel, and SGI to roll out a new version of the SlipStream Appliance. The system will contain two 10-core Intel® Xeon® E5-2600 v2 ("Ivy Bridge") processors (20 cores total), 512GB to 1TB of ECC RAM, 2x 120GB data center SSDs plus 8x 4TB SAS 6Gbps enterprise HDDs, and dual-port 10GbE as standard. Details on the software and support categories for this Appliance can be found here.

 

With the increasing throughput of data generation instruments, the dynamic landscape of computational tools, and the variability in analysis processes, it is challenging for scientists to work within the confines of a static infrastructure. At the Galaxy Community Conference this week in Baltimore, BioTeam will discuss some of these challenges and the technical advances they have been working on to build a more flexible Galaxy Appliance to support the changing compute and analysis needs of the scientific researcher. James Reaney, Senior Director at SGI, will also be giving more details on the SlipStream Appliance.

 

From a recent article I read by Joe Stanganelli, "Cloud Security FUD Drives Genomics Industry towards Cloud-in-a-Box": "Of course, the choice between cloud computing and on-premises processing is not mutually exclusive. Cloud security is a worry, but so are the scalability and cost of on-premises devices. Local processing consoles that can work independently of the cloud or be cloud-enabled offer the best (and worst?) of both worlds."

 

Regardless, any decision about whether to go to the cloud or the "anti-cloud" (or both) must involve serious cost-benefit analysis.

 

What questions do you have about cloud computing in life sciences?
