As Bio-IT World approaches next week, we are sharing a pre-show guest blog series from industry experts on trends you can expect to hear about at the event. Below is a guest contribution from Phil Eschallier, director of managed services at RCH Solutions.


In supporting research computing in the life and material sciences, it’s clear that most pharmas and biotechs are pushing toward hosting and computing in the cloud. Is this prudent, wise, or strategic? Let’s answer, for now, with a definite “maybe.” Or perhaps the cloud is better viewed as a tool in the arsenal, not a panacea.


Let’s face it, the cloud is alluring for many reasons: it offers the utmost in flexibility with essentially no start-up (capital) costs; it provides predefined (already engineered) services; it exposes APIs that facilitate the automation of provisioning and scaling; and one can use a company credit card to expedite (or perhaps skirt) the procurement process. Lastly, provisioning servers through global IS organizations is often measured in “months,” while provisioning in the cloud can be measured in “days” or “weeks.”
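
To make that API point concrete, here is a minimal sketch of programmatic provisioning using the AWS boto3 SDK. It is an illustration only: the AMI ID, instance type, and tag values are hypothetical placeholders, and other providers expose equivalent APIs.

```python
# Minimal sketch: provisioning a compute node through a cloud API (AWS boto3).
# The AMI ID, instance type, and tag below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical pre-built analysis image
    InstanceType="r5.4xlarge",        # a memory-heavy node for an analysis run
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "project", "Value": "analysis-pilot"}],
    }],
)

print("Provisioned:", response["Instances"][0]["InstanceId"])
```

A few lines of code and a credit card replace a months-long procurement cycle, which is precisely the allure.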


Clearly, where temporary scale is the deciding factor, traditional computing hosted in corporate data centers under the CAPEX procurement model cannot compete with the cloud. But when demand is identified as an ongoing need, Gartner, US Government News, and others tell us that the cloud is notably more expensive (though the various sources reporting on cloud expense are not aligned). After all, cloud providers aren’t magic: they have to purchase compute, network, and storage at scale, amortize that spend over a defined period of time, and then sell it to others while still making a profit. When you pay for a full-time resource, the price tag has to include the cloud provider’s capital and operational costs plus their profit.
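
To see the amortization argument in numbers, consider this back-of-envelope sketch. Every figure in it is a hypothetical placeholder chosen to show the shape of the comparison, not a quote from any provider.

```python
# Back-of-envelope cost of one steadily used compute node:
# on-prem (CAPEX amortized over its service life, plus OPEX) vs. cloud on-demand.
# ALL figures are hypothetical placeholders, not real prices.

server_capex = 20_000.00      # purchase price of a comparable server (USD)
service_life_months = 48      # amortization period
onprem_opex_month = 250.00    # power, cooling, space, admin share (USD/month)

cloud_rate_hour = 1.50        # on-demand rate for a similar instance (USD/hour)
hours_per_month = 730         # running full-time, ~24 x 365 / 12

onprem_month = server_capex / service_life_months + onprem_opex_month
cloud_month = cloud_rate_hour * hours_per_month

print(f"On-prem: ${onprem_month:,.2f}/month")   # ~$666.67
print(f"Cloud:   ${cloud_month:,.2f}/month")    # ~$1,095.00
# With these made-up numbers, the full-time cloud node costs roughly 1.6x
# the amortized on-prem node; the provider's CAPEX, OPEX, and profit are
# baked into the hourly rate.
```

For intermittent use the comparison flips, of course, because the on-prem server costs the same whether or not it is busy.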


Will the move to the cloud always yield faster results? If your data remains within corporate firewalls for security or legal reasons, moving data (especially big data) to cloud compute platforms adds time to analysis runs and complicates the security model. And where is your data? Is it easier to bring your data to the compute, or to bring the compute to your data? And what happens to any intellectual property pushed to the cloud after the compute jobs are completed?
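
A quick transfer-time estimate shows why “where is your data?” matters. The dataset size and link speed below are hypothetical.

```python
# How long just to move the data to the cloud? (Hypothetical figures.)
dataset_tb = 50        # e.g., an imaging or sequencing archive, in TB
link_gbps = 1.0        # effective WAN bandwidth, in Gbit/s

bits = dataset_tb * 1e12 * 8           # decimal terabytes -> bits
seconds = bits / (link_gbps * 1e9)
print(f"{seconds / 3600:.0f} hours (~{seconds / 86400:.1f} days)")
# ~111 hours, about 4.6 days, before the first analysis cycle can even
# start -- and the clock runs again if results must come back on-premises.
```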


“Vendor lock-in” has to be a factor in defining cloud strategy. Once entrenched in a vendor’s cloud offerings, it will not be easy (or cheap) to migrate to another provider.


Finally, are those wielding credit cards in the business positioned to cost-effectively engineer solutions in the cloud? Some applications benefit from more CPU, others from faster storage, more memory, or network tuning; some need performance from every facet of the underlying infrastructure. A common example is an application hamstrung by disk I/O. It’s cheap and we have a credit card, so shall we just spin up another VM? Ultimately, those paying the credit card bills may want confidence that whatever was “spun up” was used well.
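
A simple Amdahl-style model shows the trap. The 80/20 split between I/O wait and computation below is a hypothetical workload profile; real numbers would come from profiling first (with iostat, for example).

```python
# If a job spends most of its time waiting on disk, buying more CPU
# (a bigger or additional VM) barely helps. Hypothetical 80/20 profile.
io_fraction = 0.80     # fraction of runtime stuck waiting on disk I/O
cpu_fraction = 0.20    # fraction of runtime actually computing

for cpu_speedup in (1, 2, 4, 8):
    total = io_fraction + cpu_fraction / cpu_speedup
    print(f"{cpu_speedup}x CPU -> overall speedup {1 / total:.2f}x")
# Even 8x the CPU yields only ~1.21x overall; the credit-card spend went
# to the wrong resource. Faster storage would move the needle here.
```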


The cloud is a tool in the toolbox. But if a business relies on heavy computational cycles or big data, it may not yet be time to promote the cloud from one tool to the entire toolbox. The cloud can serve businesses needing a public web presence or web services outside the corporate firewall. It can be a fantastic platform on which to prototype or pilot solutions, and a fiscally responsible option for intermittent compute needs or when the needed scale varies unpredictably. However, if needs are well defined over time or intellectual property is a concern, computing in the controlled environment of the corporate data center should be more cost-effective and secure than the cloud.


Not sure how to proceed? Consider engaging a subject matter expert before deciding between the cloud and the corporate data center -- a small up-front cost should help ensure that the money budgeted for computing solutions is well spent.


What questions do you have?