
In enterprise environments, people are getting serious about cloud computing. An IDC survey found that 44 percent of respondents were considering private clouds. So what’s holding people back? In a word: security. To move to a cloud (private or public) environment, you must be sure you can protect the security of applications and the privacy of information.


These requirements are particularly rigid if you are subject to PCI DSS (Payment Card Industry Data Security Standard) regulations for credit card transactions or HIPAA (Health Insurance Portability and Accountability Act) regulations for medical records. Compliance depends on your ability to maintain the privacy of the information, generally through isolation of storage systems, networks, and virtual machines.


To achieve this level of security, an “air gap” is often used to ensure sensitive systems are isolated. This approach works but severely limits your flexibility and ability to adapt to changing conditions. So perhaps we should consider instead a “virtual air gap.” Let’s look at how you might maintain this virtual separation of systems.


Storage isolation: One way to implement storage isolation is to encrypt data both in motion and at rest in the cloud environment. Another best practice is striping data across systems: blocks of data are broken into multiple pieces that are spread over disk drives in different administrative zones. This helps protect you from rogue admins, who could access only a fraction of a file rather than the whole.
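The striping idea can be sketched in a few lines of Python. This is a minimal illustration, not a storage implementation: a real system would also encrypt each stripe (e.g., with AES) and add redundancy, neither of which is shown here.

```python
def stripe(data: bytes, zones: int) -> list[bytes]:
    """Split data round-robin across administrative zones.

    Any single zone holds only every Nth byte, so a rogue admin
    in one zone sees an unreadable fraction of the file.
    """
    return [data[i::zones] for i in range(zones)]

def reassemble(stripes: list[bytes]) -> bytes:
    """Interleave the stripes back into the original data."""
    out = bytearray()
    for i in range(max(len(s) for s in stripes)):
        for s in stripes:
            if i < len(s):
                out.append(s[i])
    return bytes(out)

# Illustrative record only; real deployments stripe at the block level.
record = b"4111-1111-1111-1111 expires 12/29"
pieces = stripe(record, 3)
assert reassemble(pieces) == record
```

Because each stripe lands in a different administrative zone, compromising one zone yields only a third of the bytes of any file.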


Network isolation: Sensitive applications should be placed on a controlled VLAN. You then put mechanisms in place to monitor the configuration of routers and switches to verify that no unauthorized changes have taken place.
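One way to monitor for unauthorized router or switch changes is to keep a baseline digest of each device's approved configuration and periodically compare it against the running config. The sketch below uses hypothetical device names and stubs out config retrieval; in practice you would pull the running config over SSH or NETCONF.

```python
import hashlib

APPROVED_CONFIG = b"vlan 42\n interface gi0/1 switchport access vlan 42\n"

# SHA-256 digests of each device's config, captured at the last
# authorized change window. Device name and config are hypothetical.
BASELINE = {
    "core-switch-1": hashlib.sha256(APPROVED_CONFIG).hexdigest(),
}

def fetch_running_config(device: str) -> bytes:
    # Stubbed for illustration; a real monitor would retrieve the
    # live config from the device over SSH/NETCONF.
    return APPROVED_CONFIG

def audit(devices: dict) -> list:
    """Return the names of devices whose running config no longer
    matches the approved baseline digest."""
    drifted = []
    for name, expected in devices.items():
        digest = hashlib.sha256(fetch_running_config(name)).hexdigest()
        if digest != expected:
            drifted.append(name)
    return drifted

print(audit(BASELINE))  # an empty list means no unauthorized changes
```

Any nonempty result is a signal that the VLAN boundary may have been altered outside the change process and should trigger an alert.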


Virtual machine isolation: Virtual machines implement the “air gap,” but the quality of the gap is only as good as the hypervisor version and its configuration. How can cloud providers prove that they are running the expected versions on the expected hardware? A hardware-based root of trust is a powerful tool for this challenge: it provides a hardware-level mechanism to attest to the configuration of the hypervisors and to enable the isolation and safe migration of virtual machines to other trusted platforms.
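The attestation logic can be illustrated with a simplified sketch. A real hardware root of trust (such as a TPM) returns cryptographically signed measurements; here the signatures are omitted, and the component names and digests are hypothetical, to show only the compare-against-known-good check and the migration gate it enables.

```python
import hashlib

# Expected measurements for the approved platform build.
# The component names and inputs are illustrative, not real digests.
EXPECTED = {
    "bios": hashlib.sha256(b"bios-1.2").hexdigest(),
    "hypervisor": hashlib.sha256(b"hv-5.0").hexdigest(),
}

def attest(measurements: dict) -> bool:
    """A platform is trusted only if every measured component
    matches its expected digest."""
    return all(measurements.get(k) == v for k, v in EXPECTED.items())

def allow_migration(source_trusted: bool, dest_measurements: dict) -> bool:
    # Permit a VM to move only between platforms that both attest cleanly.
    return source_trusted and attest(dest_measurements)
```

The key point is that the decision to admit a hypervisor, or to accept a migrating VM, is gated on evidence rooted in hardware rather than on the software's own claims.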


Audits: Having a sound security practice is good, but in reality we have to conduct audits that sample processes and technology at a point in time. Standards such as ISO 27002 for information security and SAS 70 for maintenance of internal controls can help. The Cloud Security Alliance also maintains a solid collection of best practices for security in the cloud.


At a high level, these are just some of the steps you can take to implement and maintain a “virtual air gap.”

Over the past decade I have worked with many of the great technologists and companies of the x86 virtualization era on developing new innovations in virtualization technology. This work has been very rewarding, introducing new technologies such as Intel VT, Intel VT for Connectivity, and Intel VT for Directed I/O, along with a host of lesser-known but equally important virtualization technologies from Intel. I have seen the rapid rise of BFBC (Bigger, Faster, Better, Cheaper) technologies that improve the performance of virtualization while delivering industry breakthroughs in data center and client efficiency. During this time, many of us began to research the “Next Big Thing,” now known as Cloud Computing. In many of those early discussions, some of us argued that Cloud Computing was merely an extension of the virtualization “foundation”: the next phase of virtualization BFBC, if you will. However, as we embark on the next decade in computing, I now believe we were mistaken. Cloud computing is not merely an evolutionary step in the development of virtualization technologies; it is a paradigm shift in compute architecture. Let me explain why I believe building a “virtualization foundation” was a single step in a direction, not a foundational element. More precisely, virtualization is just one step on our computing journey, one of many secrets still to unearth.


At the core of virtualization's value lies utilization of system resources. Many of the key virtualization technologies provide increased efficiency in the use of more CPU cores, more I/O bandwidth, and more memory channels, while reducing the performance overhead of the hypervisors needed to deploy virtualization effectively. In essence, BFBC for the many-core era in which we live today. To deliver compute efficiency, the industry, led by Intel, has delivered more compute, memory, and I/O capability with each generation of our technologies. In some cases our competition has led the way and Intel has worked quickly to catch up, though I certainly believe we have been leading for the last several years. However, utilization-driven design methodologies can have unintended consequences. Storage replication, virtual machine management, and rapid application deployment create their own expenses for IT organizations that had a firmer grasp on these issues in the client-server era (one server, one OS, one application). Did the industry intend to create these new concerns when we developed the initial hypervisor and VM management technologies? Not at Intel, nor at most of the industry leaders I have the privilege to work with every day. Yet unintended consequences lead to new opportunities, new challenges, and, in this case I believe, a whole new era of computing.


Despite the investment markets' “irrational exuberance” for all things Cloud, there are some fundamental “truths” the investment community has gotten correct. Cloud Computing is a VERY BIG DEAL. It will change the way data centers are deployed for the next decade. In the coming years, Cloud Computing and virtualization will change the way clients and applications migrate from hardware platform to hardware platform. I believe Intel Architecture has a distinct advantage in its programming flexibility to scale from handheld to data center. However, despite Intel's vision, investment, and commitment to Cloud Computing, our work will go far beyond virtualization and utilization technologies. Our journey (an overused term in Cloud, by the way) has just begun. Cloud is going to force us to reexamine ourselves as a company and reinvent ourselves in what is increasingly, as Intel CEO Paul Otellini has made clear, a Compute Continuum. Is that enough? Is it sufficient to deliver seamless migration of virtual machines from handhelds to data centers? For all users around the world (over 4 billion by the end of this decade; see my previous blog on predictions for the next decade) to access their user environments regardless of the manufacturer, the device, or the network topology? Is that Cloud Computing? Well, it's a start.


Like virtualization, Cloud Computing's next generation of solutions and technologies will have unintended consequences that will continue to force us to reexamine our design methodologies in silicon, software, and systems management. These unintended consequences, and the research and development invested to examine their effects on the Compute Continuum, will determine Cloud Computing's future. BFBC has been an industry trait and a key driver of our growth, success, and profitability. Beyond Cloud Computing, beyond virtualization, beyond Moore's Law, beyond Metcalfe's Law, lies a new frontier in the Compute Continuum that will once again show us that our “foundation” was tenuous and fleeting, and force us to move forward, one step at a time.



Let me know what you think; your thoughts, comments, and opinions are always welcome.
