So I admit it .. at heart I'm a geek.  I love geeky things.. High performance computing, workstation applications that do dynamic 3D modelling, HDTV.  But I am also a recovering academic.. I have studied and taught marketing and strategy for about 10 years.. Love that too.

 

Now.. I spend a great deal of my time.. travelling around and talking to people about the really cool things they do, and how Intel might help them.  So.. yep.. I'm a wandering geek.  Last year: Beijing, Shanghai, Tokyo, all over the US, London, Paris, Hamburg, Barcelona, Munich, etc.. Mostly northern hemisphere... have to work on that.

 

I am thrilled to see the amazing things people are doing with our technology.  The guy modelling race cars, the folks making football helmets safer (met Drew Brees at SC10), the folks who are reinventing medicine, finding new sources of energy (and underutilized old ones).. weather modelling and forecasting in the Pacific Rim... someday someone will take on snow removal in Boston with HPC... but not today.

 

I was really happy to see Dr. Stephen Wheat get listed as a person to watch in HPC.  I assume HPCwire meant that metaphorically...  While somewhat visually unpretentious, Dr. Wheat is a passionate advocate for his industry, his company and his country.  There are interesting people to FOLLOW in this industry.. I am blessed to work with a cadre of them.

 

Congrats Dr. Wheat.. see you around the world.

Photo credit: Tom Metzer.. "A Nice Night in Tokyo." Spargo left.. geek right.

Choosing the right processors for a power-hungry data center is serious business. And for Indiana University, UGF, and Granarolo, the serious choice was the Intel® Xeon® processor 7500 series.

 

Indiana University’s Information Technology Services group supports 110,000 students and faculty. Looking to replace all of its existing servers, the University made an exhaustive year-long search for the power and flexibility it needed. The choice was clear: Intel Xeon processor 7500 series.

 

“New features that Intel came up with led us down the Intel Xeon processor 7500 series route,” said Mike Floyd, chief system architect at Indiana University. “Eight-core technology, DDR, the RAS technologies, reliability and availability for a server of any type is just paramount. I mean, if we’re down, they don’t get to teach.”

 

Italian financial services group Unipol Gruppo Finanziario (UGF) had grown through a series of mergers and acquisitions to the point where its data center needed a total overhaul. It decided to create a virtual server farm across physical environments, benchmarking the Intel Xeon processor 7500 series for its performance and virtualization potential.

 

“This Intel Xeon processor delivers much more than you expect,” explained Morgan Travipf of UGF. “In fact, it is closer to a RISC processor in terms of recovery functionality and high reliability and it runs smoothly 24/7.”

 

Granarolo, one of the biggest names in the Italian food industry, needed to implement a cloud-based platform for communication with and management of its external supply chain members. It implemented servers powered by Intel Xeon processors 7540 and 5540. With a more effective process management model, Granarolo can deliver higher-quality products to market faster than any of its competitors.

 

“The Intel® technology-based solution we have adopted has allowed us to achieve high quality and immediate results in our data analysis,” said Dr. Andrea Breveglieri, general manager of Granarolo.

 

For the whole story, watch our new Indiana University video and read our new UGF and Granarolo business success stories. As always, you can find these, and many more, in the Intel.com Reference Room and IT Center.

In last week's post, we looked at a few of the key architectural requirements for a scale-out storage infrastructure, including a unified 10Gb Ethernet (10GbE) network that supports diverse protocols. Today, we’ll look at some of the business benefits of scale-out storage.

 

Lower capital cost—Scale-out storage arrays typically cost less than enterprise-class storage. On a price/terabyte basis, some scale-out arrays sell for half the price of conventional enterprise storage systems. When you’re frequently adding storage to keep pace with data growth, those upfront savings add up.

 

Lower operational costs—Scale-out storage architectures can deliver operational savings via management simplicity. With a unified 10GbE network, you have fewer protocols to manage and fewer management tools to buy, use, and maintain. And the tools you use tend to be lower-cost, thanks to open standards.

 

High-availability backup and recovery—Scale-out storage can provide continuous high-availability backup and recovery, as you would get with more expensive storage. You’re just getting the benefits at a better price. Scale-out storage allows you to keep your backup data more accessible than it would be with conventional tape-based backup and an offsite data infrastructure. With near-real-time data, you’re poised to quickly bounce back from a failure or a disaster. Think hours instead of days.

 

Cloud benefits—Scale-out storage allows you to deploy a private cloud that delivers the cost benefits that Tier 1 public cloud providers realize. And when you have your storage in a cloud environment, you are better positioned to eventually move non-critical content to a public cloud. All that video you’re storing? How about moving it to a public cloud?

 

Examine scale-out storage solutions from EMC, NetApp, Compellent, and others. You may find some surprising results… and if you don’t, let us know why not.

What do the phrases “My data center power bill is too high” and “We have an energy crisis in our data center” have in common?  The answer is that each gets the meaning of the words POWER and ENERGY wrong. The two terms are so commonly misused that it can be hard to tell what people are actually talking about. I thought I’d write down some thoughts about what they mean, or should mean, to the CIO.

 

Why should the CIO care? Because both POWER and ENERGY drive data center total cost of ownership (TCO) along parallel but different tracks. Understanding the difference helps you distinguish the cost drivers: you'd do one thing to fix a power problem in a data center, and quite another to reduce energy costs.

 

Why does the CIO care about POWER?  Because POWER drives capital cost. Power is (roughly) how many electrons you are pushing through wires, into transistors, and ultimately turning into heat that needs to be dissipated. More electrons means more POWER. The greater the POWER, the bigger the substation, the more capacity your UPS system will need, the more wiring infrastructure you need and, likely, the more servers you will be installing. The greater the POWER, the more air handling equipment you will need. Think of the cost of POWER as the cost of capacity; this typically runs $5 to $15 per watt for a modern data center. The catch? Once you buy the capacity, you have it (and pay for it) whether you use it or not. On the other hand, if you run out of capacity you’ll need to buy more, which is expensive and possibly disruptive to business.

 

Why does the CIO care about ENERGY? Because your ENERGY bill is likely the biggest operational cost line item for your data center. People often use a rule of thumb like “a Watt’s a buck.” It’s a nonsense statement, but what they likely mean is that a Watt consumed over a year (which equates to about 8.8 kWh of ENERGY) times a nominal ENERGY cost of $0.11/kWh is $0.97 - about a buck. Unlike POWER, you only pay for the ENERGY you use. So, for instance, by refreshing your data center with more energy-efficient modern servers, you should see a big decrease in your ENERGY bill even if your power infrastructure remains unchanged.
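To make that arithmetic concrete, here is a minimal Python sketch of the two cost streams described above. The $10-per-watt capacity figure, the $0.11/kWh rate, and the function names are illustrative assumptions, not official guidance.

# Back-of-the-envelope sketch of the POWER vs. ENERGY cost split described above.
# All input figures are illustrative assumptions, not measured data.

HOURS_PER_YEAR = 8760  # 365 days * 24 hours

def capacity_capex(peak_power_w, cost_per_watt=10.0):
    """One-time cost of provisioning POWER capacity (substation, UPS, wiring, cooling).
    $10/W is a placeholder inside the $5-$15/W range quoted above."""
    return peak_power_w * cost_per_watt

def annual_energy_cost(avg_power_w, price_per_kwh=0.11):
    """Yearly bill for the ENERGY actually consumed."""
    kwh = avg_power_w * HOURS_PER_YEAR / 1000.0  # 1 W for a full year is about 8.76 kWh
    return kwh * price_per_kwh

print(capacity_capex(1.0))       # ~$10 of capacity cost per provisioned watt
print(annual_energy_cost(1.0))   # ~$0.96 per year: roughly "a Watt's a buck"

The split is visible immediately: the first number is paid whether or not the watt is ever drawn, while the second scales with actual consumption.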

 

This kind of thinking can help us decode terms like Power Efficiency (which, when substituted for energy efficiency, I've never liked) and Energy Efficiency (which has a well-defined meaning). In the above context, Power Efficiency for a data center might reflect something like how much of the (paid-for) power infrastructure is actually being used - a metric for stranded capacity. Energy Efficiency is well defined as the amount of work produced by your data center per unit of energy consumed (and paid for).
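To keep the two metrics straight, here is a small, purely illustrative sketch; the figures and the notion of a "work unit" are hypothetical, not a formal definition.

# Sketch of the two metrics distinguished above; all figures are hypothetical.

def power_utilization(avg_draw_w, provisioned_capacity_w):
    """'Power efficiency' in the sense used above: how much of the paid-for
    capacity is actually in use. The remainder is stranded capacity."""
    return avg_draw_w / provisioned_capacity_w

def energy_efficiency(work_units, energy_consumed_kwh):
    """Work delivered by the data center per unit of energy consumed (and paid for)."""
    return work_units / energy_consumed_kwh

print(power_utilization(600000, 1000000))    # 0.6 -> 40% of the provisioned capacity is stranded
print(energy_efficiency(2500000, 5256000))   # e.g. transactions per kWh over a year (600 kW average draw)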

 

So there you have it. ENERGY and POWER. Although often used interchangeably in the vernacular, they have, in reality, very different meanings. Some might say the difference is splitting hairs. But the reality is they contribute in very different ways to the cost structure of the data center. And I would argue that being precise about what we mean when it comes to the cost structure of a multi-million dollar data center is far from an academic exercise.

 

So that’s why the difference between ENERGY and POWER matters to the CIO.

 

Does that make sense? Did I get it right?

Skyrocketing data growth shows no signs of letting up. An IDC study published in May predicted the amount of digital information created annually will grow by a factor of 44 through 2020.[1] The study also found that the number of files, images, records and other digital information containers will grow by a factor of 67, while the number of IT professionals will grow by just a factor of 1.4. Can you do more with less (for the fifth straight year)?

 

These and similar findings underscore the need for storage architectures that can scale out quickly, easily, and affordably to absorb ever-larger amounts of data. This is particularly true if you’re operating a cloud computing environment that can grow in unpredictable ways.

 

So let’s look at a few design guidelines for your scale-out storage architecture.

 

Storage interfaces—Given the nature of scale-out storage and the explosion of storage requirements, it’s imperative to have interfaces that connect easily and seamlessly to your network. These interfaces should allow you to scale out with network-attached storage (NAS), direct-attached storage (DAS), iSCSI, and the Network File System (NFS)—whatever is best for the application. However, make sure you provide enough flexibility and interoperability to move to the “next” innovation.

 

Unified networking—A converged network based on 10Gb Ethernet (10GbE) has become the de facto standard for scale-out storage infrastructure. 10GbE allows you to leverage multiple storage interfaces and protocols, including Fibre Channel over Ethernet (FCoE) for connectivity to existing storage area networks (SANs), as well as NAS, DAS, NFS and iSCSI. 10GbE also provides interfaces to Microsoft, Oracle, VMware and other application environments.

 

Storage-object flexibility—Your network must have the flexibility to move diverse storage objects freely throughout your environment. This includes a range of new object types, such as video and tagged pictures. You also need to be able to work with tools from Microsoft, Cisco, EMC, Google and a host of open-source providers.

 

Storage architectures have not changed fundamentally in well over a decade, but change is coming. These changes will maximize system administrator flexibility, tooling, and scale. I have always found that “understanding the future before it becomes today” provides you with the strongest set of tools to deliver for your customers (internal and external) and stakeholders.

 

Remember the adage, “There is no such thing as a free lunch”? In storage architectures that has never been more true than today. Capacity, flexibility and scale have a cost. Reduce your capital cost as much as possible today and invest in standards, people and process for tomorrow.

 

These architectural guidelines help ensure that your storage architecture won’t become a hindrance to the scaling and performance of your data center infrastructure—even when data growth is skyrocketing.

 

Are you ready? Let us know your thoughts….

 



[1] Source: “The Digital Universe Decade – Are You Ready?” IDC study by John Gantz and David Reinsel. May 2010.

The power to reduce rack space. The power to go green. The power to help customers instantly access and understand the information they need. Where does it all come from? For two companies, the answer is the Intel® Xeon® processor 5600 series.

 

One of India’s leading financial services companies, India Infoline Ltd. (IIFL) offers everything from equity research and derivatives trading to mutual funds, bonds, and investment banking. In a recent technology refresh and consolidation, the company migrated its servers to the Intel Xeon processor 5600 series and implemented Intel® Virtualization Technology, greatly enhancing its data center computing capability while taking a step forward in the company’s Green IT initiative.

 

“With the Intel Xeon processor 5600 series-based servers, we created 80 virtual servers that helped reduce rack space and energy usage in our data center,” explained Kamal Goel, vice president of technology for India Infoline Ltd.

 

Endeca offers search and business intelligence technology to help customers gain fast, easy access to information. To maximize the performance of its software, Endeca switched to servers with the Intel Xeon processor 5600 series, gaining 10 times the performance of its previous architecture while using less hardware.

 

“The Intel Xeon processor 5600 series provides the raw compute performance and large memory bandwidth needed to deliver outstanding throughput for the Endeca search engine,” says Adam Ferrari, CTO of Endeca.

 

For the whole powerful story, read our new India Infoline Ltd. and Endeca business success stories. As always, you can find these, and many others, in the Intel.com Reference Room and IT Center.


 

CERN openlab is a framework for evaluating and integrating cutting-edge IT technologies and services in collaboration with industry, focusing on future versions of the Worldwide LHC Computing Grid* (WLCG*). Working closely with leading industrial players, CERN acquires early access to technology before it’s available for the general computing market segment.

 

CERN openlab recently tested an Intel® Xeon® processor E7-4870-based server for use with its Large Hadron Collider* (LHC*) and infrastructure services and found a significant leap in performance. The Intel Xeon processor E7-4870 delivered throughput gains of up to 39 percent on the HEPSPEC06 benchmark. Energy efficiency also increased by up to 39 percent.


“We count heavily on continued improvements in throughput per watt to satisfy the ever-growing demands of the physicists associated with the Large Hadron Collider and its four experiments,” explained Sverre Jarp, CTO of CERN openlab.


To learn more, download our new CERN openlab business success story. As always, you can find more like this on the Intel.com Business Success Stories for IT Managers page.

 

 

*Other names and brands may be claimed as the property of others.

Dell has created a triple-play setup of servers that enables Intel Intelligent Power Node Manager right out of the box.  You get the best multi-core performance in the PowerEdge C Series servers with six-core Intel® Xeon® processor 5600 series CPUs, a myriad of choices in RAID configurations for local storage, and RAM scaling options up to 144 GB, even in a 1U rack footprint.

 

 


 

 

But I don't want to get ahead of myself...

 

 

Stay tuned for a review of these servers and how to setup Node Manager for power and thermal monitoring, power capping and group management.

If you’ve taken any psychology, you’ve probably come across Maslow’s hierarchy of needs. This landmark model, often illustrated with a pyramid, explores human needs, from the most basic physiological level to the ultimate state of self-actualization.

 

While I won’t promise you anything quite so lofty, I will suggest that Maslow’s hierarchal approach to understanding human needs creates a workable model for understanding the power management needs in your data center. So let’s walk through this power hierarchy of needs.

 

Efficient equipment—As a first step, use efficient servers, storage systems, and networking devices. For example, better motherboard designs can increase thermal efficiency and allow fans to run at lower speeds. And integrated power gates within a CPU can allow individual idling cores to drop to near-zero power consumption.

 

Efficient facility designs—Design and modify your data center facilities to conserve energy and make optimum use of your cooling and air handling systems. One basic step is to use hot and cold server aisles so you don’t mix hot exhaust air from servers with cool air from the chiller.

 

Consolidated systems—Use virtualization or other techniques to consolidate your environment to a smaller number of better-utilized systems. And then turn off the power to all those unused systems.

 

Power capping—Place power caps on underutilized systems. With the right tools and systems, you can throttle system and rack power usage based on expected workloads. This capability, in turn, can allow you to place more servers in your racks, to make better use of both power and space.
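As a simple illustration of the idea, here is a minimal Python sketch that splits an assumed rack power budget into per-server caps weighted by expected workload. The budget, server names, and weights are made up; the resulting values would be handed to whatever capping tool you actually use (Intel Node Manager, a management console, or similar).

# Minimal sketch: divide an assumed rack power budget into per-server caps,
# weighted by expected workload. All names and numbers are hypothetical.

RACK_POWER_BUDGET_W = 8000  # assumed limit from the rack PDU / branch circuit

def assign_caps(servers, budget_w=RACK_POWER_BUDGET_W):
    """Split the rack budget across servers in proportion to expected load."""
    total_weight = sum(s["expected_load"] for s in servers) or 1
    return {s["name"]: int(budget_w * s["expected_load"] / total_weight)
            for s in servers}

servers = [
    {"name": "node01", "expected_load": 0.9},
    {"name": "node02", "expected_load": 0.5},
    {"name": "node03", "expected_load": 0.2},
]

for name, cap_w in assign_caps(servers).items():
    print(f"{name}: cap at {cap_w} W")  # feed these values to your capping tool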

 

Workload optimization—Use intelligent workload placement to improve thermal dynamics and optimize energy usage. The idea is to dynamically move workloads to the optimal servers based on power policies.
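In the same spirit, here is a toy placement heuristic: put the next workload on the server with the most headroom under its power cap. The server names, caps, and draws below are hypothetical.

# Toy sketch of policy-based placement: choose the server with the most
# power headroom that can still absorb the workload. Figures are hypothetical.

def place_workload(workload_w, servers):
    """Pick the server whose cap leaves the most room after its current draw."""
    candidates = [s for s in servers if s["cap_w"] - s["draw_w"] >= workload_w]
    if not candidates:
        return None  # nowhere to place it without violating a power policy
    return max(candidates, key=lambda s: s["cap_w"] - s["draw_w"])

servers = [
    {"name": "node01", "cap_w": 450, "draw_w": 390},
    {"name": "node02", "cap_w": 450, "draw_w": 250},
]

target = place_workload(120, servers)
print(target["name"] if target else "no capacity available")  # -> node02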

 

If you take all of these steps, I can’t say that you’ll reach a self-actualized state, as in Maslow’s hierarchy. But I can promise that you’ll be operating a more efficient data center and making better use of your power dollars.

I’ve been doing a fair amount of travel lately for speaking engagements. One of the great things about all that time on the road is getting direct feedback and information about what people are thinking about.

 

One of the areas where it has helped is in crystallizing some ideas about actionable priorities for fixing data center energy efficiency. Most of the problems aren’t really all that new, but I think we sometimes get hung up on different interpretations of the same message. So I worked with some folks to come up with a kind of “simple” mantra we can all remember.

 

What did we come up with? Organize, Modernize, and Optimize.

 


 

 

Let me talk about the motivation and thinking for each one:

 

Organize: One of the biggest problems (still!) in the data center is getting started. Making a simple change to organizational responsibility, so that data center owners are accountable for the site’s energy consumption, is probably the most impactful long-term change a CEO or CIO can make. Beyond that, measuring costs, power consumption, and data center productivity are about all you need to start making the right decisions.

 

Modernize: The next step is to make sure the IT equipment in the data center is as efficient as possible. In many cases, this in and of itself can make a tremendous difference in energy consumption. For instance, Television Suisse Romande just reduced its number of servers by about 50 percent through consolidation.

 

Optimize: The last big step is making the entire data center run as efficiently as possible. Why do this last? Well, you can’t really optimize it if you can’t measure it, so you need to get organized first. And if you haven’t yet captured the opportunity to cut your server count by 50 percent, why take 10 percent off your PUE when, in a few months, once you do replace those servers, you’ll just have to do the work again? For example, in the Datacenter2020 collaboration the results indicated that less cooling was needed than initially anticipated.
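Since PUE comes up in the Optimize step, here is a quick, purely illustrative calculation; all of the kWh figures are hypothetical, but it shows why the ordering above matters.

# Power Usage Effectiveness: total facility energy / IT equipment energy.
# 1.0 is ideal; the example numbers below are hypothetical.

def pue(total_facility_kwh, it_equipment_kwh):
    return total_facility_kwh / it_equipment_kwh

it_kwh = 1000000
print(pue(total_facility_kwh=1800000, it_equipment_kwh=it_kwh))  # 1.8

# The ordering argument in numbers: halving the IT load (Modernize) saves far
# more energy than shaving 10% off PUE (Optimize) on the unchanged load.
print(it_kwh * 1.8)          # baseline facility energy: 1,800,000 kWh
print(it_kwh * 0.5 * 1.8)    # modernize first:            900,000 kWh
print(it_kwh * 1.8 * 0.9)    # optimize only:            1,620,000 kWh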

 

So there you have it. Three priorities for data center optimization that you can remember:

  • Organize
  • Modernize
  • Optimize

 

Your call to action as CIO: don’t let your people talk to you about progress on the second or third step until you are comfortable with progress on the preceding one. Skipping ahead dilutes accountability and results.


 

Outscale is a recently formed provider of software-as-a-service (SaaS) and low-latency infrastructure-as-a-service (IaaS) over the cloud. The company was formed by industry experts Laurent Seror and David Gillard, who each have over 13 years of experience in Web hosting. The two founded Agarik, a critical Web hosting provider, which was bought by hardware giant Bull in 2006. The French duo wanted to use their experience to enable customers to reinvent their business models and take advantage of the service flexibility, high performance, and reduced total cost of ownership (TCO) and carbon footprint provided by cloud-based services. Determined to find the best technology for Outscale, they chose the Cisco Unified Computing System powered by the Intel® Xeon® processor L5640.


“We were targeting performance, cost, and carbon footprint,” explained Laurent Seror, CEO of Outscale. “All these criteria came together in the Cisco Unified Computing System and the Intel Xeon processor L5640.”


To learn more, read our new Outscale business success story. As always, you can find this one, and many others, on the Intel.com Business Success Stories for IT Managers page.

 

 

*Other names and brands may be claimed as the property of others.

Intel Ethernet Server Cluster Adapters support iWARP (RDMA over Ethernet), which enables high-performance, low-latency networking.  But to take advantage of this, applications must be written to use an RDMA API.  Many HPC applications are already enabled for RDMA, but most data center applications are not.  A “hello world” programming example on the Intel site provides good guidance on how this can be accomplished, and the OpenFabrics Alliance is now providing a unique opportunity to learn how to code to an RDMA interface by hosting an RDMA programming training class.  This is definitely worth looking into if you are an application writer familiar with a sockets interface and interested in how to take advantage of iWARP.

In enterprise environments, people are getting serious about cloud computing. An IDC survey found that 44 percent of respondents were considering private clouds. So what’s holding people back? In a word: security. To move to a cloud (private or public) environment, you must be sure you can protect the security of applications and the privacy of information.

 

These requirements are particularly stringent if you are subject to PCI DSS regulations for credit card transactions or HIPAA (Health Insurance Portability and Accountability Act) regulations for medical records. Compliance depends on your ability to maintain the privacy of the information, generally through isolation of storage systems, networks, and virtual machines.

 

To achieve this level of security, an “air gap” is often used to ensure sensitive systems are isolated. This approach works but severely limits your flexibility and ability to adapt to changing conditions. So perhaps we should consider instead a “virtual air gap.” Let’s look at how you might maintain this virtual separation of systems.

 

Storage isolation: One way to implement storage isolation is to encrypt data when it is in motion and at rest in the cloud environment. Another best practice is the striping of data across systems. This approach breaks blocks of data into multiple pieces that are spread over different disk drives that sit in different administrative zones. This helps protect you from rogue admins, who could access only a fraction of a file, rather than the whole.
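As a rough sketch of the encrypt-then-stripe idea, the snippet below uses the third-party Python cryptography package. The zone names and chunk size are illustrative, and a real system would also need key management and stripe metadata for reassembly.

# Sketch: encrypt a blob, then spread the ciphertext round-robin across
# administrative zones so no single admin domain holds the whole file.
# Requires the third-party 'cryptography' package; zone names are hypothetical.

from cryptography.fernet import Fernet

def encrypt_and_stripe(data: bytes, zones, chunk_size=4096):
    key = Fernet.generate_key()            # store this in a separate key store
    ciphertext = Fernet(key).encrypt(data)
    stripes = {zone: bytearray() for zone in zones}
    for i in range(0, len(ciphertext), chunk_size):
        zone = zones[(i // chunk_size) % len(zones)]
        stripes[zone] += ciphertext[i:i + chunk_size]
    return key, stripes

key, stripes = encrypt_and_stripe(b"cardholder data" * 1000,
                                  zones=["zone-a", "zone-b", "zone-c"])
print({zone: len(blob) for zone, blob in stripes.items()})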

 

Network isolation: Sensitive applications should be placed on a controlled VLAN. You then put mechanisms in place to monitor the configuration of routers and switches to verify that no unauthorized changes have taken place.
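One simple way to sketch that monitoring in Python is to fingerprint each device's running configuration and compare it against an approved baseline; the config collector and the baseline hashes below are placeholders for your own tooling.

# Sketch: detect unauthorized VLAN/router/switch configuration changes by
# comparing a hash of the running config against an approved baseline.
# The baseline values and config text are placeholders.

import hashlib

APPROVED_BASELINES = {
    "core-sw-01": "d1f0c2...",   # hash recorded when the VLAN design was last audited
    "edge-rtr-07": "9ab44e...",
}

def config_fingerprint(config_text: str) -> str:
    return hashlib.sha256(config_text.encode()).hexdigest()

def check_device(name: str, running_config: str) -> bool:
    current = config_fingerprint(running_config)
    if current != APPROVED_BASELINES.get(name):
        print(f"ALERT: {name} configuration drifted from the approved baseline")
        return False
    return True

check_device("core-sw-01", "hostname core-sw-01\nvlan 120\n...")  # placeholder config text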

 

Virtual machine isolation: Virtual machines implement the “air gap,” but the quality of the gap is only as good as the hypervisor version and its configuration. So how can cloud providers prove that they are running the expected versions on the expected hardware? Using a hardware-based root of trust to provide evidence of the hardware and software is a powerful tool for this challenge. A hardware root of trust provides a hardware-level mechanism to attest to the configuration of the hypervisors and enable the isolation and safe migration of virtual machines (to other trusted platforms).

 

Audits: Having a sound security practice is good, but in reality we also have to implement audits that sample processes and technology at a point in time. Standards such as ISO 27002 for information security and SAS 70 for maintenance of internal controls can help. Also, the Cloud Security Alliance has a solid collection of best practices for security in the cloud.

 

At a high level, these are just some of the steps you can take to implement and maintain a “virtual air gap.”

Over the past decade I have worked with many of the great technologists and companies of the x86 Virtualization era on developing new innovations in Virtualization technology. This work has been very rewarding, introducing new technologies such as Intel VT, Intel VT for Connectivity, and Intel VT for Directed I/O, along with a host of lesser known but equally important virtualization technologies from Intel. I have seen the rapid increase in BFBC (Bigger, Faster, Better, Cheaper) technologies that improve the performance of virtualization while delivering industry breakthroughs in data center and client efficiency.  During this time many of us began to research the “Next Big Thing,” now known as Cloud Computing. In many of those discussions on how to enable “Cloud Computing,” some of us argued that Cloud Computing was merely an extension of the Virtualization “foundation,” the next phase in Virtualization BFBC, if you will. However, as we begin to embark on the next decade in computing, I now believe we were mistaken. Cloud computing is not merely an evolutionary step in the development of virtualization technologies; it is a paradigm shift in compute architecture development. Let me explain why I believe building a “virtualization foundation” was a single step in a direction, not a foundational element. More precisely, virtualization is a single step on our computing journey, one of many, with more secrets still to unearth.

 

At the core of virtualization’s value lies utilization of system resources. Many of the key virtualization technologies provide increased efficiency in the use of more CPU cores, more I/O bandwidth, and more memory channels, while reducing the performance overhead of the hypervisors needed to effectively deploy virtualization.  In essence, BFBC for the many-core era in which we live today. To deliver efficiency in compute, the industry, led by Intel, has delivered more compute, memory, and I/O capability with each generation of our technologies. In some cases, our competition has led the way and Intel has worked quickly to catch up…though I certainly believe we have been leading for the last several years. However, utilization-driven design methodologies can result in unintended consequences. Storage replication, virtual machine management, and rapid application deployment can create their own series of expenses for IT organizations that previously had a “firmer” grasp on these issues in the Client Server era (i.e., one server, one OS, one application). Did the industry intend to create these new concerns when we developed the initial hypervisor and VM management technologies? Not at Intel, nor most of the industry leaders with whom I have the privilege to work every day. Yet unintended consequences lead to new opportunities, new challenges and, in this case, I believe, a whole new era of computing.

 

Despite the investment markets’ “irrational exuberance” with all things Cloud, there are some fundamental “truths” the investment community has gotten correct. Cloud Computing is a VERY BIG DEAL. Cloud computing will change the way data centers are deployed for the next decade. In the coming years, Cloud Computing and virtualization will change the way clients and applications migrate from hardware platform to hardware platform. I believe Intel Architecture has a distinct advantage in its programming flexibility to scale from handheld to data center. However, despite Intel’s vision, investment, and commitment to Cloud computing, our work will go far beyond virtualization and utilization technologies. Our journey (an overused term in Cloud, by the way) has just begun. Cloud is going to force us to reexamine ourselves as a company, and reinvent ourselves in what is increasingly, as Intel CEO Paul Otellini has made clear, a Compute Continuum. Is that enough? Is it sufficient to deliver seamless efficiency for Virtual Machines to migrate from handhelds to Data Centers? For all users around the world (over 4 billion by the end of this decade; see my previous blog on predictions for the next decade) to access their User environments regardless of the manufacturer, regardless of the device, regardless of the network topology? Is that Cloud Computing? Well…it’s a start.

 

Like Virtualization, Cloud Computing’s next generation of solutions and technologies will have unintended consequences that will continue to force us to reexamine our design methodologies in silicon, software, and systems management. These unintended consequences, and the investments in research and development to examine their effects on the Compute Continuum, will determine Cloud Computing’s future history. BFBC has been an industry trait; it has been a key driver of our growth, success, and profitability…. Beyond Cloud Computing, beyond virtualization, beyond Moore’s Law, beyond Metcalfe’s Law, lies a new frontier in the Compute Continuum that will once again force us to recognize that our tenuous “foundation” was fleeting, and to move forward…one step at a time.

 

 

Let me know what you think; your thoughts, comments, and opinions are always welcome.
