Everywhere you look in the enterprise, the cloud is more visible.  From self-service IT to data center capacity expanded through cloud on-boarding, the cloud has taken hold in our world.  But what about at home? What has cloud meant for our everyday lives?  This week on the Digital Nibbles Podcast, Michael Sheehan of HighTechDad discusses cloud technology and the consumer.


Like many other technologies (see mobile phones), cloud has seen its most rapid innovation cycle in the consumer world.  Not too long ago, cloud meant webmail and websites.  Now we back up our info, schedule our days, video chat, play video games, and sync our media from our phones, homes, and cars.  Oh yeah, and we make phone calls on occasion too.



Curious?  Tune in tomorrow, Wednesday, Feb. 1, at 3 PM Pacific Standard Time to hear more from Allyson Klein, Reuven Cohen, and @HighTechDad himself, Michael Sheehan.





Michael Sheehan is based in the San Francisco Bay Area. He is an avid technologist, blogger, social media pundit, husband, and father.  Michael writes about technology, consumer electronics, gadgets, software, hardware, parenting "hacks," and other tips and tricks.  Professionally, Michael is the Technology Evangelist for GoGrid.  To learn more, check out the about/bio page on his blog.

This post originally appeared in The Data Center Journal on September 26, 2012.

Data Center Energy: Past, Present, and Future (Part Three)


Best Practices That Introduce Proactive Power Management

This article is part three of a three-part series on energy management in the data center. (See parts one and two.)

If you read parts one and two, you will recall that this series about energy in the data center started with money: wasted money, to be exact. Approximately $24.7 billion is spent each year on server management, power and cooling for unused servers. This final installment circles back to money.

Read on for practical how-to advice for introducing best practices that drive up energy efficiency and thereby reduce operating costs. The resulting savings can contribute to your bottom line and strengthen the long-term business case for transformative, holistic energy-management approaches.

Best Practice: Leveraging New Energy-Management Tools

Visibility is the best antidote for wasted energy and for overprovisioning power and cooling. Most energy-management tools, however, restrict monitoring to returned-air temperature at the air-conditioning units or power draw per rack in the data center. Some vendors promote model-based approaches for energy management in lieu of real-time visibility, but the estimations that drive these models cannot achieve the accuracy required to predict energy trends and identify issues before they disrupt service.

A new class of energy-management solution has emerged in response to the need for accurate visibility and analysis of energy. Holistic in nature, the new tools focus on server inlet temperatures to enable finer-grained monitoring and control. Data is aggregated to give data center managers views of the thermal zones and energy-usage patterns for a row, a group of racks, or the entire room.


Figure 1. Thermal map illustrating the holistic, real-time monitoring capabilities

Best Practice: Fully Exploit Thermal and Energy Data

By adopting a leading energy-management solution, data center managers can automate the collection and aggregation of data from data center power and cooling equipment, servers, storage devices and power-distribution units. The analysis capabilities support drill-down into the data and extraction of insights that let the data center manager do the following:

  • Identify potential hot spots early to pinpoint causes and correct problems proactively and thereby extend the life of equipment and avoid service disruptions.
  • Log server and storage device power consumption to optimize rack provisioning, determine realistic de-rating factors for vendor power specifications and enable more-accurate capacity planning.
  • Introduce power capping per rack to avoid harmful power spikes.
  • Intelligently allocate power during emergencies and extend operation during outages.
  • Analyze trends to fine-tune power distribution, cooling systems and airflow in the data center to ultimately drive up energy efficiency.
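To make the second bullet concrete, here is a small shell sketch of how logged power samples translate into a de-rating factor and a rack-provisioning estimate. All of the wattage figures are invented for illustration:

```shell
#!/bin/sh
# Sketch: derive a realistic de-rating factor from logged power samples
# versus the vendor nameplate. All wattages are invented for illustration.
NAMEPLATE=550          # vendor spec for one server model, in watts
RACK_BUDGET=10000      # usable rack power budget, in watts

PEAK=0                 # highest draw actually observed in the logs
for W in 310 342 365 358 371 349 333 360; do
    if [ "$W" -gt "$PEAK" ]; then PEAK=$W; fi
done

echo "Observed peak: ${PEAK} W against a ${NAMEPLATE} W nameplate"
echo "Servers per rack by nameplate: $((RACK_BUDGET / NAMEPLATE))"
echo "Servers per rack by observed peak: $((RACK_BUDGET / PEAK))"
```

With these sample numbers the observed peak is roughly two-thirds of nameplate, which raises the rack from 18 servers to 26 without exceeding the budget.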

Best Practice: Set Limits and Price Services Accordingly


With accurate monitoring and trending, IT managers can take the next step and introduce controls that enforce “green” policies for energy conservation and sustainable processes. This can include introducing alerts when power limits are exceeded to discover which users and workloads are consuming more than their fair share of power. Alerts can also be used to trigger automatic adjustments of workload.

Energy charge-backs can also encourage conservation. With power-based monitoring and logging, enterprises can attach energy costs to services and raise awareness about environmental impact and energy expenses related to individual data center workloads.

Best Practice: Balancing Power and Server Performance

Power capping for individual servers or groups of servers can directly affect server performance and quality of service. To make power capping a practical option for limiting power consumption, advanced energy-management solutions dynamically balance power and server performance. Processor operating frequencies are adjusted to tune performance and keep power below threshold levels. These approaches have been proven to reduce server power consumption by as much as 20 percent without affecting performance.
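To sketch how that balancing act works, here is a toy shell loop that steps a simulated processor frequency down when a power sample exceeds the cap and back up when headroom returns. The samples, cap, and P-states are invented, and a real solution acts through the platform's management firmware rather than a script:

```shell
#!/bin/sh
# Toy sketch of the balancing loop: drop to a slower P-state when a
# power sample exceeds the cap, step back up when there is headroom.
CAP=400                          # per-server power cap, watts
FLOOR=360                        # 90% of cap: restore speed below this
set -- 2900 2600 2300 2000 1700  # available P-states, MHz, fastest first
IDX=1                            # start at the fastest P-state

for WATTS in 380 420 435 405 390 350 330; do
    if [ "$WATTS" -gt "$CAP" ] && [ "$IDX" -lt 5 ]; then
        IDX=$((IDX + 1))         # throttle: cap exceeded
    elif [ "$WATTS" -lt "$FLOOR" ] && [ "$IDX" -gt 1 ]; then
        IDX=$((IDX - 1))         # raise frequency: headroom available
    fi
    eval "MHZ=\$$IDX"
    echo "${WATTS} W -> ${MHZ} MHz"
done
```

The loop throttles through the 420-435-405 W spike and then recovers frequency as the draw falls back under the floor, which is the shape of behavior the vendor solutions automate.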

Companies like EMC currently rely on this control to meet their business targets. When EMC designed a cloud-optimized storage (COS) solution as the foundation for its Atmos service, it set targets for power and energy efficiency. The company now uses an energy-management platform that lets it control power for groups of Atmos servers.[1] The power-management architecture calibrates voltage and frequency to keep servers below the power threshold. In a proof of concept for its power-constrained regime, EMC validated that power caps and related frequency adjustments are an effective combination that does not affect user services.

Baidu, the largest search company in China, similarly applied power capping and dynamic frequency adjustments, but in this case, the goal was an increase in server density. The company’s data center hosting provider was charging Baidu by the rack. The rack power limit (10 amps, or 2.2kW at 220V) left many racks with as much as 75% unused space, and an intelligent energy-management solution allowed Baidu to increase rack loading by 40% to 60%.[2]
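The arithmetic behind that kind of density gain is easy to sketch; the per-server wattages below are assumptions for illustration, not figures from the Baidu study:

```shell
#!/bin/sh
# Illustrative arithmetic for the rack-density case. The per-server
# wattages are assumptions, not figures from the article.
RACK_WATTS=$((220 * 10))   # 220 V at 10 A = 2,200 W per rack
NAMEPLATE=450              # provisioning by vendor spec (assumed)
CAPPED=330                 # per-server cap enforced by power management (assumed)

echo "Rack budget: ${RACK_WATTS} W"
echo "Servers by nameplate: $((RACK_WATTS / NAMEPLATE))"
echo "Servers with capping: $((RACK_WATTS / CAPPED))"
```

With these assumed numbers, capping takes the rack from four servers to six, a 50% density gain that falls in the middle of the range the article reports.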

Best Practice: Agentless Energy Management

To avoid degradation of servers and services, an energy-management solution must be able to support agentless operation. Today’s best-in-class solutions offer full support for Web Services Description Language (WSDL) APIs and transparently coexist with other enterprise applications on physical or virtual hosts.

For compliance with data center regulations, data center managers should also look for an energy-management solution that enables secure communications with managed nodes.


Data center energy consumption represents one of the fastest-growing (if not the fastest-growing) costs of doing business today. Blind overprovisioning and other past practices are exacerbating the problem, which can only be solved with a forward-looking, holistic approach to energy management.

The latest generation of tools and solutions continues to gain momentum, with measurable results validating the above best practices and delivering a range of immediate and long-term cost-cutting benefits. Besides curbing runaway costs, intelligent energy management can also avoid power spikes and equipment-damaging hot spots. Management will appreciate the contribution to the bottom line, and end users will appreciate the increase in service quality. And everyone will appreciate a more earth-friendly attitude in the form of responsible energy policies and practices.

[1] White paper and proof of concept at EMC, “Using Intel Intelligent Power Node Manager to Minimize the Infrastructure Impact of the EMC Atmos Cloud Storage Appliances.”

[2] White paper and proof of concept at Baidu, “Intelligent Power Optimization for Higher Server Density Racks.”

Leading article photo courtesy of Tom Raftery

About the Author

Jeffrey S. Klaus is the director of Data Center Manager (DCM) at Intel Corporation.

Don’t miss parts one and two of Jeff’s series on data center energy efficiency, where he covers other aspects of reducing power consumption and costs in today’s data center facilities.

Follow @IntelDCM on Twitter.

Download Now

OpSource provides cloud infrastructure-as-a-service (IaaS) and managed hosting solutions that enable businesses to accelerate growth, scale operations, control costs, and reduce IT infrastructure support risks. OpSource standardizes on Dell PowerEdge* servers and Cisco Unified Computing Systems (UCS*)  based on the eight-core Intel® Xeon® processor X7560, Red Hat Enterprise Linux*, and VMware vSphere*. OpSource also deploys EMC VNX5500* unified storage platforms with storage controllers based on the Intel Xeon processor 5600 series. John Rowell, OpSource’s chief technology officer, says he looks forward to deploying the Intel Xeon processor E7 family, particularly for the benefits of Intel® Trusted Execution Technology (Intel® TXT) and Intel® Advanced Encryption Standard New Instructions (Intel® AES-NI).


“How often do you get excited about a processor?” said Rowell. “We are actually excited about the Intel® Xeon® processor E7 family because of Intel® TXT and the ability it gives us to segment and isolate VMware hypervisors. We see a lot of value in that capability, and we’re pushing the vendor community to provide broad support as soon as possible.”

For all the details, download our new OpSource business success story. As always, you can find more like this one on the Business Success Stories for IT Managers page.

*Other names and brands may be claimed as the property of others.

In the first post on this server modernization topic, I started us off on moving an Oracle database the old way, "quickly," and got us to this point.


Now we will set up all of the scripts for the application and the network links. This is the tedious part: there are many individual scripts to export/import the data in the tables. Organize these into a giant script that will run them all at once. Below is a sample of the beginning of the script; make one for each separate group of tables you are moving. (You can see how old this is from the ORACLE_HOME.)




# set -x
#  USAGE: 'user to export' 'SID to export from' 'SID to export to'
#  for example:
# nohup struct &
# Will export/import the database structure from one system to the other
#  and use a dummy label for the USER

# Placeholders -- set these for your environment
# (hard-coded here rather than taken from the command line):
SRC_USER='system/manager'          # user to export as
DST_USER='system/manager'          # user to import as
EXP=exp                            # export binary on the legacy side
IMP=imp                            # import binary on the target side
EXP_PIPE=/tmp/exp_pipe             # named pipe on the remote server
IMP_PIPE=/tmp/imp_pipe             # named pipe on this server
EXPFILE=exp_struct.par
IMPFILE=imp_struct.par
EXPLOG=exp_struct.log
IMPLOG=imp_struct.log

RESH="/usr/bin/resh target-1"      # remote shell over the private link
DATE=`date`
echo $DATE

# Create the named pipes if they do not already exist
${RESH} "test -p ${EXP_PIPE} || mkfifo ${EXP_PIPE}"
test -p ${IMP_PIPE} || mkfifo ${IMP_PIPE}

# Run the export on the remote server, writing into its named pipe
${RESH}  "
    export ORACLE_HOME
    export ORACLE_SID=ORCL
    export TWO_TASK
    $EXP userid=${SRC_USER} file=${EXP_PIPE} log=${EXPLOG} parfile=${EXPFILE} & " &

# Stream the remote pipe across the network into the local pipe
${RESH} "cat ${EXP_PIPE}" > ${IMP_PIPE} &

# Run the import, reading from the local pipe
$IMP userid=${DST_USER} file=${IMP_PIPE} log=${IMPLOG} parfile=${IMPFILE} &
exit 0



Be sure to set up a high-speed link between the legacy server and the new server. This link will allow for remote execution of the scripts.



Once the database has been created, you need to create all of the application's objects in it.  Just as it allocates space for tablespaces on disk, Oracle has to allocate space for objects like tables while the rows are being loaded. This space allocation takes time, so the better approach is to pre-allocate the space before the actual movement of the data, which has to happen while the application is shut down.


In this step you are also creating the VIEWS, PROCEDURES, PACKAGES, TRIGGERS, SYNONYMS, etc. of the application in the database. (You’ll have to disable the TRIGGERS before the data loads.)


Now that you have the export/import scripts set up, invoke them in one super script. You probably won’t be able to do the entire database in parallel. During the rehearsal, you’ll find where to put in the ‘wait’ statements.
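A minimal sketch of such a super script follows. The job names are placeholders, and run_job stands in for the per-table export/import scripts described earlier:

```shell
#!/bin/sh
# Sketch of the "super script": fire off export/import jobs in parallel
# groups, with a wait between groups. run_job is a stand-in for the
# per-table scripts; group sizes come out of the rehearsal.
run_job () {
    echo "started $1"
    # ./exp_imp_$1.sh > $1.out 2>&1   # the real per-table invocation
}

# Group 1: the largest tables, each in its own background job
run_job bigtable1 &
run_job bigtable2 &
wait    # the rehearsal tells you where these waits belong

# Group 2: bundles of the smaller tables
run_job medium_group &
run_job small_group &
wait
echo "all groups complete"
```

Each real job should send its output to its own file, so the team can check every thread of the migration afterward.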


Once all the ROWS have been imported, you can invoke the set of scripts to create the INDEXes. When these are finished, run the script to ENABLE the CONSTRAINTS, which will actually generate INDEXes of their own. The next major step is to ENABLE the TRIGGERS. This is also done by script.


The audit of the database is among the last steps. Match the output of DBA_OBJECTS in both databases. You also need to get the number of ROWs in each table of the legacy database and compare it to the number of ROWs imported into the target database. This is where the team comes in handy: each member can take a different audit task.
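One way to script the DBA_OBJECTS half of that audit is to run the same grouped count against both databases and diff the spooled output. This is only a sketch; the APPUSER owner and the connect strings are placeholders:

```shell
#!/bin/sh
# Sketch: spool the same DBA_OBJECTS summary from both databases and
# diff the results. Owner and connect strings are placeholders.
cat > audit_objects.sql <<'EOF'
set pagesize 0 feedback off
SELECT object_type, COUNT(*)
  FROM dba_objects
 WHERE owner = 'APPUSER'
 GROUP BY object_type
 ORDER BY object_type;
EXIT;
EOF

sqlplus -s system/password@LEGACY @audit_objects.sql > legacy_objects.out
sqlplus -s system/password@TARGET @audit_objects.sql > target_objects.out

diff legacy_objects.out target_objects.out \
    && echo "object counts match" \
    || echo "AUDIT MISMATCH: investigate before go-live"
```

The per-table row-count comparison can be generated the same way, with a query that selects COUNT(*) from each application table on both sides.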


Now open the application for use by the testers. Because you don’t want to add any test data to the production database, this has to be done carefully.


If everything passes muster, the target database can be opened for production. For the paranoid, the legacy database can be kept running in parallel production in case the firm wants to move back to the old system.


Finally, document the entire effort so that the configuration is clear for others to understand. And go get some rest.


This process doesn’t take advantage of Oracle Streams or Oracle’s GoldenGate. But I believe there is a use for this methodology even today. What do you think? Do you agree or disagree?  Do you have migrations that will require this older approach because no other will work? Do you have any suggestions to improve this process? What have you seen when doing this with Datapump? Do you recommend any changes to this methodology when using 10.2 or 11.2 as the target database?

Got integration headaches? Try this cloud-bridging solution.


When one company acquires another, an IT organization is left with the unenviable job of quickly integrating two existing IT infrastructures. This is an activity that is fraught with security risks, technical challenges, and—most of all—management headaches.


Intel and Citrix have developed a solution that helps your organization take the pain out of integration challenges of this type. With this cloud on-boarding solution, you can move front-end application servers to the cloud while keeping sensitive back-end database servers secure in your data center.

I don’t know about you, but while this is easy to say, it is much more powerful to see it work. You can watch a demo of the cloud resource sharing and on-boarding solution on YouTube. This animated presentation demonstrates how to configure a cloud on-boarding solution on servers equipped with Intel® Xeon® processors running Citrix XenServer® and VMware® vSphere™.


I won’t walk you through the configuration details here. The quick version of the story is that the lab spanned two locations connected via a wide area network: a simulated data center in California and a Citrix XenServer cloud in Oregon. This solution relied on Citrix® NetScaler®, with the Citrix® CloudBridge® feature enabled, to securely bridge the data center and cloud networks.


While I can’t promise you that this cloud-bridging solution will alleviate all the challenges and headaches that come with a major integration initiative, it ought to make things easier on you.


For a firsthand look at the solution, check out Hybrid Cloud Computing with Intel and Citrix® NetScaler® on YouTube. Or download the reference architecture for more details.



Follow @IntelITS for more on Intel® Cloud Builders, cloud computing, and more!

Let’s say you’re moving an Oracle database from a legacy RISC server to Linux on Intel architecture. You’ve been asked to improve the performance of the application along with the migration. That’s easy: the move itself will likely accelerate the performance (pdf) of the application by hosting it on a faster platform. But you know you can get more from the database by taking advantage of the migration to reorganize its structure. There is also a limited outage window in which to execute the migration. To reorganize the database, you’ve chosen to use the Oracle export and import tools (i.e., Data Pump if moving from a newer Oracle database). These tools, you know, will give you the best opportunity to reorganize the database.


So, how are you going to do this on the target server? Let’s go through the steps:


  1. Configure the hardware, memory, BIOS, operating system (sysctl.conf?), storage (HBA Driver anyone?), and network configuration.
  2. Install Oracle Database Enterprise Edition and create a sample database.
  3. Document the tablespaces and data file sizes of the production database (query the data dictionary).
  4. Start the tablespace creation process on the new database to replicate the tablespaces of the production database. (See below for a speed up idea.)
  5. While these are running:
    1. Export the source database from a quiesced Q/A database. (ROWS=N)
    2. Create an INDEX create script by Importing the export file you just made with the parameter INDEXFILE=’filename’ (FULL=Y)
    3. Edit the ‘filename’ file to have each INDEX run in Parallel Query mode
    4. Generate a file of the constraints for each user
      • Edit this file and make a copy, one to disable the constraints, the other to enable the constraints
  6. Configure a private network (the fastest available) between the source server and the target server. Allow for remote procedure calls.
  7. Create a series of scripts to export the source database table by table, starting with the largest table first.  Bundle the smaller tables into separate import files. Create import scripts that import the same tables as the export scripts. (ROWS=Y, INDEXES=N, CONSTRAINTS=N.) You’re just moving the data.
  8. Put these all into a shell script where they are called all at the same time. Be sure to have the output from each script sent to a file.
  9. Run an import to pre-create the various OBJECTS or structures of the application in the database but without ROWs, INDEXes, or CONSTRAINTS.
  10. Start the data migration.
    1. Shut down the application and put the legacy database into single user mode
    2. Disable the TRIGGERS in the target database
    3. Fire off the script to export the data from the legacy database to a named pipe and import into the target from the same named pipe
    4. Once the row data has been migrated, start the script to create the INDEXes in the target database
    5. Run the script to ENABLE the CONSTRAINTS in the target database
    6. Run the script to ENABLE the TRIGGERS
  11. Audit the database to ascertain that all OBJECTS in the Legacy database are in the target database and that each TABLE has the same number of ROWS in both databases.
  12. Open the new database to production (or mirror this with the legacy database).
  13. Disable the private network used for the migration of the data.
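As one concrete piece of step 7, here is a sketch that writes a matching export/import parameter-file pair for a single table group. The table name, file names, and buffer size are placeholders:

```shell
#!/bin/sh
# Sketch: matching exp/imp parameter files for one table group.
# BIG_TABLE, the file names, and the buffer size are placeholders.
cat > exp_bigtab.par <<'EOF'
tables=(BIG_TABLE)
rows=y
indexes=n
constraints=n
direct=y
buffer=2097152
EOF

cat > imp_bigtab.par <<'EOF'
tables=(BIG_TABLE)
ignore=y
commit=y
indexes=n
constraints=n
buffer=2097152
EOF

echo "wrote exp_bigtab.par and imp_bigtab.par"
```

Each per-table script then points exp at the export parfile and imp at the import parfile, so the pair always moves exactly the same tables.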



First, create a database with just the minimal tablespaces (SYSTEM, ROLLBACK, USER, SYSAUX, etc.), but make each tablespace a size optimal for the application. Then create the application tablespaces laid out on storage the way you envisioned initially. To get the list of the application-specific tablespaces, use the following script:


col tablespace_name for a45
col size_in_bytes for 999,999,999,999
spool tbs_size.out
select tablespace_name, sum(bytes) size_in_bytes
  from dba_data_files
 group by tablespace_name
 order by tablespace_name;
spool off



For these custom tablespaces there is a trick to make them in parallel. While you can’t ADD a tablespace to the database in parallel with another tablespace, you can add data files to the tablespaces in parallel. For example, make the initial data file for each tablespace 100MB. Then do an ADD DATAFILE for each tablespace at a respectable size. You can execute as many of the ADD DATAFILE DDL commands in parallel as your server and storage can handle. (This activity will also give you a good opportunity to measure the maximum sustained I/O to the storage.)
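Here is one way that trick might be scripted. The tablespace names, file path, and size are placeholders, and each background sqlplus session formats one data file:

```shell
#!/bin/sh
# Sketch: add the real (large) data files to each application tablespace
# in parallel. Names, path, and size are placeholders; the tablespaces
# are assumed to exist already with small initial data files.
DATA_DIR=/u02/oradata/ORCL
SIZE=4000M

for TBS in APP_DATA APP_INDEX APP_LOB; do
    cat > add_${TBS}.sql <<EOF
ALTER TABLESPACE ${TBS}
  ADD DATAFILE '${DATA_DIR}/${TBS}_02.dbf' SIZE ${SIZE};
EXIT;
EOF
    # Each background sqlplus session formats one data file, so the
    # storage does the time-consuming allocation work concurrently.
    sqlplus -s / as sysdba @add_${TBS}.sql > add_${TBS}.log 2>&1 &
done
wait   # all ADD DATAFILE statements have completed
```

Watching the storage throughput while this runs gives you the sustained-I/O measurement mentioned above.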


While the data files are being added is a good time to generate the INDEX creation DDL. To get the CREATE INDEX text, use import with the INDEXFILE option. Edit the DDL to put the INDEXes in the tablespaces you want them in, with optimal EXTENT sizes. Run this script to create the empty tables. Now you have completed the space allocation for the tables and eliminated this time-consuming process from the migration schedule.
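As a sketch of that editing step: once import has written the index DDL via the INDEXFILE option, a single sed pass can append a Parallel Query clause to each statement. The sample input below stands in for the generated file, and the degree is a placeholder:

```shell
#!/bin/sh
# Stand-in for the file import generates with INDEXFILE=create_indexes.sql
cat > create_indexes.sql <<'EOF'
CREATE INDEX appuser.big_table_ix ON appuser.big_table (id) TABLESPACE app_index;
EOF

# Append a Parallel Query clause to every statement. DEGREE 8 is a
# placeholder; pick a value your server and storage can sustain.
sed -e 's/;$/ PARALLEL (DEGREE 8) NOLOGGING;/' \
    create_indexes.sql > create_indexes_parallel.sql
cat create_indexes_parallel.sql
```

The same pass is also a convenient place to rewrite the TABLESPACE and storage clauses before the index-build scripts run.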


In part two I will continue the steps to get this move complete, including a snippet of a Bash shell script to run the export/import processes in parallel.

(I'm just giving you one thread in the sample; you'll have to duplicate the line, edited for your particular circumstances.)


(CC) Flickr: Micky Aldridge


Cloud is an ever-growing, complex topic.  It has sprouted into every aspect of our digital world: in the home, in the car, on the go, and of course in the enterprise.  Even now, the technologies are still maturing, and we have issues like efficiency to consider, let alone the geo-political concerns that can arise in a global cloud strategy.


This week on Digital Nibbles, a new podcast series on the current events and issues facing the world of cloud and data centers, we have Intel’s Winston Saunders and Jackson He to talk about those two subjects.


Winston Saunders, Intel’s Director of Data Center Power Initiatives and regular blogger on data center efficiency, will discuss the challenges and benefits that the cloud presents for efficiency.  And Jackson He, Intel’s General Manager for Software & Services in China, will discuss the challenges and issues that face IT pros in China with enabling cloud technologies.


Turn on, tune in, and geek out LIVE Wednesday, Jan. 18, at 3:00 PM Pacific time with hosts Allyson Klein and Reuven Cohen at the Digital Nibbles website.


Feel free to comment with questions here or share with @DigitalNibbles on Twitter now through the show!

I published a blog late last year on an idea that brings insights from the Green500 and Top500 together to better visualize the changing landscape of supercomputing leadership in the context of efficient performance. Since then, I have started referring to that analysis by the shorthand term “Exascalar.”


Recall that Exascalar is a logarithmic scale for supercomputing that looks at performance and efficiency normalized to Intel’s Exascale goal of delivering one Exaflops in a power envelope of 20 megawatts.
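The exact normalization isn't spelled out here, so as an illustrative sketch only: score a system by its log-space distance from the one-Exaflops-in-20-MW goal. Both this formulation and the K computer figures are my assumptions for illustration, not necessarily the published Exascalar definition:

```shell
#!/bin/sh
# Illustrative only: distance, in decades, from the Exascale goal,
# computed with awk. Less negative means closer to the goal.
awk 'BEGIN {
    p_goal = 1.0e18;                 # one Exaflops, in flops
    e_goal = p_goal / 2.0e7;         # flops per watt in a 20 MW envelope
    perf  = 10.5e15;                 # K computer, Nov. 2011 (approximate)
    power = 12.7e6;                  # watts
    dp = log(perf / p_goal) / log(10);
    de = log((perf / power) / e_goal) / log(10);
    printf "Exascalar-like score: %.2f\n", -sqrt(dp * dp + de * de);
}'
```

For the assumed figures this prints a score of about -2.66; a system at the goal would score 0, which is why the grid lines in the plots below look unconventional.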


Of the emails I received on the topic, one of the most interesting was from Barry Rountree at LLNL. Barry has done a similar analysis looking at the time evolution of the Green500 data. So I thought, “why the heck not for Exascalar?”


And then I had some fun.


Building from Barry’s idea, I plotted the data for the Green500 and Top500 from November 2007 to November 2011 in one-year increments (with the addition of the June 2011 data for resolution) as an animated .gif file, shown below. The dark grey line is the trend of the “Exascalar median.” To highlight innovation, in each successive graph the new points are shown in red while older systems are in blue. The unconventional-looking grid lines are lines of constant power and constant Exascalar.




Please click image to animate



There’s a lot going on here. One notices some obvious “power pushes,” where occasionally a system pushes right up against the 20 MW line to achieve very high performance. Invariably these achievements are eclipsed by systems with higher efficiency.


Another thing that’s striking is the huge range of efficiencies between systems: over a factor of one hundred for some contemporary systems with similar performance. That’s pretty astounding when you think about it, a factor of one hundred in energy cost for the same work output.


But the macroscopic picture revealed, of course, is that the overall (and inevitable) trend shows the scaling of performance with efficiency.


So how is the trend toward Exascale going? Well, one way to understand that is to plot the data as a time series. The graph below shows the Exascalar ranking of the Top, Top 10, and Median systems over time. Superimposed is the extrapolation of a linear fit, which shows why such a huge breakthrough in efficiency is needed to meet the Exascale goal by 2018.





It’s remarkable that the Top10 and Top Exascalar trends have essentially the same slopes (differing by about 7%), whereas the slope of the Median trend is about 20% lower.


But these simplified trends belie more complexity “under the covers.”  To look at this, I plotted the Top 10 Exascalar points from 2007 and 2011 and then superimposed trendlines from the data of the intervening years. Whereas the trend line of the “Top” system has mostly moved up in power while zigging and zagging in efficiency, the trend of the “Top10” (computed as an average) is initially mostly dependent on power, but then bends to follow an efficiency trend. Note that the data points are plotted with a finite opacity to give a sense of “density.” (Can you tell I’m a fan of “ET”?)




This is another manifestation of the “inflection point” I wrote about in my last blog, where more innovation in efficiency will drive higher performance as time goes forward, whether in emerging Sandy Bridge, MIC, or other systems that have focused on high efficiency to achieve high performance. This analysis highlights what I think is the biggest trend in supercomputing, efficiency, while capturing the important and desired outcome, which is high performance. As my colleague and friend here at Intel, John Hengeveld, writes: “Work on efficiency is really work on efficient performance.”


What are your thoughts? Weather-report or an analysis that provides some insight?


Feel free to comment or contact me on @WinstonOnEnergy

California—because of its size, topography, and cultural diversity—sometimes seems like two states instead of one. Defined by simple geography, with the dividing line somewhere just south of San Francisco (please no hate mail on this; I had to draw the line someplace), Californians consider themselves from either Northern California (NoCal) or Southern California (SoCal).


As you might expect, each area has one large city that tends to embody the culture of the larger region. NoCal is generally represented by our beautiful City by the Bay, San Francisco. Similarly, the large and vibrant metropolis of Los Angeles is the first one that comes to mind for SoCal.


Primarily because my sons live there, I’m partial to the area of LA viewed as being “outside the 405.” The 405 is the San Diego Freeway, which separates beach communities like Santa Monica, Venice, and Marina del Rey from the larger LA metro area. As you might imagine—due to heavy traffic and a freeway network never designed for the level of use it gets—going anywhere on the 405 by car can be a bit daunting.


I use the 405 to establish context in my latest Data Center Knowledge column, where I discuss our seventh fundamental truth of cloud computing: Bandwidth and data transmission may not always be as inexpensive and unencumbered as they are today.


In the column I suggest that broadband spectrum is a shared resource on which both consumers and businesses depend—with very different expectations. I elaborate on Telco 2.0 efforts and include a graph showing that switched voice growth, the historic cash cow of telecommunications companies, is flat. Conversely, growth in the bandwidth requirements of mobile handheld devices, mobile PCs, and tablets is exploding.


Telco 2.0 activities suggest that the classic business model, which generates revenue as a factor of time vs. resources used, is likely not sustainable if we expect increased infrastructure investment. (As an aside, I recently read an interesting Cellular-News article on mobile data capacity, sent by a Telco colleague in Europe (thank you, Petar), that speaks to this as a matter of consumption trends.)


I hope you find this column interesting, since I believe bandwidth considerations are one of the most challenging—and ignored—elements of a viable cloud ecosystem (beyond government policy, of course). I welcome your feedback, so please join the discussion. You’re welcome to contact me via Twitter.

It’s clear that the two biggest buzzwords of 2012 are “Hadoop” and “Big Data” (maybe even buzzier than “cloud”). I keep hearing that they are taking over the world, and that relational databases are so “yesterday.”


I have no disagreement with the explosion of unstructured data.  I also think that open-source capabilities like Hadoop are allowing for unprecedented levels of innovation.  I just think it’s quite a leap to decide that all of the various existing analytic tools are dead, and that no one is doing predictive or real-time analytics today.  Likewise, the death of scale-up configurations is widely exaggerated.


First, I’d like to offer that real-time analytics frequently involves in-memory processing to meet response-time requirements.  Today, and for the near term, that often implies larger SMP-type configurations such as those employed by SAP HANA.  Many think that the evolution to next-generation NVRAM server memory technology will redefine server configurations as we know them today, particularly if it succeeds in competing with DRAM and enabling much larger memory configurations at much lower price points. This could revolutionize how servers that handle data are configured, both in the amount of memory and in its non-volatility.


Second, precisely because Hadoop is open source, it makes sense that existing analytics suppliers are moving to incorporate many of its key features into existing products, as Greenplum has done with MapReduce. Further, key players like Oracle are now offering Big Data appliances embracing both Hadoop and NoSQL, separately or in conjunction with other offerings.  Even IBM, with arguably the best analytics portfolio in existence today, offers InfoSphere BigInsights Enterprise Edition, which delivers Hadoop (with HDFS and MapReduce) integrated with popular offerings such as Netezza and DB2.  Predictive analytics also exist from companies like SAS, in addition to Bayesian modeling from specialty providers.


Now, on one level this is capitalism at its best, allowing for the integration of open source with existing (for a price) products, while taking full advantage of open innovation.  On another level, it is an acknowledgement that unless you are a pure internet company (a la Facebook, Google, YouTube), you have a variety of data that might also originate in the physical world, and occasionally would like to connect to actual transactional data.  It also acknowledges that predictive analytics exist today, and the capability can be adapted and applied to unstructured data, while adding installation, management, security and support, and connecting to other warehouses and databases.


I think it’s extremely premature to categorize the Hadoop world as separate from everything that has gone before it.  It is naïve to believe that previously developed expertise has no value and that existing suppliers won’t evolve to integrate and combine best-of-breed solutions incorporating all data types. Likewise, configurations will continue to evolve with new capabilities.


I am personally thrilled to see the excitement and energy that is driving new data types, real time and predictive analytics, and automation.  I can only hope that credit card fraud detection becomes much more sophisticated!



Have something to say or share?  Feel free to find me on Twitter @panist or comment below.

Data Center Energy: Past, Present and Future (Part Two)

Nothing Is Certain but Death and Taxes—Even in the Data Center

This article is part two of a three-part series on energy management in the data center. (See parts one and three.)

Last week, I wrote about the past approaches to data center power management and the state of rising inefficiency. I presented a case for more accurately assessing current power consumption and explained why past approaches for calculating power requirements or manually measuring power were insufficient for establishing proactive energy-management policies.

In this article, I will review some of the current trends that make it imperative for data center teams to consider a more holistic approach to monitoring and controlling temperature, airflow and power in the data center. These trends have been widely reported in the industry or observed first-hand in data centers around the world that are owned and operated by our customers and our partners’ customers.

The good news is that there are many trends and innovations that make it possible to drive up conservation even within the largest sites and facilities. Before we talk about next steps, consider the trends in this article and ask yourself at least some of the questions raised in each section.

Trend: Virtualization and Consolidation

Originally, virtualization initiatives delivered on their promises of reducing operating costs, improving service delivery and contributing to a more scalable data center model; but as a power-management mechanism, VMs proved to be a crude tool that was soon overtaken by increased VM load (as VMs became a standard service). The combination of application consolidation and device consolidation/minimization has introduced larger numbers of blade servers and much higher utilization rates for those blades. As a result, power consumption per rack has increased significantly.

The higher-density racks are problematic in two ways. First, they can result in a data center exceeding the capacity of a site that was not designed for a virtualized environment. Second, they introduce hot spots in the data center that can lead to equipment failures unless air-conditioning and airflow adjustments are made. These can, in turn, increase costs.

What is the rack density in your data center? Are you maximizing the return on investment in blade servers, or are you buying more racks than you need to accommodate power requirements? Many legacy centers today, especially in countries with older power-delivery infrastructures, operate with underutilized racks and inefficient power distribution in the data center as they try to take advantage of virtualization.

Trend: Power Availability Issues

With data center compute densities on the rise, building a new facility calls for a survey of utility companies and their power-delivery limitations, rate scales and failure rates. Larger companies cannot necessarily build in the location of their choice if the local power company is unprepared for the additional energy requirements.

Besides the impact on site selection, existing facilities around the world are approaching power ceilings as utility companies struggle to keep up with demand. For example, in Japan only 2 of the country’s 54 nuclear power plants are online following the March 2011 tsunami. So far this summer, conservation efforts have worked and there have been no planned blackouts, but the possibility still looms. In other countries, it is typical that regions consume 90% of available energy on a consistent and rising basis. In Japan, a micro-industry has emerged, with companies that evaluate and report on real-time consumption of electricity by city and region.

Whether it is an infrastructure issue that impedes delivery, a production capacity limitation, or a natural disaster that takes a major supplier offline, the impact on local businesses can result in severe restrictions on their growth and ability to operate data centers to their fullest.

Does your company have a plan for operating at severely restricted levels of power, if necessary, as a result of a natural disaster? Do you have an energy-management solution in place that can help you introduce controls and lower consumption in an orderly fashion that aligns with your business priorities?

Trend: Power “Quality of Service” Becoming Mission Critical

As businesses and markets globalize and day-to-day business becomes Internet-centric, even a small power outage can be damaging to a company’s bottom line. Highly dense data centers and today’s much faster compute platforms are also able to generate surges of demand that can themselves result in damage to the data center equipment.

Being able to monitor power and proactively make adjustments in the event of a power failure has become a cost-effective practice that prolongs the life of equipment while avoiding disruptions to business. Is your data center monitored? Do you understand the power and thermal patterns that affect reliability, service continuity and cost of service delivery at your site?
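The kind of proactive monitoring described here can start as simply as polling power readings and flagging racks that breach a policy ceiling. The following is a minimal sketch under stated assumptions: the readings and threshold are hypothetical, and a real deployment would pull live data from PDUs or node-level instrumentation.

```python
# Hypothetical per-rack policy ceiling, in watts (illustrative value).
RACK_POWER_LIMIT_W = 8000

def check_racks(readings):
    """Return the racks whose measured draw exceeds the policy limit."""
    return {rack: watts for rack, watts in readings.items()
            if watts > RACK_POWER_LIMIT_W}

# Invented sample readings; a real loop would poll instrumentation.
sample = {"rack-01": 7200, "rack-02": 8350, "rack-03": 6900}
over_limit = check_racks(sample)
for rack, watts in sorted(over_limit.items()):
    print(f"{rack}: {watts} W exceeds limit; trigger power-cap policy")
```

In practice the "trigger" step would invoke power capping or workload migration rather than just printing, but the detect-then-act loop is the core of any proactive policy.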

Trend: Energy Regulations and Taxes Add to Data Center OPEX

In the U.S., the Environmental Protection Agency (EPA) is expected to introduce new Energy Star standards for the data center in the near future.

These regulatory practices and the related fines and taxes are also being adopted in other countries. In Japan, for example, the Ministry of Economy, Trade, and Industry (METI) operates its “Top Runner” program, offering guidance similar to that of the U.S. EPA’s Energy Star program, for energy-efficiency equipment selection support. METI has been working to develop data center efficiency metrics and to harmonize them with those of the U.S. DoE, U.S. EPA, U.S. CoC and The Green Grid (a global organization focusing on data center efficiency improvement). METI and the Ministry of Internal Affairs (MIC) each publish recommendations for data centers, and they plan to consolidate these recommendations into a single guideline in the next few years.

Taxing energy is being debated in various countries.[1] Japan previously taxed energy, but the tax has since been repealed. In the U.S., “cap and trade” programs are being introduced in some states.

California is receiving a lot of attention currently, as it prepares to launch the nation’s first cap-and-trade program aimed at reducing emissions.[2] This summer, the state is starting a trial run of an online auction site where 150 major “emitters” can bid on carbon allowances. Starting this fall, California will distribute annual allowances to industrial entities and factories; in 2015, the program will also apply to fuel distributors.

Foreseeably, companies will have to actively monitor and restrict power use to stay under government-mandated “for-free” levels of consumption and avoid taxes or extra purchases at higher rates. What are your company’s peak consumption rates and how much could you cut back if budgets forced conservation?


This look at power restrictions, environmental variables and taxes would be depressing if there were no remedies in sight. But power-management solutions are evolving and can put you back in control with a broad range of effective, proactive energy-monitoring and on-the-fly adjustment capabilities, some of which were hinted at in this article.

[1] Global taxes on carbon emissions/energy consumption.

[2] San Jose Mercury News, August 30, 2012, “California’s cap-and-trade program to cut emissions starts trial run” by Dana Hull.

About the Author

Jeff Klaus on data center energy efficiency

Jeffrey S. Klaus is the director of Data Center Manager (DCM) at Intel Corporation.

Jeff will conclude this series with suggestions for forward-looking data center best practices. Look for Part Three to learn how you can keep your company on a path that allows you to adjust for the real-world future of energy as it relates to—or restricts—the delivery of business infrastructures and services.

Leading article photo courtesy of 401(K) 2012

This year marked major accomplishments in the progression of Cloud computing. From introducing Intel’s Cloud Vision 2015 to celebrating the Open Data Center Alliance’s one-year anniversary, it was a huge year for Cloud. As we approach a new year, let’s look back on the biggest and most popular topics that surrounded Cloud computing in 2011.


We Read:


Usage Models and Technology by Billy Cox


Facebook + Open Data Center Alliance: What’s not to love (Or should I say like)? By Raejeanne Skillern


Cloud Computing Strategy: It’s all for Naught if the Dogs Won’t Eat by Bob Deutsche


Hybrid cloud – Are you Ready? By Billy Cox


Intel IT’s cloud road map challenges and solutions ahead by Ajay Chandramouly


Is Virtualization the “foundation” to cloud computing? By Jake Smith


Fundamental Truths of Enterprise Cloud Strategy by Bob Deutsche


How to effectively manage identity in a cloud environment by Bruno Domingues


We Watched:


In Intel’s Cloud Vision 2015, Intel and its partners explain how they are working to develop cloud computing infrastructure and architecture to meet the increasing requirements of the cloud. With Cloud Vision 2015, Intel describes plans to take the data center to the next level by delivering secure, energy-efficient hardware and services to advance the cloud over the next four years.


Deciding to move to the cloud can be a difficult process. Save hours reading the documentation and learn if configuring a private cloud is the right thing to do to manage your resources by watching Intel Cloud Builders Reference Architecture: How to Configure the Cloud. This video features a thematic animation with technical tips and live configuration samples.


We Listened:


In our chip chat on Open Data Center Usage Models with Raejeanne Skillern, we talked about how the developments of cloud computing and data center technology advancements make open standards a necessity.


We also heard from Intel’s Alan Priestley, Cloud Director for EMEA, as he discussed cloud security and the importance of building trust from Client to Cloud.


As we start the new year, we look forward to what’s to come for Cloud computing!  What are your predictions? Let us know what you think the future holds for cloud computing, and comment below!



For the latest on cloud computing news and technology follow @IntelITS

As technology progresses, energy and data center efficiency become increasingly important considerations. Not only is data center efficiency a crucial factor for a sustainable planet, but it can also shield you from the rising cost of energy. This year provided some great insight into the importance of data center efficiency. Let’s take a moment to look back on some of our content surrounding this topic.


We Read


Turning the Tide: Three Simple Imperatives for the Energy Efficient Data Center by Winston Saunders


Energy Efficient Technology beyond the Data Center: Energy Proportionality of Xeon travels to Prague by Winston Saunders


The Difference Between Energy and Power: Why It Matters to the CIO by Winston Saunders


Power and Energy Efficiency: Double Your Benefit by Winston Saunders


The Efficient Data Center: Why I’m attending the Green Grid Tech Forum by Winston Saunders


Top Green IT: Making it Happen by Winston Saunders


The Elephant in your Data Center: Inefficient Servers by Winston Saunders


We Watched:


During IDF 2011, we had a technical session where we reviewed the challenges affecting data center efficiency and the solutions available now to address these issues.


We also learned how instrumentation delivered on Intel Xeon processors can improve data center efficiency. Similarly, Intel Power Node Manager can also optimize and manage power and cooling resources in the data center.


We Listened:


To Intel® Chip Chat - Inside Facebook's New Data Center, where Facebook's Lead of Hardware Design Amir Michael and Intel's Director of Cloud Marketing discuss the impressive innovation in data center efficiency resulting from 18 months of collaboration.


Also on Intel® Chip Chat we learned about SUE: A New Way to Measure Datacenter Efficiency where Mike Patterson and Winston Saunders, who both work on datacenter efficiency at Intel, talk about developing a new measurement to go hand-in-hand with PUE (Power Usage Effectiveness) called SUE (Server Usage Effectiveness).
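For readers new to the metric mentioned in that episode, PUE is total facility energy divided by the energy delivered to IT equipment, with 1.0 as the theoretical ideal. A quick sketch of the calculation, using invented figures rather than measurements from any real facility:

```python
# PUE = total facility power / IT equipment power; 1.0 is the ideal.
def pue(total_facility_kw, it_equipment_kw):
    """Compute Power Usage Effectiveness from two load figures (same units)."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Illustrative numbers: 1500 kW at the meter, 1000 kW reaching IT gear.
print(round(pue(1500.0, 1000.0), 2))  # 1.5: half a watt of overhead per IT watt
```

SUE, as discussed in the podcast, complements PUE by looking at how effectively the servers themselves are used; its exact formulation is described in the episode.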



As each day passes, Green IT becomes a more important topic in the technology industry. Moving into 2012, what are some of your thoughts on the future of efficiency? Do you think sustainability will play a role in new innovations? How important is energy efficiency to you? Let us know what you think in your comments below.



Remember to follow @IntelITS for the latest on green technology and IT for data center efficiency!
