Although energy costs are the fastest-rising cost element in the data center, the power battle hasn’t been lost. There are still many opportunities to improve efficiency. These include cooling optimization using hot and cold aisles, increasing rack density, turning on/off machines on demand, and balancing load in the data center to optimize cooling and reduce power consumption.

 

All these opportunities can potentially be achieved with Intel® Intelligent Power Node Manager, a technology embedded into Intel chips in a select group of servers. Some of the most common scenarios where Intel Intelligent Power Node Manager can be applied beyond monitoring include:

 

  • Increasing compute density—enforcing power limits based on reported power consumption and populating racks with more servers, using power capacity that was previously stranded in the rack
  • Linking cooling to actual demand—coordinating Intel Intelligent Power Node Manager power and thermal data with data center cooling controls to help ensure that adequate, but not excessive, cooling is provided, minimizing cooling costs
  • Dynamically balancing resources—using migration tools to move workloads to racks with available power headroom, and using Intel Intelligent Power Node Manager’s power capping to help ensure the rack budget is not exceeded (see the sketch after this list)
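
As a concrete (and purely hypothetical) sketch of that last point, the placement logic can be as simple as comparing each rack's measured draw against its budget. The rack names, fields, and numbers below are illustrative only; real tooling would pull measured power from Node Manager through your management console and drive the move through your virtualization platform.

```python
# Hypothetical sketch of "dynamic balancing": place a workload on the rack with
# the most remaining power headroom, assuming per-rack budgets and measured draw
# are already collected (e.g., from Node Manager telemetry).

def pick_rack(racks, workload_watts):
    """racks: mapping of rack name -> {"budget_w": ..., "measured_w": ...}."""
    headroom = {
        name: r["budget_w"] - r["measured_w"]
        for name, r in racks.items()
        if r["budget_w"] - r["measured_w"] >= workload_watts
    }
    # Prefer the rack that keeps the most headroom after placement.
    return max(headroom, key=headroom.get) if headroom else None

racks = {
    "rack-a": {"budget_w": 4000, "measured_w": 3600},
    "rack-b": {"budget_w": 4000, "measured_w": 2900},
}
print(pick_rack(racks, workload_watts=300))  # -> rack-b
```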

In addition to these optimization scenarios, Intel Intelligent Power Node Manager can be applied to increase availability, using power capping during a power outage to reduce overall consumption, at the cost of some performance.

With the launch of the Intel® Xeon® processor E5 family, the second generation of Intel Intelligent Power Node Manager (a.k.a. Node Manager 2.0) has been released. It is designed to improve monitoring and control granularity and to allow implementation of a range of usage models, as depicted here:

PowerMaturityModel.PNG

These scenarios range from simple real-time power monitoring to integrated data center power management practices. The higher-payoff usage models require correspondingly higher investment and process maturity to deploy.

 

You don’t necessarily have to step up to the top, or even to one of the more advanced usage models. For some situations, usage model No. 1, Real-Time Server Power Monitoring, is enough, and there may be no reason to invest beyond that point.

 

In usage model No. 2 (Power Guard Rail) and No. 3 (Static Power Capping), Intel Intelligent Power Node Manager allows you to pack servers more densely in a rack by imposing a guaranteed power limit.

 

Consider this scenario: Traditionally, we take the power supply rating from the server manufacturer, e.g. 650 W, then measure real power consumption in the lab with a power meter and find that 400 W is a reasonable planning figure. Within a typical 4 kW power envelope, we would therefore populate the rack with 10 servers (4,000 W / 400 W = 10). Using Intel Intelligent Power Node Manager on the same server, measurements indicate that for a defined workload the power consumption rarely exceeds 250 W. Using that as an aggressive per-server budget, and enforcing a 4 kW global cap on the entire rack, consumption would only rarely approach the 4 kW envelope, and the Intel Intelligent Power Node Manager policy ensures it is never exceeded. In this scenario, we can populate the rack with 16 servers instead of 10 (4,000 W / 250 W = 16), an increase of 60 percent in rack density.
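
The arithmetic behind the 60 percent figure is simple enough to sanity-check in a few lines. This is just a back-of-the-envelope sketch using the numbers from the scenario above:

```python
# Rack density: lab-measured budget vs. Node Manager-enforced cap.
rack_envelope_w = 4000   # rack power envelope (4 kW)
measured_w      = 400    # lab measurement with a power meter
capped_w        = 250    # telemetry shows this is rarely exceeded for the workload

servers_traditional = rack_envelope_w // measured_w   # 10 servers
servers_capped      = rack_envelope_w // capped_w     # 16 servers

density_gain = (servers_capped - servers_traditional) / servers_traditional
print(servers_traditional, servers_capped, f"{density_gain:.0%}")  # 10 16 60%
```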

 

PowerCapping.PNG

 

The Static Power Capping usage model employs more aggressive capping. There will be some performance impact during peaks, but this should be acceptable as long as the service level agreement (SLA) is met. The effect is to increase infrastructure utilization.

 

Usage model No. 4 (Dynamic Power Capping) implements continuous capping for additional power savings.  The capping level is determined by the application performance monitor driving a power management policy.  This scheme may not be practical if the performance monitoring facility is not available.

 

For instance, in virtualized environments, where hosts run a variety of applications, it is difficult to isolate a meaningful indicator representing the application mix. For want of a better indicator, monitoring CPU utilization has been surprisingly useful in some settings. The idea is to impose a cap on a server based on the current CPU utilization in that server. The actual capping level, in watts, is derived heuristically from offline experiments with representative workload mixes, yielding energy savings of 10 to 15 percent over a daily workload cycle.
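
To make the heuristic concrete, here is a minimal sketch of a utilization-driven cap. The breakpoints and wattages are invented for illustration; in practice they would come from the offline experiments mentioned above, and the resulting cap would be pushed to each node through its power management interface.

```python
# Hypothetical mapping from observed CPU utilization to a per-server power cap.
# Thresholds and cap values are illustrative, not measured or published figures.
CAP_TABLE = [
    (0.20, 180),   # lightly loaded: cap aggressively
    (0.50, 220),
    (0.80, 260),
    (1.01, 300),   # near peak: leave ample headroom
]

def cap_for_utilization(cpu_util):
    for threshold, cap_w in CAP_TABLE:
        if cpu_util < threshold:
            return cap_w
    return CAP_TABLE[-1][1]

print(cap_for_utilization(0.35))  # -> 220 (watts)
```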

 

In usage model No. 5 (Hybrid Usages), the practical capping range is limited to about 30 percent of peak power in light configurations. If the goal is energy saving, non-operating states, such as hibernation, must be added to Intel Intelligent Power Node Manager policies.  This is possible in virtualized cloud environments that allow dynamic consolidation of workloads into a pool of active machines and the shutting down of unused machines.
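
The consolidation half of this usage model can be sketched as a simple bin-packing pass: squeeze the virtual machines onto as few hosts as capacity allows, then put the now-idle hosts into a non-operating state. The loads and capacity below are made-up values, and a real scheduler would also have to respect memory, affinity, and SLA constraints.

```python
# Hypothetical sketch: first-fit-decreasing packing of VM load onto hosts so that
# unused hosts can be hibernated or shut down to save energy.

def consolidate(vm_loads, host_capacity):
    hosts = []
    for load in sorted(vm_loads, reverse=True):
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)
                break
        else:
            hosts.append([load])   # no existing host fits; open a new one
    return hosts

vm_loads = [0.3, 0.2, 0.5, 0.1, 0.4, 0.2]   # normalized per-VM demand
active = consolidate(vm_loads, host_capacity=1.0)
print(len(active), "hosts stay active; the remaining hosts can be hibernated")
```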

 

What’s new with Intel Intelligent Power Node Manager 2.0


The following table compares the features in each version of Intel Intelligent Power Node Manager.

 

NMTableFeatureCompare.png

To put these capabilities into practice in your data center environment, you can adopt one of the solutions described under Policy-Based Power Management in the Intel® Cloud Builder library.

 

I would love to hear from you if you have a specific usage case that is not covered.

 

Best Regards!

"Cloud" has been an IT buzzword for small business for a while now. According to this 2010 Spiceworks study on SMB Cloud Adoption, 24 percent of SMB IT professionals are already using the cloud—or planning a move to it. But there's no one-size-fits-all usage model for cloud services.

 

So how do you determine whether the cloud is right for your business? A recent article in the New York Times on cloud migration outlined some considerations for small businesses considering the cloud, including:

 

Will your cloud-based applications work the way you want them to?

Online versions of software programs may offer fewer features or less flexibility. However, they may also be easier to use since configuration is minimal.

 

How much will it cost in the long run?

The pay-as-you-go model of cloud services is appealing, especially for small businesses with cash flow concerns. However, you may end up paying more over time by using cloud-based applications compared to a one-time software purchase expense. Though as any IT-savvy business owner knows, software also needs to be upgraded over time. By using a cloud-based application, you'll always have the latest and greatest tools.

 

How good is your Internet connection?

Cloud-based apps are great if you and your employees need access to tools and data from different computers and devices. However, if you do not always have fast, reliable Internet access or if you frequently transfer large files, the cloud becomes less convenient. And who knew that Internet speeds varied so widely by region? If you live in the Ocean State, you're in luck...if you're in the Gem State, not so much.

 

How safe is your data? And how much of it do you have?

Many small businesses that have embraced web-based applications like email and calendaring aren't comfortable moving all of their data storage to the cloud. And it may be cost-prohibitive to do so, depending on how much data you have.

 

I recently talked to people at a small business in Portland, Oregon, who had moved a good chunk of their IT tools to the cloud—but also recently invested in an on-premises server. They loved the accessibility of cloud-hosted applications like QuickBooks and Microsoft Office, yet found that it was much faster (and much less expensive) to store their data onsite. Take a look at the attached case study for more of their thoughts—and, of course, talk to your local Intel® Technology Provider about the best solution for your IT needs!

Download Now

 

Arsys.jpg

Arsys is a leading Spanish provider of hosted IT services with over 250,000 small and medium-sized customers across 100 countries and a core of enterprise-size customers. Its services range from hosting websites to cloud computing, managed hosting, and IT infrastructure solutions. This also includes application delivery, back-up, and systems management. The company wanted to significantly expand its cloud-based services, specifically its infrastructure-as-a-service (IaaS) offering, so its customers could benefit from lower IT costs, greater flexibility, and always-available services. It turned to a platform consisting of IBM System x3850 X5* servers powered by Intel® Xeon® processors 7500 series and the Intel® Xeon® processor E7 family.


“The development of this cloud computing platform has generated important benefits from the business perspective… [and] has allowed us to reduce power consumption by 20 percent and significantly reduce our operational costs,” said Olof Sandstrom, chief operations officer for Arsys.


For all the details, download our new Arsys business success story. As always, you can find many more like this on the Intel.com Business Success Stories for IT Managers page. And to keep up to date on the latest business success stories, follow ReferenceRoom on Twitter.

 

*Other names and brands may be claimed as the property of others.

PUE has been a hugely successful efficiency metric in quantifying the discussion of data center infrastructure efficiency. Of course, infrastructure is not the only thing in a data center, and we have proposed “SUE” as “Part B” of the data center efficiency equation to address the important aspect of compute efficiency. SUE is a similarly derived IT performance metric which is gaining traction in application.

 

Though neither metric is “perfect,” both have a low barrier for adoption and are meaningful in a big-picture perspective (so long as you don't get too hung up on gaming the metric at the expense of other important parameters). Another powerful aspect driving acceptance of PUE and SUE is that they fit easily into grammatical sentences. If your PUE is 2.0, you’re using twice the energy you need to support your current IT infrastructure. If your SUE is 2.0, you’re operating twice the number of servers you need to support your current IT workload. Both convey obvious business impact.

 

So what about the “holy grail,” data center work-efficiency?

 

There’s broad industry recognition of the issue (as Ian Bitterlin says, “it is the 1.0 that is consuming 70% of the power”), and a lot of work is going on to understand it. For instance, The Green Grid published the DCeE, a data center efficiency metric, back in 2010, based on a view toward quantifying the “useful” work output of the data center.

 

However, this sophisticated approach really has to do with application-level details and has not yet gained wide industry traction. This is partly, I believe, because of its complexity; the barrier to entry is an investment in highly granular data analysis which is more than many operators need or will support.

 

So I asked myself, “what are the alternatives?” Can we lower the barrier to entry in the way PUE and SUE have done for infrastructure and IT efficiency and define a Data Center Capital Usage Effectiveness (DCUE) taken as the ratio of two quantities with units of “Work/Energy?”

 

Well, the short answer is, we can. The starting point is the very simple idea that:

 

Work/Energy = Integrated (Server Performance * Utilization)/(Total Data Center Energy)

 

The big assumptions are: 1) statistical independence of server performance and utilization; 2) that CPU performance and utilization drive work output (a simplifying assumption that can be removed at the cost of more complexity); and 3) that network and storage efficiency are neglected (they are minority energy consumers in most data centers). Not perfect, but tractable.

 

The DCUE formula has the advantage of providing an easy entrée into the analysis of the work efficiency of the data center; it focuses on what many consider the big three: infrastructure efficiency, IT equipment efficiency, and how effectively the capital asset is being utilized (thanks to Jon Koomey for pointing that out to me).

 

Roughly, here is how the numbers work (these are made up data but are representative based on experience): Imagine a typical data center with a PUE of 2.0. If the data center is on a refresh cycle of six years its SUE will be about 2.4, and the server utilization might be about 20% in an enterprise with a low level of virtualization.

 

An efficient data center might have a PUE closer to 1.3, a more aggressive three year server refresh rate with an SUE of about 1.6, and might increase utilization to 50% with both higher rates of virtualization and perhaps utilize technology like “Cloud Bursting” to handle demand-peaks.

 

The math reveals a Data Center Capital Usage Effectiveness (DCUE) opportunity of about 6 times between the two scenarios.

 

Data Center    PUE    SUE    Utilization    DCUE
“Typical”      2.0    2.4    20%            24
“Efficient”    1.3    1.6    50%            4
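
For what it's worth, the table values fall out of reading DCUE as PUE multiplied by SUE and divided by utilization. That reading is my own shorthand for the Work/Energy ratio above, not an official definition, but it reproduces the numbers:

```python
# Sketch: DCUE taken as PUE * SUE / utilization, reproducing the table above.
def dcue(pue, sue, utilization):
    return pue * sue / utilization

typical   = dcue(2.0, 2.4, 0.20)   # ~24
efficient = dcue(1.3, 1.6, 0.50)   # ~4.2
print(round(typical), round(efficient), round(typical / efficient, 1))  # 24 4 5.8 (~6x)
```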

 

In fact, the gap could be even wider: a “Cloud” DCUE could be lower still, with more aggressive server refresh, lower PUE, and higher utilization levels, whereas typical enterprise utilizations might be even lower than assumed here.

 

My friend Mike Patterson here at Intel is always challenging me: “so… what does it mean?” Well, just as PUE and SUE represent “excess” quantities, a DCUE of 24 means you are using about 24 times the energy (and hence data center capital) you'd need at optimum efficiency. That means 24 times the data center capital. A pretty powerful argument to improve.

 

So there you have it, "The Big Three" for data center capital efficiency: 1. How efficient is your infrastructure? 2. How effective is your server compute capability? 3. What is the utilization of your capital assets?

 

In subsequent blogs, I’ll talk more about these ideas and some of the issues we still need to think about. But until then, I'm curious what you think. Right track? Wrong track? Why?


 

 

dn_full_logo_web.png


 

Sorry we missed you guys last week.  But we're back this week to make up for it!  And Marlin is back again to talk security.  Read on for more!

 

 

From network ops to CIO, what keeps you awake?  For most, it’s not whether or not they left the light on last night.  It’s the security of their systems.  Moving up the stack from the everyday user to the data center that powers the company cloud, information security and governance are not just the tip of the iceberg, they are the iceberg.

 

Want the opportunity to chat with someone who deals with these concerns every day?  This week we have Marlin Pohlman to discuss these items and more.  Marlin’s world revolves around these topics so much so that he has earned the title of Chief Governance Officer at EMC and plays a role in many cloud security organizations.

 

 

 

Read more on Marlin:

 

Marlin_Pohlman.png

Marlin Pohlman is Chief Governance Officer at EMC. In this role Marlin sets the product and service strategy for EMC's 30+ lines of business as they relate to governance, risk, and regulatory compliance.  Marlin is also Co-Chair of the Cloud Security Alliance Control Matrix WG and Co-Chair of the Shared Assessment WG, CloudAudit.  He holds a Ph.D. in computer science and an MBA in technology management, along with CISSP, CISA, CISM, CGEIT, and PMP certifications. Marlin has published five texts on regulatory compliance.

 

 

 

 

Tune in February 22 at 3:00 PM PST to listen in as Allyson Klein and Reuven Cohen chat with Marlin on governance & infosec.

 

Have questions now?  Feel free to comment here or share with @DigitalNibbles!

Please note: This blog post originally appeared as an industry perspective on Data Center Knowledge.

 

 

Among the important data center industry milestones this year is the fifth anniversary of The Green Grid, the premier international consortium for resource-efficient IT. Formed by eleven founding member companies in 2007, the organization grew rapidly and today boasts approximately 150 General and Contributing member companies and ten Board member companies. Since its formation, the organization has contributed a tremendous amount to the data center “science” of efficiency.

 

Here is just a partial list of key results and contributions made by the Green Grid so far:

 

Harmonization of the PUE metric: Prior to the Green Grid there was no agreed standard for understanding or comparing the impact of infrastructure on data center efficiency. PUE is a great example of Peter Drucker’s adage, “What gets measured gets done.” Average reported PUEs have dropped from 2.2 in a 2006 LBNL study to 1.6 in a survey of TGG members in April 2011, a 50% reduction in overhead energy. In fact, PUE was just adopted by ASHRAE (pending public review) into Std 90.1.

 

The Green Grid Energy Star Project Management Office acts as a good-faith Industry Interface to the EPA for Energy Star Rating on data centers, servers, UPS’s and storage. The work done in the Green Grid has ironed out differences of opinion between industry members and, in my opinion, improved Energy Star by making it a user-relevant measure of efficiency. Several member companies have such confidence in the Green Grid’s work they decline to make individual company responses to the EPA.

 

Data centers use a lot of water, and the Green Grid, again taking the forefront, has developed a water usage effectiveness metric, WUE, to standardize the measurement and reporting of water usage and to encourage resource efficiency.

 

The Green Grid produced highly influential “Free Cooling” tools and maps to aid in data center site selection. The Green Grid has been among the leading voices advocating the use of free “outside air” and economizers for efficient cooling of data centers. Both approaches can substantially reduce the energy consumption of data centers compared to conventional reliance on air conditioners and air handlers alone. The maps have been downloaded more than 11,000 times in the last two years.

 

A Roadmap to the Adoption of Server Power Features. Published in 2010, it is one of the most (if not the most) comprehensive analyses available of server power management capability, how it is deployed, industry perception, and barriers to adoption. Strategic in nature, the study not only recommends concrete action today, but suggests future work to enable this fundamental aspect of data center efficiency.

 

The comprehensive Data Center Maturity Model, which helps data center operators quickly assess opportunities for greater sustainability and efficiency in their data center operations. Released just a year ago, it’s a popular invited talk at international conferences, not only for the results it promises today, but for the five year roadmap it lays out for the industry.

 

The Green Grid has facilitated international agreements with Japanese and European efficiency organizations, and with groups like ASHRAE, the ODCA, and the Green500. Interested in how “containers” affect data center efficiency? There’s a Green Grid Container Task Force on that.

 

But the story doesn’t stop there. Ongoing work will quantify software efficiency and develop Productivity Proxies to help measure data center work output in standardized and user-relevant ways. Further chapters of the Data Center Design Guide will help provide guidance for those building new data centers. There are plans afoot in the Green Grid to develop IT recycling metrics, and work is starting to focus on the role data centers can play in a “Smart Grid.”

 

In all, quite a list of accomplishments. So, if you want to learn more about what the Green Grid has accomplished in the last five years, how its work has contributed value to its member base, or have an interest in shaping the next five years of this exciting industry, please plan to attend the upcoming Green Grid Forum in San Jose, CA, March 6-7, 2012.

As I described in my last blog on scale-out storage, scale-out storage is becoming synonymous with protocols such as iSCSI and NFS that can be used with regular GbE technology. Its main benefit is that it is much cheaper than a legacy SAN.

 

Assuming that you are using or looking for storage connectivity based on CSMA/CD media access control, moving to fabric unification is almost a natural evolution. In a virtual environment, where you typically have a consolidation ratio of 15-25:1, you will most likely find that, at a minimum, the total bandwidth for each host must be equivalent to a top-of-rack switch backplane, since the hypervisor contains a virtual equivalent of that switch to support 15+ virtual machines plus management and maintenance activities.

 

In this highly dense virtual environment, with 15-25+ virtual machines hosted on a single physical server, the network and storage traffic are concentrated and, in many cases, hosts must have as many as eight or more 1GbE interfaces plus two HBA interfaces to satisfy these requirements. Usually several port groups are configured to support the various networking functions and application groupings.
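
As a rough illustration of why consolidation onto 10GbE is attractive, compare the aggregate bandwidth and cable count of that legacy configuration with a pair of 10GbE ports. The 4Gb Fibre Channel HBA speed is an assumption for the sake of the example:

```python
# Per-host connectivity: eight 1GbE NICs plus two FC HBAs vs. dual 10GbE ports.
legacy_lan_gbps     = 8 * 1      # eight 1GbE interfaces
legacy_storage_gbps = 2 * 4      # two 4Gb FC HBAs (assumed speed)
legacy_cables       = 8 + 2

unified_gbps   = 2 * 10          # two 10GbE ports carrying LAN and storage traffic
unified_cables = 2

print(legacy_lan_gbps + legacy_storage_gbps, "Gb/s over", legacy_cables, "cables")
print(unified_gbps, "Gb/s over", unified_cables, "cables")
```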

 

NetworkDiagramVirtualized.png

Adopting a unified network, mixing LAN and storage traffic on the same 10GbE media, is the best way to increase flexibility: bandwidth can be allocated dynamically between storage and network, which is required for a cloud computing environment.

 

In order to achieve high availability, each interface should be connected to a different switch. Management traffic can be encapsulated in 10GbE or, if CAT5/6 cabling is already in place, management traffic can also use the 1GbE interface available on the motherboard (i.e., the LOM).

 

If you are not yet convinced of the reliability of iSCSI and NFS, or of operating system and application compatibility, or if you just can’t walk away from more than a decade of SAN investment, unified networking is also available via FCoE (Fibre Channel over Ethernet).

 

Basically, FCoE is an encapsulation of Fibre Channel frames over Ethernet networks. The key element in this technology is the switch, which must be FCoE compliant and support connections to both the SAN and the LAN. Fibre Channel frames are larger than the standard 1,500-byte Ethernet MTU used on the LAN, so the encapsulation requires jumbo frames (commonly configured with an MTU of up to 9,000 bytes) and some Ethernet protocol extensions that were agreed to by the IEEE. The complete specification is available at the International Committee for Information Technology Standards.

 

FCoEandISCSI.PNG

 

Even for a non-virtualized environment, unified networking has many benefits, simplifying data center connectivity while improving the flexibility of resource allocation. Within a few years it will be the definitive solution for virtualized environments.

 

 

 

Best Regards!

-Bruno Domingues

Right or wrong, businesses need to be profitable—at least the ones that plan to be around next year. Put another way, it’s more about “Show Me the Money” than most of us are comfortable admitting. It may sound odd, but the linked scene from the movie Jerry Maguire was the inspiration for my latest Data Center Knowledge blog. Here, I discuss my eighth and final fundamental truth of cloud computing: altruistic motives do not generally keep the lights on.

 

How we see the importance of a business being profitable is a matter of context. For example, I spent part of my career in enterprise IT shops where I was somewhat insulated from this concept. Every year, I was given a budget. In general, and beyond those elements tied to specific projects, I knew my overall budget was a percentage of overall revenue. How any of this led back to actual profit was something I honestly didn’t consider.

 

At this point in my career, and in polite company, I’m willing to speak of these times as my “blissful ignorance” phase. To be honest, though, I routinely had to resolve profit conflicts as part of contract disputes with the companies we hired to help us through numerous hardware upgrades and systems modernizations. (I was always amazed at how quickly a contractor could determine that whatever wasn’t working was absolutely not their problem.) At the end of the discussion, though, we were held accountable by the enterprise for delivery, so we got very good at resolving these disagreements.

 

As I moved beyond IT, the significance of profitability hit me the hardest when I worked for a start-up. Here, we not only had to worry about generating enough cash to make our bi-weekly payroll, but we also needed to generate enough extra revenue to convince investors that our business model was viable. Anyone who’s started a business knows the drill. And it’s here that my common-sense attitude toward profitability forever changed. (This was also when I gained weight and my hair started turning gray.)

 

So what does any of this have to do with a cloud ecosystem? In my column, I explain that every element of this ecosystem has different goals, most tied to profitability. This is the new order of things. I also explain that IT’s role in this ecosystem (assuming you have the opportunity) is to use lessons you probably learned in other areas to help reduce the pain that’s likely to be part of the new order.

 

I hope you find this blog interesting. I welcome your feedback, so please join the discussion. You’re welcome to contact me via LinkedIn or follow me on Twitter.

Sorry everyone!  Due to some unforeseen items this episode has been cancelled.  Check back soon for what's next on Digital Nibbles.

 

 

dn_full_logo_web.png


 

 

From network ops to CIO, what keeps you awake?  For most, it’s not whether or not they left the light on last night.  It’s the security of their systems.  Moving up the stack from the everyday user to the data center that powers the company cloud, information security and governance are not just the tip of the iceberg, they are the iceberg.

 

Want the opportunity to chat with someone who deals with these concerns every day?  This week we have Marlin Pohlman to discuss these items and more.  Marlin’s world revolves around these topics so much so that he has earned the title of Chief Governance Officer at EMC and plays a role in many cloud security organizations.

 

 

 

Read more on Marlin:

 

Marlin_Pohlman.png

Marlin Pohlman is Chief Governance Officer at EMC. In this role Marlin sets the product and service strategy for EMC's 30+ lines of business as they relate to governance, risk, and regulatory compliance.  Marlin is also Co-Chair of the Cloud Security Alliance Control Matrix WG and Co-Chair of the Shared Assessment WG, CloudAudit.  He holds a Ph.D. in computer science and an MBA in technology management, along with CISSP, CISA, CISM, CGEIT, and PMP certifications. Marlin has published five texts on regulatory compliance.

 

 

 

 

Tune in February 15 at 3:00 PM PST to listen in as Allyson Klein and Marlin talk governance & infosec.

 

Have questions now?  Feel free to comment here or share with @DigitalNibbles!

If you’ve read any of my previous posts, it should be pretty clear that I think 10 Gigabit Ethernet (10GbE) is where the action is today. It’s growing (over one million server ports shipped in each quarter of 2011, and an estimated 90 percent growth vs. 2010(1)), big things are happening (you’ll see 10GBASE-T LAN on Motherboard connections soon), and 10GbE adapter ports are projected to outship GbE ports in the data center in 2014(2). In a recent article, Network World called 10GbE “perhaps the hottest growth segment of data center networking.” I agree, and clearly, there’s a lot to talk about when it comes to 10GbE.

 

If you follow networking, however, you’ve probably heard some discussion of 40 Gigabit Ethernet (40GbE) and 100 Gigabit Ethernet (100GbE). Those are some big numbers. Are 40GbE and 100GbE just hype or is there a real need for this much bandwidth?

 

[For you wonks, the IEEE 802.3ba standard, which includes both 40 and 100GbE, was ratified in June 2010 and marked the first time two Ethernet speeds were defined in a single standard.]


For starters, let’s take a simplified (or maybe simplistic) look at a traditional data center architecture.

 

In a typical data center, servers (often dozens of them) connect to an access switch (or multiple switches for redundancy purposes), sometimes referred to as an “edge” switch, which resides in the lowest layer in a multi-tiered switching environment. Access switches, in turn, connect to upstream switches, typically called “aggregation” or “distribution” switches. These switches literally aggregate traffic from multiple access switches. Uplinks from an access switch to an aggregation switch often operate at a higher speed than the access switch’s base ports (the ones connecting to servers). This higher rate of speed requires fewer cables to connect the two switches, simplifying connectivity.
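
To put some illustrative numbers on the "fewer cables" point (these are not from the article, just a common sizing exercise): take a 48-port 10GbE access switch and an assumed 3:1 oversubscription ratio toward the aggregation layer, and count the uplinks needed at 10GbE versus 40GbE.

```python
# Illustrative uplink sizing for an access ("edge") switch.
import math

server_ports, port_speed_gbps = 48, 10   # 48 x 10GbE server-facing ports
oversubscription = 3                     # assumed access-to-aggregation ratio

uplink_gbps   = server_ports * port_speed_gbps / oversubscription   # 160 Gb/s
uplinks_10gbe = math.ceil(uplink_gbps / 10)                         # 16 cables
uplinks_40gbe = math.ceil(uplink_gbps / 40)                         # 4 cables

print(uplink_gbps, uplinks_10gbe, uplinks_40gbe)
```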

 

Similarly, connections from an aggregation switch to a core switch typically require the fastest available speeds. The core switch is the final aggregation point in the network and ensures traffic can get from one point to another as quickly as possible. You can think of it as the nerve center of a network.

 

These switch-to-switch links, access to aggregation and aggregation to core, are where we’ll see the bulk of 40 and 100GbE used in the near and medium term. Integrated 10GBASE-T connections on the next generation of Intel® Xeon® processor-based servers will bring 10GbE to the mainstream, and that means we’ll see many more servers connecting to the network using 10GbE. This, of course, will require more 10GbE switch ports, and getting all that traffic to the aggregation switches is going to need some big pipes – 40GbE and 100GbE pipes. Recent product announcements regarding cloud ready switches and product launches around cloud networking from companies such as Cisco are addressing these bandwidth needs.

 

And how about 40 or 100GbE server connectivity? The transition to 10GbE has been gradual (the original 802.3ae standard was published in 2002), but 10GbE is going mainstream now –  we’re even seeing systems shipping with up to four 10GbE ports. Clearly, servers need their bandwidth, and those needs will continue to grow. We anticipate a faster transition from 10GbE to 40GbE than we saw with GbE to 10GbE, and you’ll likely see 40GbE server connectivity adoption start in the next few years, coinciding with the next major server refresh.

 

40 and 100GbE are definitely for real, especially for infrastructure connectivity, and they’ll have their day. But like I said at the top, 10GbE is where the action is today – bandwidth needs are increasing, costs are coming down, and customers can choose from multiple 10GbE interface options to meet their needs. And stay tuned – there’s lots of good stuff coming this year.

 

For the latest information, follow us on Twitter: @IntelEthernet

 

 

 

1)     Crehan Research: Server-class Adapter and LOM Market, November 2011

2)     Crehan Research: Server-class Adapter and LOM - Long Range Forecast December 2011

dn_full_logo_hq.jpg

 

Everyone is clear that cloud computing is transforming data center technology, and I chat weekly about various aspects of this transformation on Digital Nibbles and Chip Chat.  It’s a rare treat to get to discuss the even broader transformation that cloud computing represents to the consumer of compute, and we were honored to have @HighTechDad (Michael Sheehan, a cloud evangelist from GoGrid who is also known as HighTechDad) chatting with @ruv (a.k.a. Reuven Cohen, cloud computing guru and my more than capable co-host) and me about how we should expect cloud services to evolve in the coming years and how these services will shape the types of computing experiences and devices we’ll be demanding.  While the time is ripe for data center innovation, there is also a massive business transformation underway, where savvy entrepreneurs are able to deliver services to broad markets of computing users in ways that were not available before.  Check out our chat and stay tuned for our next livecast @ 3pm PST February 15th.

 

 

 

Michael Sheehan is based in the San Francisco Bay Area. He is an avid technologist, blogger, social media pundit, husband, and father.  Michael writes about technology, consumer electronics, gadgets, software, hardware, parenting "hacks," and other tips & tricks.  Professionally, Michael is the Technology Evangelist for GoGrid.  To learn more, check out his about/bio page on his blog.

HTD_HeadShot.png

 

 

Want more updates?  Follow @DigitalNibbles

Download Now


cntv.jpg

China’s CNTV is a national TV broadcast organization and a global, multilingual, and multi-terminal public network video service platform. It focuses on interactive A/V content combining both the Internet and the TV network. CNTV needed to speed up average loading time for images from 70 milliseconds to 50 milliseconds to optimize user experience and improve service quality. More images require more servers, so it also needed to reduce server space requirements.


CNTV chose Intel® Xeon® processor 5600 series and Intel® Solid State Drives to double access to the image servers’ data and enable each server to accommodate three to four times the original user load.


“With outstanding data accessibility, Intel® SSDs can satisfy CNTV’s stringent requirements for image servers,” explained Bai Jian, executive director of the Operations System Department for CNTV. “Today, not only can each image server accommodate three to four times the original user loads, but data access is also faster."


Get all the details in our new CNTV business success story. As always, you can find more like this on the Intel.com Business Success Stories for IT Managers page.
